Compare commits

66 Commits

Author SHA1 Message Date
Yan Song 1c9c819942 smoke: add basic nydusify copy test
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 18:58:54 +08:00
Yan Song cf9dbdd5e1 nydusify: fix copy race issue
1. Fix lost namespace on containerd image pull context:

```
pull source image: namespace is required: failed precondition
```

2. Fix a possible semaphore Acquire race on the same context:

```
panic: semaphore: released more than held
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 18:58:54 +08:00
Yan Song 01f2bf24e6 storage: fix compatibility on fetching token for registry backend
The registry backend received an unauthorized error from the Harbor registry
when fetching the registry token via HTTP GET; the bug was introduced by
https://github.com/dragonflyoss/image-service/pull/1425/files#diff-f7ce8f265a570c66eae48c85e0f5b6f29fdaec9cf2ee2eded95810fe320d80e1L263.

We should insert the basic auth header to keep token fetching via HTTP GET
compatible.

This refers to containerd implementation: dc7dba9c20/remotes/docker/auth/fetch.go (L187)

The change has been tested for Harbor v2.9.
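
A rough illustration of the fix described above (this is not the actual nydus storage code; the `reqwest` usage and parameter names are assumptions for the sketch): the GET token request carries a basic auth header alongside the query parameters.

```rust
use base64::Engine;

// Hypothetical sketch: fetch a registry token via HTTP GET while also sending
// the basic auth header, which some registries (e.g. Harbor) require.
fn fetch_token(
    client: &reqwest::blocking::Client,
    realm: &str,
    service: &str,
    scope: &str,
    username: &str,
    password: &str,
) -> reqwest::Result<reqwest::blocking::Response> {
    let basic = base64::engine::general_purpose::STANDARD
        .encode(format!("{username}:{password}"));
    client
        .get(realm)
        .query(&[("service", service), ("scope", scope)])
        // The compatibility fix: keep the basic auth header on the GET request.
        .header("Authorization", format!("Basic {basic}"))
        .send()
}
```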

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-20 14:05:00 +08:00
lihuahua123 1d1eb7c05d storage: fix auth compatibility for registry backend
Signed-off-by: lihuahua123 <771725652@qq.com>
2023-10-20 14:05:00 +08:00
zyfjeff e8c324687a add --original-blob-ids args for merge
By default, the merge command derives the name of the original blob from the
bootstrap name; this adds a CLI argument to specify it explicitly.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 16:45:22 +08:00
zyfjeff 7833d84b17 bugfix: do not fill 0 buffer, and skip validate features
1. Resetting the buffer to 0 causes a race under concurrency.

2. Previously, the second validate_header did not actually take effect. It is
now repaired, but it turns out that the blob info features do not set the
--inline-bootstrap bit to true, so the features check is temporarily skipped.
This essentially needs to be fixed in upstream nydus-image.

Signed-off-by: zhaoshang <zhaoshangsjtu@linux.alibaba.com>
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 16:45:22 +08:00
zyfjeff 37f9af882f Support use /dev/stdin as SOURCE path for image build
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 16:45:22 +08:00
Yan Song 847725c176 docs: add nydusify copy usage
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-08-25 18:04:19 +08:00
Yan Song 5f17cff4fd nydusify: introduce copy subcommand
`nydusify copy` copies an image from a source registry to a target
registry; it also supports specifying a source backend storage.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-25 18:04:19 +08:00
David Baird f9ab2be073 Fix image-create with ACLs. Fixes #1394.
Signed-off-by: David Baird <dhbaird@gmail.com>
2023-08-17 14:16:58 +08:00
Yan Song 3faf95a1c9 storage: adjust token refresh interval automatically
- Make registry mirror log pretty;
- Adjust token refresh interval automatically;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-17 10:30:11 +08:00
Yan Song 193b7a14f2 storage: remove auth_through option for registry mirror
The auth_through option adds user burden to configure the mirror
and understand its meaning, and since we have optimized handling
of concurrent token requests, this option can now be removed.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-17 10:30:11 +08:00
Yan Song 04bc601e7e storage: implement simpler first token request
On initial startup, nydusd's registry backend generates a surge of blob requests
without auth tokens. This caused mirror backends (e.g. dragonfly) to process
requests very slowly; this commit fixes that problem.

It makes other blob requests wait for the first blob request to complete. This
ensures the first request caches a valid registry auth token, and subsequent
concurrent blob requests reuse the cached token.

This change is worthwhile to reduce concurrent token requests, and it also makes
the behavior consistent with containerd, which first requests the image manifest
and caches the token before concurrently requesting blobs.
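
A minimal sketch of the caching idea (simplified; the real storage backend caches and refreshes tokens per registry and scope rather than in a single static):

```rust
use std::sync::OnceLock;

static CACHED_TOKEN: OnceLock<String> = OnceLock::new();

// Placeholder for the real HTTP token request against the registry.
fn request_token_from_registry() -> String {
    "example-token".to_string()
}

fn get_token() -> &'static str {
    // The first caller performs the token request; concurrent callers block in
    // get_or_init() until it completes, then reuse the cached token instead of
    // flooding the registry or mirror with unauthenticated blob requests.
    CACHED_TOKEN.get_or_init(request_token_from_registry).as_str()
}
```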

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-17 10:30:11 +08:00
Qinqi Qu accf15297e deps: change tar-rs to upstream version
Since upstream tar-rs has merged our fix for reading large uids/gids from
the PAX extension, switch tar-rs back to the upstream version.

Also update the tar-rs dependency xattr to 1.0.1.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-09 17:22:04 +08:00
Xuewei Niu c9792e2dd7 deps: Bump dependent crate versions
This pull request mainly updates vm-memory and vmm-sys-util.

The affected crates include:

- vm-memory: from 0.9.0 to 0.10.0
- vmm-sys-util: from 0.10.0 to 0.11.0
- vhost: from 0.5.0 to 0.6.0
- virtio-queue: from 0.6.0 to 0.7.0
- fuse-backend-rs: from 0.10.4 to 0.10.5
- vhost-user-backend: from 0.7.0 to 0.8.0

Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2023-08-04 15:02:06 +08:00
Qinqi Qu b2376dfca7 deps: update tar-rs to handle very large uid/gid in image unpack
Update tar-rs to support reading large uids/gids from PAX extensions, fixing
very large UIDs/GIDs (>= 2097151, the USTAR tar limit) being lost in PAX-style
tars during unpack.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-04 15:01:12 +08:00
Yan Song 53c38c005a nydusify: support --with-referrer option
With this option, we can track all nydus images associated with
an OCI image. For example, in Harbor we can cascade to show nydus
images linked to an OCI image, and deleting the OCI image can also delete
the corresponding nydus images. At runtime, nydus snapshotter can also
automatically upgrade a container run from an OCI image to the nydus image.

Prior to this PR, we had enabled this feature by default. However,
it is now known that Docker Hub does not yet support Referrer.

Therefore, this option is added with the feature disabled by default,
to ensure broad compatibility with various image registries.

Fix https://github.com/dragonflyoss/image-service/issues/1363.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-04 15:00:48 +08:00
dependabot[bot] 1b66204987 dep: upgrade dependencies in /contrib/nydusify
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-04 15:00:48 +08:00
Bin Tang e7624dac7a fs: add test for filling auth
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-28 09:46:13 +08:00
Bin Tang eb102644c4 docs: introduce IMAGE_PULL_AUTH env
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-28 09:46:13 +08:00
Bin Tang d8799e6e40 nydusd: parse image pull auth from env
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-28 09:46:13 +08:00
xwb1136021767 19d5b12bb0 nydus-image: add unit test for setting default compression algorithm
Signed-off-by: xwb1136021767 <1136021767@qq.com>
2023-07-15 16:46:56 +08:00
Jiang Liu 3181b313db rafs: avoid a debug_assert related to v5 amplify io
In function RafsSuper::amplify_io(), is the next inode `ni` is
zero-sized, the debug assertion in function calculate_bio_chunk_index()
(rafs/src/metadata/layout/v5.rs) will get triggered. So zero-sized
file should be skipped by amplify_io().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-07-14 09:55:10 +08:00
ccx1024cc b6ee7bb34e fix: amplify io is too large to hold in fuse buffer (#1311)
* fix: amplify io is too large to hold in fuse buffer

The FUSE request buffer is fixed at `FUSE_KERN_BUF_SIZE * pagesize() + FUSE_HEADER_SIZE`. When amplify io is larger than that, FuseDevWriter is left with a smaller buffer, and an invalid data error is returned.

Reproduction:
    run nydusd with a 3MB amplify_io
    error from random io:
        reply error header OutHeader { len: 16, error: -5, unique: 108 }, error Custom { kind: InvalidData, error: "data out of range, available 1052656 requested 1250066" }

Details:
    size of fuse buffer = 1052656 + 16 (size of inner header) = 256 (page number) * 4096 (page size) + 4096 (fuse header)
    let amplify_io = min(user_specified, fuseWriter.available_bytes())

Resolution:
    This PR is not the best implementation, but it is independent of modifications to [fuse-backend-rs]("https://github.com/cloud-hypervisor/fuse-backend-rs").
    In the future, the evaluation of amplify_io will be replaced with [ZeroCopyWriter.available_bytes()]("https://github.com/cloud-hypervisor/fuse-backend-rs/pull/135").
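
A small sketch of the resolution above (names are illustrative; `available_bytes` stands in for the writer's remaining capacity):

```rust
// Clamp the configured amplify_io so an amplified read never exceeds what the
// FUSE reply buffer can actually hold.
fn effective_amplify_io(user_specified: u64, available_bytes: u64) -> u64 {
    user_specified.min(available_bytes)
}

fn main() {
    // 256 pages * 4096 bytes + 4096-byte FUSE header, minus the 16-byte inner header.
    let available: u64 = 256 * 4096 + 4096 - 16; // = 1052656
    // The 3MB amplify_io from the reproduction above gets clamped down.
    assert_eq!(effective_amplify_io(3 * 1024 * 1024, available), available);
}
```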

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

* feat: e2e for amplify io larger than fuse buffer

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

---------

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
Co-authored-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-13 10:23:25 +08:00
泰友 7e39a5d8f1 fix: large files broke prefetch
Files larger than 4G lead to a prefetch panic, because the max blob io
range is smaller than 4G. This PR changes the blob io max size from u32 to
u64.
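
For illustration only, the arithmetic behind the type change: a 4 GiB offset or length simply does not fit in u32.

```rust
fn main() {
    let four_gib: u64 = 4 * 1024 * 1024 * 1024;
    // u32::MAX is 4 GiB - 1, so any blob io range of 4 GiB or more overflows it,
    // while u64 has ample headroom for large-file prefetch ranges.
    assert!(u32::try_from(four_gib).is_err());
}
```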

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-13 10:23:25 +08:00
泰友 14de0912af feat: add more types of file to smoke
Including:
    * regular file with chinese name
    * regular file with long name
    * symbolic link of deleted file
    * large regular file of 13MB
    * regular file with hole at both head and tail
    * empty regular file

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-13 10:23:25 +08:00
Yiqun Leng 6b61aade61 change a new nydus image for ci test
The network is not stable when pulling the old image, which may result in
CI test failures, so use the new image instead.

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-13 10:19:52 +08:00
YanSong 4707593d3a action: fix checkout on pull_request_target
The `pull_request_target` trigger checks out the master branch code
by default, but we need to use the new PR code in the smoke test.

See: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-11 15:08:33 +08:00
泰友 b9ceb71657 dep: openssl from 0.10.48 to 0.10.55
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 15:08:33 +08:00
泰友 a613f4876f fix: deprecated docker field leads to failure of nydusify check
`NydusImage.Config.Config.ArgsEscaped` is present only for legacy compatibility
with Docker and should not be used by new image builders. Nydusify (1.6 and
above) ignores it, which is expected behavior.

This PR skips the comparison of this field in the nydusify check, which
previously led to failure.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 15:08:33 +08:00
泰友 9e266281e4 fix: merge io from same blob panic
When merging io from the same blob with different ids, an assertion breaks.
Images without blob deduplication suffer from it.

This PR removes the assertion that requires merging within the same blob index.
By design this makes sense, because different blob layers may share the same
blob file, and a continuous read from the same blob across different layers is
helpful for performance.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 15:08:33 +08:00
Jiang Liu 0dda5dd1f1 dep: upgrade base64 to v0.21
Upgrade base64 to v0.21, to avoid multiple versions of the base64
crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-11 15:08:33 +08:00
Jiang Liu dd82282391 dep: upgrade openssl to 0.10.55 to fix cve warnings
error[vulnerability]: `openssl` `X509VerifyParamRef::set_host` buffer over-read
    ┌─ /github/workspace/Cargo.lock:122:1
    │
122 │ openssl 0.10.48 registry+https://github.com/rust-lang/crates.io-index
    │ --------------------------------------------------------------------- security vulnerability detected
    │
    = ID: RUSTSEC-2023-0044
    = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0044
    = When this function was passed an empty string, `openssl` would attempt to call `strlen` on it, reading arbitrary memory until it reached a NUL byte.
    = Announcement: https://github.com/sfackler/rust-openssl/issues/1965
    = Solution: Upgrade to >=0.10.55

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-11 15:08:33 +08:00
Yiqun Leng 67a7addb15 fix incidental bugs in ci test
1. sleep for a while after restarting containerd
2. only show detailed logs when the test fails

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-10 16:52:27 +08:00
lihuahua123 a508dddd16 Nydusify: fix some bug about the subcommand mount of nydusify
- The `nydusify mount` subcommand doesn't require the `--backend-type` and `--backend-config` options when the backend is a registry.
    - To resolve this, we can get the `--backend-type` and `--backend-config` options from the docker configuration.
    - Also, the code of the checker module has been refactored so it can be reused.

Signed-off-by: lihuahua123 <771725652@qq.com>
2023-06-19 15:53:50 +08:00
Huang Jianan da501f758e builder: set the default compression algorithm for meta ci to lz4
We set the compression algorithm of meta ci to zstd by default, but there
is no option for nydus-image to configure it.

This could cause compatibility problems on nydus versions that do not
support zstd. Let's reset it to lz4 by default.
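
A hedged sketch of the idea (the enum below is illustrative, not the exact type in the builder): make lz4 the default for metadata compression so older readers stay compatible, while zstd remains an explicit opt-in.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
enum MetaCompressor {
    #[default]
    Lz4Block, // default: readable by older nydus versions
    Zstd,     // opt-in: smaller, but not understood by older readers
    None,
}

fn main() {
    assert_eq!(MetaCompressor::default(), MetaCompressor::Lz4Block);
}
```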

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
2023-06-12 09:54:46 +08:00
Jiang Liu e33e68b9cb dep: update dependency to fix a CVE warning
error[vulnerability]: Resource exhaustion vulnerability in h2 may lead to Denial of Service (DoS)
   ┌─ /github/workspace/Cargo.lock:68:1
   │
68 │ h2 0.3.13 registry+https://github.com/rust-lang/crates.io-index
   │ --------------------------------------------------------------- security vulnerability detected
   │
   = ID: RUSTSEC-2023-0034
   = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0034
   = If an attacker is able to flood the network with pairs of `HEADERS`/`RST_STREAM` frames, such that the `h2` application is not able to accept them faster than the bytes are received, the pending accept queue can grow in memory usage. Being able to do this consistently can result in excessive memory use, and eventually trigger Out Of Memory.

     This flaw is corrected in [hyperium/h2#668](https://github.com/hyperium/h2/pull/668), which restricts remote reset stream count by default.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-12 09:54:46 +08:00
Huang Jianan cf6a216f02 contrib: support nydus-overlayfs and ctr-remote on different platforms
Otherwise, the binary we compiled cannot run on other platforms such as
arm.

Signed-off-by: Huang Jianan <jnhuang@linux.alibaba.com>
2023-05-16 15:35:46 +08:00
Yan Song 04fb92c5aa action: fix smoke test for branch pattern
To match `master` and `stable/*` branches at least.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-20 18:32:45 +08:00
Yan Song 8c9054264c action: upgrade golangci-lint to v1.51.2
To resolve the panic when running golangci-lint:

```
panic: load embedded ruleguard rules: rules/rules.go:13: can't load fmt
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-17 11:43:46 +08:00
imeoer 154bbbf4c7
Merge pull request #1215 from jiangliu/v2.2-backport
Backports two bugfixes from master into stable/v2.2
2023-04-17 11:02:07 +08:00
Jiang Liu 8482792dab rafs: fix a regression caused by commit 2616fb2c05
Fix a regression caused by commit 2616fb2c05.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-14 18:04:04 +08:00
Jiang Liu 27fd2b4925 rafs: fix a possible bug in v6_dirent_size()
Function Node::v6_dirent_size() may return a wrong result when "." and
".." are not the first and second entries in the sorted dirent array.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-14 18:04:00 +08:00
imeoer 72da69cb3d
Merge pull request #1195 from jiangliu/is_present
nydus: fix a possible panic caused by SubCmdArgs::is_present()
2023-04-10 15:26:40 +08:00
imeoer 460454a635
Merge pull request #1199 from taoohong/mushu/stable/v2.2
service: add README for nydus-service
2023-04-10 10:15:04 +08:00
taohong c0293263ec service: add README for nydus-service
Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-04-07 16:51:37 +08:00
Jiang Liu 5153260d7a nydus: fix a possible panic caused by SubCmdArgs::is_present()
Fix a possible panic caused by SubCmdArgs::is_present().

Fixes: https://github.com/dragonflyoss/image-service/issues/1194

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-05 10:54:41 +08:00
Jiang Liu 41a8e11c80
Merge pull request #1191 from adamqqqplay/v2.2-backport
[backport] contrib: upgrade runc to v1.1.5
2023-03-31 16:32:22 +08:00
Qinqi Qu 5ac2a5b666 contrib: upgrade runc to v1.1.5
Runc v1.1.5 fixes three CVEs; we should upgrade to it.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-31 15:01:29 +08:00
Jiang Liu 4bcccd7ccd deny: fix cargo deny warnings related to openssl
Fix cargo deny warnings related to openssl.

https://github.com/dragonflyoss/image-service/actions/runs/4522515576/jobs/7965040490

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-31 15:01:01 +08:00
Jiang Liu d2bbd82149
Merge pull request #1171 from ccx1024cc/morgan/backport
backport fix/feature to stable 2.2
2023-03-24 22:33:24 +08:00
Qinqi Qu 3031f7573a deps: bump tempfile version to 3.4.0
Update tempfile related crates to fix https://github.com/advisories/GHSA-mc8h-8q98-g5hr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-23 18:04:25 +08:00
Yiqun Leng 3c4ceb6118 ci test: fix bug of compiling nydus-snapshotter
Since developers changed "make clear" to "make clean" in the Makefile
of nydus-snapshotter, it also needs to be updated in the CI test.
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-03-23 18:04:20 +08:00
泰友 6973d9db3e fix: ci: actions are not triggered for stable/v2.2
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-23 15:27:50 +08:00
Yan Song d885d1a25b nydusify: cleanup work directory when conversion finish
Remove the work directory to clean up the temporary image
blob data after the conversion finishes.

We should only clean up when the work directory did not exist
before; otherwise we may delete user data by mistake.
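
Sketched in Rust for illustration (nydusify itself is Go, and the flag name is made up), the guard looks like this:

```rust
use std::io;
use std::path::Path;

// Only remove the work directory if this run created it; if it already existed
// before conversion, leave it alone so user data is never deleted by mistake.
fn cleanup_work_dir(work_dir: &Path, created_by_this_run: bool) -> io::Result<()> {
    if created_by_this_run && work_dir.exists() {
        std::fs::remove_dir_all(work_dir)?;
    }
    Ok(())
}
```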

Fix: https://github.com/dragonflyoss/image-service/issues/1162

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:26:53 +08:00
Yan Song 009443b91e nydusify: fix oci media type handle
Bump nydus snapshotter to v0.7.3 and bring some fixups:

1. If the original image is already an OCI type, we should forcibly set the bootstrap layer to the OCI type.
2. We need to append a history item for the bootstrap layer to ensure history consistency, see: e5d5810851/manifest/schema1/config_builder.go (L136)

Related PR: https://github.com/containerd/nydus-snapshotter/pull/427, https://github.com/goharbor/acceleration-service/pull/119

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:26:25 +08:00
泰友 62d213e6fa rafs: fix amplify can not be skipped
``` json
{
    "device":{
        "backend":{
            "type":"registry",
            "config":{
                "readahead":false,
                "host":"dockerhub.kubekey.local",
                "repo":"dfns/alpine",
                "auth":"YWRtaw46SGFyYm9VMTIZNDU=",
                "scheme":"https",
                "skip_verify":true,
                "proxy":{
                    "fallback":false
                }
            }
        },
        "cache":{
            "type":"",
            "config":{
                "work_dir":"/var/lib/containerd-nydus/cache",
                "disable_indexed_map":false
            }
        }
    },
    "mode":"direct",
    "digest_validate":false,
    "jostats_files":true,
    "enable_xattr":true,
    "access_pattern":true,
    "latest_read_files":true,
    "batch_size":0,
    "amplify_io":0,
    "fs_prefetch":{
        "enable":false,
        "prefetch_all":false,
        "threads_count":10,
        "merging_size":131072,
        "bandwidth_rate":1048576,
        "batch_size":0,
        "amplify_io":0
    }
}
```
`{.fs_prefetch.merging_size}` is used instead of `{.amplify_io}`.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-23 15:25:58 +08:00
Yan Song 3fb31b91c2 nydusify: forcibly enabled `--oci` option when `--oci-ref` be enabled
We need to forcibly enable the `--oci` option to allow appending the
related annotation for the zran image, otherwise an error is thrown:

```
merge nydus layers: invalid label containerd.io/snapshot/nydus-ref=: invalid checksum digest format
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song 37e382c72d nydusify: fix unnecessary golang-lint error
```
golangci-lint run
Error: pkg/converter/provider/ported.go:47:64: SA1019: rCtx.ConvertSchema1 is deprecated: use Schema 2 or OCI images. (staticcheck)
	if desc.MediaType == images.MediaTypeDockerSchema1Manifest && rCtx.ConvertSchema1 {
	                                                              ^
Error: pkg/converter/provider/ported.go:20:2: SA1019: "github.com/containerd/containerd/remotes/docker/schema1" is deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1. (staticcheck)
	"github.com/containerd/containerd/remotes/docker/schema1"
	^
```

Disable the check; it's unnecessary to lint the ported code.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song b60e92ae6a nydusify: fix `--oci` option for convert subcommand
The `--oci` option was not working because we had inverted it before;
this patch fixes it and keeps compatibility with the old
`--docker-v2-format` option.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song da8083c550 nydusify: fix pulling all platforms of source image
We should only handle the specific platform for pulling via
`platforms.MatchComparer`; otherwise nydusify will pull the
layer data of all platforms of a source image.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song b0f5edbbc7 rafs: do not fix blob id for old bootstrap
In fact, there is no way to tell whether a separate old bootstrap file
was inlined into the blob. For example, for an old merged bootstrap,
we can't set the blob id it references as the filename, otherwise
it will break the blob table when loading rafs.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:20:06 +08:00
Yan Song a2ad16d4d2 smoke: add `--parent-bootstrap` for merge test
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:20:06 +08:00
Yan Song 7e6502711f builder: support `--parent-bootstrap` for merge
This option allows merging multiple bootstraps of upper layers with
the bootstrap of a parent image, so that we can implement the container
commit operation for nydus images.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:20:03 +08:00
Jiang Liu 115525298f
Merge pull request #1133 from jiangliu/v2.2-fix-get-compressed-size
nydus-image: fix a underflow issue in get_compressed_size()
2023-03-03 11:20:46 +08:00
Jiang Liu 6e0f69b673 nydus-image: fix a underflow issue in get_compressed_size()
Fix an underflow issue in get_compressed_size() by skipping the generation
of useless Tar/Toc headers.

Fixes: https://github.com/dragonflyoss/image-service/issues/1129

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-03 10:22:19 +08:00
458 changed files with 27889 additions and 47968 deletions


@ -1,44 +0,0 @@
## Additional Information
_The following information is very important in order to help us help you. Omitting the following details may delay your support request or cause it to receive no attention at all._
### Version of nydus being used (nydusd --version)
<!-- Example:
Version: v2.2.0
Git Commit: a38f6b8d6257af90d59880265335dd55fab07668
Build Time: 2023-03-01T10:05:57.267573846Z
Profile: release
Rustc: rustc 1.66.1 (90743e729 2023-01-10)
-->
### Version of nydus-snapshotter being used (containerd-nydus-grpc --version)
<!-- Example:
Version: v0.5.1
Revision: a4b21d7e93481b713ed5c620694e77abac637abb
Go version: go1.18.6
Build time: 2023-01-28T06:05:42
-->
### Kernel information (uname -r)
_command result: uname -r_
### GNU/Linux Distribution, if applicable (cat /etc/os-release)
_command result: cat /etc/os-release_
### containerd-nydus-grpc command line used, if applicable (ps aux | grep containerd-nydus-grpc)
```
```
### client command line used, if applicable (such as: nerdctl, docker, kubectl, ctr)
```
```
### Screenshots (if applicable)
## Details about issue


@ -1,21 +0,0 @@
## Relevant Issue (if applicable)
_If there are Issues related to this PullRequest, please list them._
## Details
_Please describe the details of PullRequest._
## Types of changes
_What types of changes does your PullRequest introduce? Put an `x` in all the boxes that apply:_
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation Update (if none of the other choices apply)
## Checklist
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have updated the documentation accordingly.
- [ ] I have added tests to cover my changes.

.github/codecov.yml

@ -1,23 +0,0 @@
coverage:
  status:
    project:
      default:
        enabled: yes
        target: auto # auto compares coverage to the previous base commit
        # adjust accordingly based on how flaky your tests are
        # this allows a 0.2% drop from the previous base commit coverage
        threshold: 0.2%
    patch: false
comment:
  layout: "reach, diff, flags, files"
  behavior: default
  require_changes: true # if true: only post the comment if coverage changes
codecov:
  require_ci_to_pass: false
  notify:
    wait_for_ci: true
# When modifying this file, please validate using
# curl -X POST --data-binary @codecov.yml https://codecov.io/validate


@ -1,250 +0,0 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};
// Use custom error types for libraries
use thiserror::Error;
#[derive(Error, Debug)]
pub enum NydusError {
    #[error("Invalid arguments: {0}")]
    InvalidArguments(String),
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently
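A small illustration of these patterns (the function and file name are made up for the example):
```rust
use anyhow::{Context, Result};
use log::{info, warn};

fn load_bootstrap(path: &str) -> Result<Vec<u8>> {
    info!("loading bootstrap from {}", path);
    let data = std::fs::read(path)
        .with_context(|| format!("failed to read bootstrap file {}", path))?;
    if data.is_empty() {
        warn!("bootstrap file {} is empty", path);
    }
    Ok(data)
}
```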
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations
## Development Guidelines
### Storage Backend Development
- When implementing new storage backends (see the hedged sketch after this list):
  - Implement the `BlobBackend` trait
  - Support timeout, retry, and connection management
  - Add configuration in the backend config structure
  - Consider proxy support for high availability
  - Implement proper error handling and logging
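A deliberately simplified, hypothetical sketch of the shape such a backend takes; the real `BlobBackend` trait in the storage crate has a richer interface, so treat the names below as placeholders.
```rust
use std::io::Result;
use std::time::Duration;

// Placeholder trait, not the real storage backend trait signature.
pub trait SimpleBlobBackend: Send + Sync {
    /// Read part of the blob identified by `blob_id` into `buf`, starting at `offset`.
    fn read_at(&self, blob_id: &str, buf: &mut [u8], offset: u64) -> Result<usize>;
}

// Backend configuration knobs mentioned in the guidelines above.
pub struct SimpleBackendConfig {
    pub timeout: Duration,         // per-request timeout
    pub retry_limit: u8,           // how many times to retry a failed request
    pub proxy_url: Option<String>, // optional proxy for high availability
}
```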
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations
- Detect and prevent supply chain attacks
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading
let config = match config_path {
    Some(path) => ConfigV2::from_file(path)?,
    None => ConfigV2::default(),
};

// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
    config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);
// Event loop management
if DAEMON_CONTROLLER.is_active() {
    DAEMON_CONTROLLER.run_loop();
}
// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.


@ -1,40 +0,0 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
curl \
git \
libssl-dev \
pkg-config \
cmake \
gcc-riscv64-linux-gnu \
g++-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update && apt-get install -y \
gcc-14 \
g++-14 \
gcc-14-riscv64-linux-gnu \
g++-14-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]


@ -1,329 +0,0 @@
name: Benchmark
on:
schedule:
# Run at 03:00 clock UTC on Monday and Wednesday
- cron: "0 03 * * 1,3"
pull_request:
paths:
- '.github/workflows/benchmark.yml'
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
nydus-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
benchmark-description:
runs-on: ubuntu-latest
steps:
- name: Description
run: |
echo "## Benchmark Environment" > $GITHUB_STEP_SUMMARY
echo "| operating system | cpu | memory " >> $GITHUB_STEP_SUMMARY
echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY
echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY
benchmark-oci:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=oci
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-oci.json
export SNAPSHOTTER=overlayfs
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: smoke/${{ matrix.image }}-oci.json
benchmark-fsversion-v5:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-5
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v5.json
benchmark-fsversion-v6:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-6
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v6.json
benchmark-zran:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=zran
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: smoke/${{ matrix.image }}-zran.json
benchmark-result:
runs-on: ubuntu-latest
needs: [benchmark-oci, benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download benchmark-oci
uses: actions/download-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v5
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v6
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-zran
uses: actions/download-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: benchmark-result
- name: Benchmark Summary
run: |
case ${{matrix.image}} in
"wordpress")
echo "### workload: wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"node")
echo "### workload: node index.js; wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"python")
echo "### workload: python -c 'print("hello")'" > $GITHUB_STEP_SUMMARY
;;
"golang")
echo "### workload: go run main.go" > $GITHUB_STEP_SUMMARY
;;
"ruby")
echo "### workload: ruby -e "puts \"hello\""" > $GITHUB_STEP_SUMMARY
;;
"amazoncorretto")
echo "### workload: javac Main.java; java Main" > $GITHUB_STEP_SUMMARY
;;
esac
cd benchmark-result
metric_files=(
"${{ matrix.image }}-oci.json"
"${{ matrix.image }}-fsversion-v5.json"
"${{ matrix.image }}-fsversion-v6.json"
"${{ matrix.image }}-zran.json"
)
echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |convert-time(s)|" >> $GITHUB_STEP_SUMMARY
echo "|:-------------|:-----------:|:----------:|:---------------:|:--------------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for file in "${metric_files[@]}"; do
name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/')
data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' "$file" | \
awk '{ printf "%.2f | %.0f | %.2f | %.2f | %.2f", $1, $2, $3, $4, $5 }')
echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY
done


@ -18,18 +18,26 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Setup Golang - name: Setup Golang
uses: actions/setup-go@v5 uses: actions/setup-go@v3
with: with:
go-version-file: 'go.work' go-version: ~1.18
cache-dependency-path: "**/*.sum" - name: Golang Cache
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
- name: Build Contrib - name: Build Contrib
run: | run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0 curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.51.2
make -e DOCKER=false nydusify-release make -e DOCKER=false nydusify-release
- name: Upload Nydusify - name: Upload Nydusify
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@master
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify path: contrib/nydusify/cmd/nydusify
@ -38,18 +46,17 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Rust Cache - name: Rust Cache
uses: Swatinem/rust-cache@v2 uses: Swatinem/rust-cache@v2.2.0
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus - name: Build Nydus
run: | run: |
make release rustup component add rustfmt clippy
make
- name: Upload Nydus Binaries - name: Upload Nydus Binaries
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@master
with: with:
name: nydus-artifact name: nydus-artifact
path: | path: |
@ -60,15 +67,15 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Build fsck.erofs - name: Build fsck.erofs
run: | run: |
sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev
git clone https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git git clone https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd .. cd erofs-utils && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/ sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/
- name: Upload fsck.erofs - name: Upload fsck.erofs
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@master
with: with:
name: fsck-erofs-artifact name: fsck-erofs-artifact
path: | path: |
@ -79,25 +86,25 @@ jobs:
needs: [nydusify-build, nydus-build, fsck-erofs-build] needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Login ghcr registry - name: Login ghcr registry
uses: docker/login-action@v3 uses: docker/login-action@v2
with: with:
registry: ${{ env.REGISTRY }} registry: ${{ env.REGISTRY }}
username: ${{ github.actor }} username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }} password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydus-artifact name: nydus-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydusify-artifact name: nydusify-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download fsck.erofs - name: Download fsck.erofs
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: fsck-erofs-artifact name: fsck-erofs-artifact
path: /usr/local/bin path: /usr/local/bin
@ -106,7 +113,6 @@ jobs:
sudo chmod +x /usr/local/bin/nydus* sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-zran
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-oci-ref" echo "converting $I:latest to $I:nydus-nightly-oci-ref"
ghcr_repo=${{ env.REGISTRY }}/${{ env.ORGANIZATION }} ghcr_repo=${{ env.REGISTRY }}/${{ env.ORGANIZATION }}
@ -130,8 +136,7 @@ jobs:
--oci-ref \ --oci-ref \
--source localhost:5000/$I \ --source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref \ --target localhost:5000/$I:nydus-nightly-oci-ref \
--platform linux/amd64,linux/arm64 \ --platform linux/amd64,linux/arm64
--output-json convert-zran/${I}.json
# check zran image and referenced oci image # check zran image and referenced oci image
sudo rm -rf ./tmp sudo rm -rf ./tmp
@ -139,34 +144,29 @@ jobs:
--source localhost:5000/$I \ --source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref --target localhost:5000/$I:nydus-nightly-oci-ref
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot sudo fsck.erofs -d1 output/nydus_bootstrap
sudo rm -rf ./output sudo rm -rf ./output
done done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
convert-native-v5: convert-native-v5:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build] needs: [nydusify-build, nydus-build]
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Login ghcr registry - name: Login ghcr registry
uses: docker/login-action@v3 uses: docker/login-action@v2
with: with:
registry: ${{ env.REGISTRY }} registry: ${{ env.REGISTRY }}
username: ${{ github.actor }} username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }} password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydus-artifact name: nydus-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydusify-artifact name: nydusify-artifact
path: /usr/local/bin path: /usr/local/bin
@ -174,7 +174,6 @@ jobs:
run: | run: |
sudo chmod +x /usr/local/bin/nydus* sudo chmod +x /usr/local/bin/nydus*
sudo docker run -d --restart=always -p 5000:5000 registry sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v5
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v5" echo "converting $I:latest to $I:nydus-nightly-v5"
# for pre-built images # for pre-built images
@ -183,49 +182,42 @@ jobs:
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v5 \ --target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v5 \
--fs-version 5 \ --fs-version 5 \
--platform linux/amd64,linux/arm64 --platform linux/amd64,linux/arm64
# use local registry for speed # use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \ sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \ --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5 \ --target localhost:5000/$I:nydus-nightly-v5 \
--fs-version 5 \ --fs-version 5 \
--platform linux/amd64,linux/arm64 \ --platform linux/amd64,linux/arm64
--output-json convert-native-v5/${I}.json
sudo rm -rf ./tmp sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \ sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5 --target localhost:5000/$I:nydus-nightly-v5
done done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
convert-native-v6: convert-native-v6:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build] needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Login ghcr registry - name: Login ghcr registry
uses: docker/login-action@v3 uses: docker/login-action@v2
with: with:
registry: ${{ env.REGISTRY }} registry: ${{ env.REGISTRY }}
username: ${{ github.actor }} username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }} password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydus-artifact name: nydus-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydusify-artifact name: nydusify-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download fsck.erofs - name: Download fsck.erofs
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: fsck-erofs-artifact name: fsck-erofs-artifact
path: /usr/local/bin path: /usr/local/bin
@ -234,7 +226,6 @@ jobs:
sudo chmod +x /usr/local/bin/nydus* sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6" echo "converting $I:latest to $I:nydus-nightly-v6"
# for pre-built images # for pre-built images
@ -243,147 +234,17 @@ jobs:
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6 \ --target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6 \
--fs-version 6 \ --fs-version 6 \
--platform linux/amd64,linux/arm64 --platform linux/amd64,linux/arm64
# use local registry for speed # use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \ sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \ --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6 \ --target localhost:5000/$I:nydus-nightly-v6 \
--fs-version 6 \ --fs-version 6 \
--platform linux/amd64,linux/arm64 \ --platform linux/amd64,linux/arm64
--output-json convert-native-v6/${I}.json
sudo rm -rf ./tmp sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \ sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6 --target localhost:5000/$I:nydus-nightly-v6
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot sudo fsck.erofs -d1 output/nydus_bootstrap
sudo rm -rf ./output sudo rm -rf ./output
done done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
convert-native-v6-batch:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 batch images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6-batch
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6-batch"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6-batch/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
convert-metric:
runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6, convert-native-v6-batch]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download Zran Metric
uses: actions/download-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
- name: Download V5 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
- name: Download V6 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
- name: Download V6 Batch Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
- name: Summary
run: |
echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY
echo "> Compare the size of OCI image and Nydus image."
echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|oci/nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:--------:|:------------:|:----------:|:----------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")")
zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")")
v5SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v5/${I}.json) / 1048576")")
v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")")
v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")")
v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")")
batchSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
batchTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|${batchSourceImageSize}/${batchTargetImageSize}|" >> $GITHUB_STEP_SUMMARY
done
echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY
echo "> Time elapsed to convert OCI image to Nydus image."
echo "|image name|nydus-zran|nydus-v5|nydus-v6|nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:---:|:--:|:-------:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")")
v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")")
v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")")
batchConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6-batch/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|${batchConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
done
- uses: geekyeggo/delete-artifact@v2
with:
name: '*'

.github/workflows/integration.yml (vendored, new file, 113 lines)

@ -0,0 +1,113 @@
name: Integration Test
on:
schedule:
# Do conversion every day at 00:03 clock UTC
- cron: "3 0 * * *"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch: [amd64]
fs_version: [5, 6]
branch: [master, stable/v2.1]
steps:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
- name: Setup pytest
run: |
sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3
sudo python3 -m pip install --upgrade pip
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
- name: containerd runc and crictl
run: |
sudo wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz
sudo tar zxvf ./crictl-v1.17.0-linux-amd64.tar.gz -C /usr/local/bin
sudo wget https://github.com/containerd/containerd/releases/download/v1.4.3/containerd-1.4.3-linux-amd64.tar.gz
mkdir containerd
sudo tar -zxf ./containerd-1.4.3-linux-amd64.tar.gz -C ./containerd
sudo mv ./containerd/bin/* /usr/bin/
sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64 -O /usr/bin/runc
sudo chmod +x /usr/bin/runc
- name: Set up ossutils
run: |
sudo wget https://gosspublic.alicdn.com/ossutil/1.7.13/ossutil64 -O /usr/bin/ossutil64
sudo chmod +x /usr/bin/ossutil64
- uses: actions/checkout@v3
with:
ref: ${{ matrix.branch }}
- name: Cache cargo
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.1 cross
rustup component add rustfmt clippy
make -e RUST_TARGET=$RUST_TARGET -e CARGO=cross static-release
make release -C contrib/nydus-backend-proxy/
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
pwd
ls -lh target/$RUST_TARGET/release
- name: Set up anchor file
env:
OSS_AK_ID: ${{ secrets.OSS_TEST_AK_ID }}
OSS_AK_SEC: ${{ secrets.OSS_TEST_AK_SECRET }}
FS_VERSION: ${{ matrix.fs_version }}
run: |
sudo mkdir -p /home/runner/nydus-test-workspace
sudo mkdir -p /home/runner/nydus-test-workspace/proxy_blobs
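# anchor_conf.json points the nydus-test harness at the workspace, local registry, OSS backend and binaries under test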
sudo cat > /home/runner/work/image-service/image-service/contrib/nydus-test/anchor_conf.json << EOF
{
"workspace": "/home/runner/nydus-test-workspace",
"nydus_project": "/home/runner/work/image-service/image-service",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "localhost:5000",
"registry_namespace": "",
"registry_auth": "YOURAUTH==",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/home/runner/nydus-test-workspace/proxy_blobs"
},
"oss": {
"endpoint": "oss-cn-beijing.aliyuncs.com",
"ak_id": "$OSS_AK_ID",
"ak_secret": "$OSS_AK_SEC",
"bucket": "nydus-ci"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd",
"ossutil_bin": "/usr/bin/ossutil64"
},
"fs_version": "$FS_VERSION",
"logging_file": "stderr",
"target": "musl"
}
EOF
- name: run e2e tests
run: |
cd /home/runner/work/image-service/image-service/contrib/nydus-test
sudo mkdir -p /blobdir
sudo python3 nydus_test_config.py --dist fs_structure.yaml
sudo pytest -vs -x --durations=0 functional-test/test_api.py functional-test/test_nydus.py functional-test/test_layered_image.py


@ -1,45 +0,0 @@
name: Miri Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
nydus-unit-test-with-miri:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Install Miri
run: |
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
- name: Unit Test with Miri
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log


@ -19,60 +19,28 @@ jobs:
matrix: matrix:
arch: [amd64, arm64, ppc64le, riscv64] arch: [amd64, arm64, ppc64le, riscv64]
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v2
- name: Cache cargo - name: Cache cargo
uses: Swatinem/rust-cache@v2 uses: Swatinem/rust-cache@v1
with: with:
target-dir: |
./target
cache-on-failure: true cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }} key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1 - name: Build nydus-rs
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build nydus-rs Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name : Build Nydus-rs RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: | run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu") declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]} RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.4 cross
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image . sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl . sudo mv target/$RUST_TARGET/release/nydusctl .
sudo cp -r misc/configs . sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/ sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v2
with: with:
name: nydus-artifacts-linux-${{ matrix.arch }} name: nydus-artifacts-linux-${{ matrix.arch }}
path: | path: |
@ -82,33 +50,27 @@ jobs:
configs configs
nydus-macos: nydus-macos:
runs-on: macos-13 runs-on: macos-11
strategy: strategy:
matrix: matrix:
arch: [amd64, arm64] arch: [amd64, arm64]
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v2
- name: Cache cargo - name: Cache cargo
uses: Swatinem/rust-cache@v2 uses: Swatinem/rust-cache@v1
with: with:
target-dir: |
./target
cache-on-failure: true cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }} key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: build - name: build
run: | run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then rustup component add rustfmt clippy
RUST_TARGET="x86_64-apple-darwin" make -e INSTALL_DIR_PREFIX=. install
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo cp -r misc/configs . sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/ sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v2
with: with:
name: nydus-artifacts-darwin-${{ matrix.arch }} name: nydus-artifacts-darwin-${{ matrix.arch }}
path: | path: |
@ -125,22 +87,29 @@ jobs:
env: env:
DOCKER: false DOCKER: false
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v2
- name: Setup Golang - uses: actions/setup-go@v2
uses: actions/setup-go@v5
with: with:
go-version-file: 'go.work' go-version: '1.18'
cache-dependency-path: "**/*.sum" - name: cache go mod
uses: actions/cache@v2
with:
path: /go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
- name: build contrib go components - name: build contrib go components
run: | run: |
make -e GOARCH=${{ matrix.arch }} contrib-release make -e GOARCH=${{ matrix.arch }} contrib-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/nydusify/cmd/nydusify . sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs . sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v2
with: with:
name: nydus-artifacts-linux-${{ matrix.arch }}-contrib name: nydus-artifacts-linux-${{ matrix.arch }}
path: | path: |
ctr-remote
nydusify nydusify
nydus-overlayfs nydus-overlayfs
containerd-nydus-grpc containerd-nydus-grpc
@ -154,41 +123,7 @@ jobs:
needs: [nydus-linux, contrib-linux] needs: [nydus-linux, contrib-linux]
steps: steps:
- name: download artifacts - name: download artifacts
uses: actions/download-artifact@v4 uses: actions/download-artifact@v2
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare release tarball
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="nydus-static-$tag-${{ matrix.os }}-${{ matrix.arch }}.tgz"
chmod +x nydus-static/*
tar cf - nydus-static | gzip > ${tarball}
echo "tarball=${tarball}" >> $GITHUB_ENV
shasum="$tarball.sha256sum"
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
# use a seperate job for darwin because github action if: condition cannot handle && properly.
prepare-tarball-darwin:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64]
os: [darwin]
needs: [nydus-macos]
steps:
- name: download artifacts
uses: actions/download-artifact@v4
with: with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }} name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static path: nydus-static
@ -204,9 +139,42 @@ jobs:
sha256sum $tarball > $shasum sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v2
with: with:
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }} name: nydus-release-tarball
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
# use a seperate job for darwin because github action if: condition cannot handle && properly.
prepare-tarball-darwin:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64]
os: [darwin]
needs: [nydus-macos]
steps:
- name: download artifacts
uses: actions/download-artifact@v2
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static
- name: prepare release tarball
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="nydus-static-$tag-${{ matrix.os }}-${{ matrix.arch }}.tgz"
chmod +x nydus-static/*
tar cf - nydus-static | gzip > ${tarball}
echo "tarball=${tarball}" >> $GITHUB_ENV
shasum="$tarball.sha256sum"
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: nydus-release-tarball
path: | path: |
${{ env.tarball }} ${{ env.tarball }}
${{ env.tarball_shasum }} ${{ env.tarball_shasum }}
@ -216,10 +184,9 @@ jobs:
needs: [prepare-tarball-linux, prepare-tarball-darwin] needs: [prepare-tarball-linux, prepare-tarball-darwin]
steps: steps:
- name: download artifacts - name: download artifacts
uses: actions/download-artifact@v4 uses: actions/download-artifact@v2
with: with:
pattern: nydus-release-tarball-* name: nydus-release-tarball
merge-multiple: true
path: nydus-tarball path: nydus-tarball
- name: prepare release env - name: prepare release env
run: | run: |
@ -239,87 +206,3 @@ jobs:
generate_release_notes: true generate_release_notes: true
files: | files: |
${{ env.tarballs }} ${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
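# collect "<sha256> <artifact name>" pairs from GoReleaser's artifact list and base64-encode them; the provenance job below consumes them as SLSA subjects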
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true


@ -18,208 +18,105 @@ env:
jobs: jobs:
contrib-build: contrib-build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Setup Golang - name: Setup Golang
uses: actions/setup-go@v5 uses: actions/setup-go@v3
with: with:
go-version-file: 'go.work' go-version: ~1.18
cache-dependency-path: "**/*.sum" - name: Golang Cache
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
- name: Build Contrib - name: Build Contrib
run: | run: |
make -e DOCKER=false GOARCH=${{ matrix.arch }} contrib-release curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
make -e DOCKER=false nydusify-release
make -e DOCKER=false contrib-test
- name: Upload Nydusify - name: Upload Nydusify
if: matrix.arch == 'amd64' uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd path: contrib/nydusify/cmd
contrib-lint:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- path: contrib/nydusify
- path: contrib/nydus-overlayfs
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache: false
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
nydus-build: nydus-build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Rust Cache - name: Rust Cache
uses: Swatinem/rust-cache@v2 uses: Swatinem/rust-cache@v2.2.0
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }} - name: Build Nydus
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: | run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml) rustup component add rustfmt clippy
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION" make
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build Nydus Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries - name: Upload Nydus Binaries
if: matrix.arch == 'amd64' uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: | path: |
nydus-image target/release/nydus-image
nydusd target/release/nydusd
nydusd-build-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
nydus-integration-test: nydus-integration-test:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: [contrib-build, nydus-build] needs: [contrib-build, nydus-build]
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Docker Cache - name: Docker Cache
uses: jpribyl/action-docker-layer-caching@v0.1.0 uses: jpribyl/action-docker-layer-caching@v0.1.0
continue-on-error: true continue-on-error: true
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydus-artifact name: nydus-artifact
path: | path: |
target/release target/release
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@v4 uses: actions/download-artifact@master
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd path: contrib/nydusify/cmd
- name: Prepare Older Binaries - name: Prepare Older Binaries
id: prepare-binaries id: prepare-binaries
run: | run: |
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name') versions=(v0.1.0 v2.1.4)
version_archs=(v0.1.0-x86_64 v2.1.4-linux-amd64)
versions=(v0.1.0 ${NYDUS_STABLE_VERSION})
version_archs=(v0.1.0-x86_64 ${NYDUS_STABLE_VERSION}-linux-amd64)
for i in ${!versions[@]}; do for i in ${!versions[@]}; do
version=${versions[$i]} version=${versions[$i]}
version_arch=${version_archs[$i]} version_arch=${version_archs[$i]}
wget -q https://github.com/dragonflyoss/nydus/releases/download/$version/nydus-static-$version_arch.tgz wget -q https://github.com/dragonflyoss/image-service/releases/download/$version/nydus-static-$version_arch.tgz
sudo mkdir nydus-$version /usr/bin/nydus-$version sudo mkdir nydus-$version /usr/bin/nydus-$version
sudo tar xzf nydus-static-$version_arch.tgz -C nydus-$version sudo tar xzf nydus-static-$version_arch.tgz -C nydus-$version
sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/ sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/
done done
- name: Setup Golang - name: Golang Cache
uses: actions/setup-go@v5 uses: actions/cache@v3
with: with:
go-version-file: 'go.work' path: |
cache-dependency-path: "**/*.sum" ~/.cache/go-build
- name: Free Disk Space ~/go/pkg/mod
uses: jlumbroso/free-disk-space@main key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
with: restore-keys: |
# this might remove tools that are actually needed, ${{ runner.os }}-golang-
# if set to "true" but frees about 6 GB
tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Integration Test - name: Integration Test
run: | run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest
sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest
sudo bash misc/prepare.sh
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name') versions=(v0.1.0 v2.1.4 latest)
export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}" version_exports=(v0_1_0 v2_1_4 latest)
versions=(v0.1.0 ${NYDUS_STABLE_VERSION} latest)
version_exports=(v0_1_0 ${NYDUS_STABLE_VERSION_EXPORT} latest)
for i in ${!version_exports[@]}; do for i in ${!version_exports[@]}; do
version=${versions[$i]} version=${versions[$i]}
version_export=${version_exports[$i]} version_export=${version_exports[$i]}
@ -228,159 +125,26 @@ jobs:
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done done
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8 curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
sudo -E make smoke-only sudo -E make smoke-only
nydus-unit-test: nydus-unit-test:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v3
- name: Rust Cache - name: Rust Cache
uses: Swatinem/rust-cache@v2 uses: Swatinem/rust-cache@v2.2.0
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Unit Test - name: Unit Test
run: | run: |
CARGO_HOME=${HOME}/.cargo make ut
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Unit Test
run: |
make -e DOCKER=false contrib-test
- name: Upload contrib coverage file
uses: actions/upload-artifact@v4
with:
name: contrib-test-coverage-artifact
path: |
contrib/nydusify/coverage.txt
nydus-unit-test-coverage:
runs-on: ubuntu-latest
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Generate code coverage
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
- name: Upload nydus coverage file
uses: actions/upload-artifact@v4
with:
name: nydus-test-coverage-artifact
path: |
codecov.json
upload-coverage-to-codecov:
runs-on: ubuntu-latest
needs: [contrib-unit-test-coverage, nydus-unit-test-coverage]
steps:
- uses: actions/checkout@v4
- name: Download nydus coverage file
uses: actions/download-artifact@v4
with:
name: nydus-test-coverage-artifact
- name: Download contrib coverage file
uses: actions/download-artifact@v4
with:
name: contrib-test-coverage-artifact
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
files: ./codecov.json,./coverage.txt
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
fail_ci_if_error: true
nydus-cargo-deny: nydus-cargo-deny:
name: cargo-deny name: cargo-deny
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 10 timeout-minutes: 10
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- uses: EmbarkStudios/cargo-deny-action@v2 - uses: EmbarkStudios/cargo-deny-action@v1
performance-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- mode: fs-version-5
- mode: fs-version-6
- mode: zran
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh
- name: Performance Test
run: |
export PERFORMANCE_TEST_MODE=${{ matrix.mode }}
sudo -E make smoke-performance
takeover-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh takeover_test
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover


@ -1,31 +0,0 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore (vendored, 8 changes)

@ -1,14 +1,8 @@
**/target* **/target*
**/*.rs.bk **/*.rs.bk
**/.vscode /.vscode
.idea .idea
.cargo .cargo
**/.pyc **/.pyc
__pycache__ __pycache__
.DS_Store .DS_Store
go.work.sum
dist/
nydus-static/
.goreleaser.yml
metadata.db
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a


@ -1,6 +1,6 @@
## CNCF Dragonfly Nydus Adopters ## CNCF Dragonfly Nydus Adopters
A non-exhaustive list of Nydus adopters is provided below. A non-exhaustive list of containerd adopters is provided below.
Please kindly share your experience about Nydus with us and help us to improve Nydus ❤️. Please kindly share your experience about Nydus with us and help us to improve Nydus ❤️.
**_[Alibaba Cloud](https://www.alibabacloud.com)_** - Aliyun serverless image pull time drops from 20 seconds to 0.8s seconds. **_[Alibaba Cloud](https://www.alibabacloud.com)_** - Aliyun serverless image pull time drops from 20 seconds to 0.8s seconds.
@ -12,5 +12,3 @@ Please kindly share your experience about Nydus with us and help us to improve N
**_[KuaiShou](https://www.kuaishou.com)_** - Starting to deploy millions of containers with Dragonfly and Nydus. **_[KuaiShou](https://www.kuaishou.com)_** - Starting to deploy millions of containers with Dragonfly and Nydus.
**_[Yue Miao](https://www.laiyuemiao.com)_** - The startup time of micro service has been greatly improved, and reduced the network consumption. **_[Yue Miao](https://www.laiyuemiao.com)_** - The startup time of micro service has been greatly improved, and reduced the network consumption.
**_[CoreWeave](https://coreweave.com/)_** - Dramatically reduce the pull time of container image which embedded machine learning models.

Cargo.lock (generated, 2287 changes)
File diff suppressed because it is too large


@ -6,11 +6,9 @@ description = "Nydus Image Service"
authors = ["The Nydus Developers"] authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause" license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/" homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus" repository = "https://github.com/dragonflyoss/image-service"
exclude = ["contrib/", "smoke/", "tests/"] edition = "2018"
edition = "2021"
resolver = "2" resolver = "2"
build = "build.rs"
[profile.release] [profile.release]
panic = "abort" panic = "abort"
@ -33,57 +31,46 @@ path = "src/lib.rs"
[dependencies] [dependencies]
anyhow = "1" anyhow = "1"
base64 = "0.21"
clap = { version = "4.0.18", features = ["derive", "cargo"] } clap = { version = "4.0.18", features = ["derive", "cargo"] }
flexi_logger = { version = "0.25", features = ["compress"] } flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.12.0" fuse-backend-rs = "^0.10.4"
hex = "0.4.3" hex = "0.4.3"
hyper = "0.14.11" hyper = "0.14.11"
hyperlocal = "0.8.0" hyperlocal = "0.8.0"
indexmap = "1"
lazy_static = "1" lazy_static = "1"
libc = "0.2" libc = "0.2"
log = "0.4.8" log = "0.4.8"
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
mio = { version = "0.8", features = ["os-poll", "os-ext"] } mio = { version = "0.8", features = ["os-poll", "os-ext"] }
nix = "0.24.0" nix = "0.24.0"
rlimit = "0.9.0" rlimit = "0.9.0"
rusqlite = { version = "0.30.0", features = ["bundled"] }
serde = { version = "1.0.110", features = ["serde_derive", "rc"] } serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.51" serde_json = "1.0.51"
sha2 = "0.10.2"
tar = "0.4.40" tar = "0.4.40"
tokio = { version = "1.35.1", features = ["macros"] } tokio = { version = "1.24", features = ["macros"] }
vmm-sys-util = "0.11.0"
xattr = "1.0.1"
# Build static linked openssl library # Build static linked openssl library
openssl = { version = '0.10.72', features = ["vendored"] } openssl = { version = "0.10.55", features = ["vendored"] }
# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
#openssl-src = { version = "111.22" }
nydus-api = { version = "0.4.0", path = "api", features = [ nydus-api = { version = "0.2.1", path = "api", features = ["handler"] }
"error-backtrace", nydus-app = { version = "0.3.2", path = "app" }
"handler", nydus-error = { version = "0.2.3", path = "error" }
] } nydus-rafs = { version = "0.2.2", path = "rafs" }
nydus-builder = { version = "0.2.0", path = "builder" } nydus-service = { version = "0.2.0", path = "service" }
nydus-rafs = { version = "0.4.0", path = "rafs" } nydus-storage = { version = "0.6.2", path = "storage" }
nydus-service = { version = "0.4.0", path = "service", features = [ nydus-utils = { version = "0.4.1", path = "utils" }
"block-device",
] }
nydus-storage = { version = "0.7.0", path = "storage", features = [
"prefetch-rate-limit",
] }
nydus-utils = { version = "0.5.0", path = "utils" }
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true } vhost = { version = "0.6.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.15.0", optional = true } vhost-user-backend = { version = "0.8.0", optional = true }
virtio-bindings = { version = "0.1", features = [ virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
"virtio-v5_0_0", virtio-queue = { version = "0.7.0", optional = true }
], optional = true } vm-memory = { version = "0.10.0", features = ["backend-mmap"], optional = true }
virtio-queue = { version = "0.12.0", optional = true }
vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dev-dependencies]
xattr = "1.0.1"
vmm-sys-util = "0.12.1"
[features] [features]
default = [ default = [
@ -93,7 +80,6 @@ default = [
"backend-s3", "backend-s3",
"backend-http-proxy", "backend-http-proxy",
"backend-localdisk", "backend-localdisk",
"dedup",
] ]
virtiofs = [ virtiofs = [
"nydus-service/virtiofs", "nydus-service/virtiofs",
@ -102,29 +88,13 @@ virtiofs = [
"virtio-bindings", "virtio-bindings",
"virtio-queue", "virtio-queue",
"vm-memory", "vm-memory",
"vmm-sys-util",
] ]
block-nbd = ["nydus-service/block-nbd"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"] backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = [ backend-localdisk = ["nydus-storage/backend-localdisk"]
"nydus-storage/backend-localdisk",
"nydus-storage/backend-localdisk-gpt",
]
backend-oss = ["nydus-storage/backend-oss"] backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"] backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"] backend-s3 = ["nydus-storage/backend-s3"]
dedup = ["nydus-storage/dedup"]
[workspace] [workspace]
members = [ members = ["api", "app", "blobfs", "clib", "error", "rafs", "storage", "service", "utils"]
"api",
"builder",
"clib",
"rafs",
"storage",
"service",
"upgrade",
"utils",
]


@ -1,15 +0,0 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->

Makefile (120 changes)

@ -1,4 +1,4 @@
all: release all: build
all-build: build contrib-build all-build: build contrib-build
@ -15,10 +15,9 @@ INSTALL_DIR_PREFIX ?= "/usr/local/bin"
DOCKER ?= "true" DOCKER ?= "true"
CARGO ?= $(shell which cargo) CARGO ?= $(shell which cargo)
RUSTUP ?= $(shell which rustup)
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
SUDO = $(shell which sudo) SUDO = $(shell which sudo)
CARGO_COMMON ?= CARGO_COMMON ?=
EXCLUDE_PACKAGES = EXCLUDE_PACKAGES =
UNAME_M := $(shell uname -m) UNAME_M := $(shell uname -m)
@ -44,6 +43,7 @@ endif
endif endif
RUST_TARGET_STATIC ?= $(STATIC_TARGET) RUST_TARGET_STATIC ?= $(STATIC_TARGET)
CTR-REMOTE_PATH = contrib/ctr-remote
NYDUSIFY_PATH = contrib/nydusify NYDUSIFY_PATH = contrib/nydusify
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
@ -51,6 +51,12 @@ current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
env_go_path := $(shell go env GOPATH 2> /dev/null) env_go_path := $(shell go env GOPATH 2> /dev/null)
go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go") go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
# Set the env DIND_CACHE_DIR to specify a cache directory for
# docker-in-docker container, used to cache data for docker pull,
# then mitigate the impact of docker hub rate limit, for example:
# env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
# Functions # Functions
# Func: build golang target in docker # Func: build golang target in docker
@ -60,7 +66,7 @@ go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
define build_golang define build_golang
echo "Building target $@ by invoking: $(2)" echo "Building target $@ by invoking: $(2)"
if [ $(DOCKER) = "true" ]; then \ if [ $(DOCKER) = "true" ]; then \
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.21 $(2) ;\ docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.18 $(2) ;\
else \ else \
$(2) -C $(1); \ $(2) -C $(1); \
fi fi
@ -84,7 +90,7 @@ endef
@${CARGO} clean --target ${RUST_TARGET_STATIC} --release -p libz-sys @${CARGO} clean --target ${RUST_TARGET_STATIC} --release -p libz-sys
# Targets that are exposed to developers and users. # Targets that are exposed to developers and users.
build: .format build: .format .release_version
${CARGO} build $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) ${CARGO} build $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# Cargo will skip checking if it is already checked # Cargo will skip checking if it is already checked
${CARGO} clippy --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --bins --tests -- -Dwarnings --allow clippy::unnecessary_cast --allow clippy::needless_borrow ${CARGO} clippy --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --bins --tests -- -Dwarnings --allow clippy::unnecessary_cast --allow clippy::needless_borrow
@ -102,57 +108,60 @@ install: release
@sudo install -m 755 target/release/nydus-image $(INSTALL_DIR_PREFIX)/nydus-image @sudo install -m 755 target/release/nydus-image $(INSTALL_DIR_PREFIX)/nydus-image
@sudo install -m 755 target/release/nydusctl $(INSTALL_DIR_PREFIX)/nydusctl @sudo install -m 755 target/release/nydusctl $(INSTALL_DIR_PREFIX)/nydusctl
# unit test
ut: .release_version ut: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --no-fail-fast --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8 TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
# you need install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
ut-nextest: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install miri first from https://github.com/rust-lang/miri/
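# -Zmiri-disable-isolation lets tests touch the host filesystem and clock; a few tests are filtered out below, presumably because they do not run cleanly under Miri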
miri-ut-nextest: .release_version
MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install test dependencies
pre-coverage:
${CARGO} +stable install cargo-llvm-cov --locked
${RUSTUP} component add llvm-tools-preview
# print unit test coverage to console
coverage: pre-coverage
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
# write unit teset coverage to codecov.json, used for Github CI
coverage-codecov:
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
smoke-only: smoke-only:
make -C smoke test make -C smoke test
smoke-performance:
make -C smoke test-performance
smoke-benchmark:
make -C smoke test-benchmark
smoke-takeover:
make -C smoke test-takeover
smoke: release smoke-only smoke: release smoke-only
contrib-build: nydusify nydus-overlayfs docker-nydus-smoke:
docker build -t nydus-smoke --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/nydus-smoke
docker run --rm --privileged ${CARGO_BUILD_GEARS} \
-e TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) \
-v ~/.cargo:/root/.cargo \
-v $(TEST_WORKDIR_PREFIX) \
-v ${current_dir}:/nydus-rs \
nydus-smoke
contrib-release: nydusify-release nydus-overlayfs-release # TODO: Nydusify smoke has to be time consuming for a while since it relies on musl nydusd and nydus-image.
# So musl compilation must be involved.
# And docker-in-docker deployment involves image building?
docker-nydusify-smoke: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestSmoke
contrib-test: nydusify-test nydus-overlayfs-test docker-nydusify-image-test: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestDockerHubImage
contrib-lint: nydusify-lint nydus-overlayfs-lint # Run integration smoke test in docker-in-docker container. It requires some special settings,
# refer to `misc/example/README.md` for details.
docker-smoke: docker-nydus-smoke docker-nydusify-smoke
contrib-clean: nydusify-clean nydus-overlayfs-clean contrib-build: nydusify ctr-remote nydus-overlayfs
contrib-release: nydusify-release ctr-remote-release \
nydus-overlayfs-release
contrib-test: nydusify-test ctr-remote-test \
nydus-overlayfs-test
contrib-clean: nydusify-clean ctr-remote-clean \
nydus-overlayfs-clean
contrib-install: contrib-install:
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX) @sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
@sudo install -m 755 contrib/ctr-remote/bin/ctr-remote $(INSTALL_DIR_PREFIX)/ctr-remote
@sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs @sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify @sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify
@ -168,8 +177,17 @@ nydusify-test:
nydusify-clean: nydusify-clean:
$(call build_golang,${NYDUSIFY_PATH},make clean) $(call build_golang,${NYDUSIFY_PATH},make clean)
nydusify-lint: ctr-remote:
$(call build_golang,${NYDUSIFY_PATH},make lint) $(call build_golang,${CTR-REMOTE_PATH},make)
ctr-remote-release:
$(call build_golang,${CTR-REMOTE_PATH},make release)
ctr-remote-test:
$(call build_golang,${CTR-REMOTE_PATH},make test)
ctr-remote-clean:
$(call build_golang,${CTR-REMOTE_PATH},make clean)
nydus-overlayfs: nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make) $(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
@ -183,9 +201,17 @@ nydus-overlayfs-test:
nydus-overlayfs-clean: nydus-overlayfs-clean:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean) $(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean)
nydus-overlayfs-lint:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make lint)
docker-static: docker-static:
docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static
docker-example: all-static-release
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydusd misc/example
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydus-image misc/example
cp contrib/nydusify/cmd/nydusify misc/example
docker build -t nydus-rs-example misc/example
@cid=$(shell docker run --rm -t -d --privileged $(dind_cache_mount) nydus-rs-example)
@docker exec $$cid /run.sh
@EXIT_CODE=$$?
@docker rm -f $$cid
@exit $$EXIT_CODE

README.md (152 changes)

@ -1,82 +1,76 @@
[**[⬇️ Download]**](https://github.com/dragonflyoss/nydus/releases)
[**[📖 Website]**](https://nydus.dev/)
[**[☸ Quick Start (Kubernetes)**]](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md)
[**[🤓 Quick Start (nerdctl)**]](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md)
[**[❓ FAQs & Troubleshooting]**](https://github.com/dragonflyoss/nydus/wiki/FAQ)
# Nydus: Dragonfly Container Image Service # Nydus: Dragonfly Container Image Service
<p><img src="misc/logo.svg" width="170"></p> <p><img src="misc/logo.svg" width="170"></p>
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/nydus?style=flat)](https://github.com/dragonflyoss/nydus/releases) [![Release Version](https://img.shields.io/github/v/release/dragonflyoss/image-service?style=flat)](https://github.com/dragonflyoss/image-service/releases)
[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs) [![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule) [![Smoke Test](https://github.com/dragonflyoss/image-service/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/ci.yml)
[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule) [![Image Conversion](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml)
[![Release Test Daily](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml?query=event%3Aschedule) [![Release Test Daily](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml)
[![Benchmark](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml?query=event%3Aschedule) [![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
[![Coverage](https://codecov.io/gh/dragonflyoss/nydus/branch/master/graph/badge.svg)](https://codecov.io/gh/dragonflyoss/nydus) [![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/image-service?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/image-service)
## Introduction ## Introduction
Nydus implements a content-addressable file system on the RAFS format, which enhances the current OCI image specification by improving container launch speed, image space and network bandwidth efficiency, and data integrity. The nydus project implements a content-addressable filesystem on top of a RAFS format that improves the current OCI image specification, in terms of container launching speed, image space, and network bandwidth efficiency, as well as data integrity.
The following Benchmarking results demonstrate that Nydus images significantly outperform OCI images in terms of container cold startup elapsed time on Containerd, particularly as the OCI image size increases. The following benchmarking result shows the performance improvement compared with the OCI image for the container cold startup elapsed time on containerd. As the OCI image size increases, the container startup time of using Nydus image remains very short.
![Container Cold Startup](./misc/perf.jpg) ![Container Cold Startup](./misc/perf.jpg)
## Principles Nydus' key features include:
***Provide Fast, Secure And Easy Access to Data Distribution*** - Container images can be downloaded on demand in chunks for lazy pulling to boost container startup
- Chunk-based content-addressable data de-duplication to minimize storage, transmission and memory footprints
- Merged filesystem tree to optionally remove all intermediate layers
- In-kernel EROFS or FUSE filesystem together with overlayfs to provide full POSIX compatibility
- End-to-end image data integrity check, so security issues like supply chain attacks can be detected and avoided at runtime
- Compatible with the OCI artifacts spec and distribution spec, so nydus images can be stored in a regular container registry
- Native [eStargz](https://github.com/containerd/stargz-snapshotter) image support with the remote snapshotter plugin `nydus-snapshotter` for the containerd runtime
- Various container image storage backends are supported, for example Registry, NAS, Aliyun/OSS and S3
- Integrated with the CNCF incubating project Dragonfly to distribute container images in a P2P fashion and mitigate the pressure on container registries
- Capable of prefetching data blocks before user I/O hits them, reducing read latency
- Records file access patterns at runtime as an access trace/log, by which abnormal user behaviors are easily caught
- Access-trace-based prefetch table
- User I/O amplification to reduce the number of small requests to the storage backend
- **Performance**: Second-level container startup speed, millisecond-level loading speed for function-compute code packages.
- **Low Cost**: Written in the memory-safe language `Rust`; numerous optimizations help reduce memory, CPU, and network consumption.
- **Flexible**: Supports container runtimes such as [runC](https://github.com/opencontainers/runc) and [Kata](https://github.com/kata-containers), and provides [Confidential Containers](https://github.com/confidential-containers) and vulnerability scanning capabilities.
- **Security**: End-to-end data integrity check; supply chain attacks can be detected and avoided at runtime.
## Key features
- **On-demand Load**: Container images/packages are downloaded on demand in chunk units to boost startup.
- **Chunk Deduplication**: Chunk-level data de-duplication across layers or images to reduce storage, transport, and memory cost.
- **Compatible with Ecosystem**: Storage backend support for Registry, OSS, NAS, Shared Disk, and [P2P service](https://d7y.io/). Compatible with [OCI images](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-zran.md), and provides native [eStargz images](https://github.com/containerd/stargz-snapshotter) support.
- **Data Analyzability**: Records accesses for data layout optimization, prefetch, I/O amplification, and abnormal behavior detection.
- **POSIX Compatibility**: In-kernel EROFS or FUSE filesystems together with overlayfs provide full POSIX compatibility.
- **I/O optimization**: Uses a merged filesystem tree, data prefetching and user I/O amplification to reduce read latency and improve user I/O performance.
## Ecosystem
### Nydus tools
| Tool | Description |
| ---- | ----------- |
| [nydusd](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusd.md) | Nydus user-space daemon; it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Converts a single layer of an OCI container image into a nydus-format layer, generating a meta part file and a data part file respectively |
| [nydusify](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusify.md) | Pulls an OCI image, unpacks it, invokes `nydus-image create` to convert it, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`); queries the daemon's working status/metrics and configures it |
| [ctr-remote](https://github.com/dragonflyoss/image-service/tree/master/contrib/ctr-remote) | An enhanced `containerd` CLI tool enable nydus support with `containerd` ctr |
| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |
| [nydus-overlayfs](https://github.com/dragonflyoss/nydus/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount with slightly tweaked mount options, so nydus prerequisites can be passed to VM-based runtimes |
| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve a local directory as a blob backend for nydusd |
### Supported platforms
| Type | Platform | Description | Status |
| ---- | -------- | ----------- | ------ |
| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, GitHub GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage services | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support accelerated image conversion based on kinds of accelerators like Nydus, eStargz, etc. | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improves the runtime performance of Nydus images even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from a Dockerfile | ✅ |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
| Runtime | [Kubernetes](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md) | Run Nydus image using the CRI interface | ✅ |
| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Nydus Snapshotter, a containerd remote plugin to run Nydus image | ✅ |
| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
| Runtime | [KataContainers](https://github.com/kata-containers/kata-containers/blob/main/docs/design/kata-nydus-design.md) | Run Nydus image in KataContainers as a native solution | ✅ |
| Runtime | [EROFS](https://www.kernel.org/doc/html/latest/filesystems/erofs.html) | Run Nydus image directly in-kernel via EROFS for even greater performance | ✅ |
To try Nydus image service:
1. Convert an original OCI image to a nydus image and store it somewhere such as a Docker/OCI registry, NAS, Aliyun OSS or S3. This can be done directly with `nydusify` (see the sketch after this list); normal users don't have to get involved with `nydus-image`.
2. Get `nydus-snapshotter` (`containerd-nydus-grpc`) installed locally and configured properly, or install the `nydus-docker-graphdriver` plugin.
3. Operate containers in the usual ways, for example with `docker`, `nerdctl`, `crictl` or `ctr`.
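A minimal sketch of step 1, assuming `nydusify` is installed and a registry is reachable at `localhost:5000` (the image names are placeholders):

```shell
# convert an OCI image to a nydus image and push it to the target registry
nydusify convert \
  --source ubuntu:latest \
  --target localhost:5000/ubuntu:nydus-latest
```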
## Build
### Build Binary
```shell
# build debug binary
make
@ -86,36 +80,30 @@ make release
make docker-static
```
### Build Nydus Image
Convert OCIv1 image to Nydus image: [Nydusify](./docs/nydusify.md), [Acceld](https://github.com/goharbor/acceleration-service) or [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md#build-nydus-image-using-nerdctl-image-convert).
Build Nydus image from Dockerfile directly: [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md).
Build Nydus layer from various sources: [Nydus Image Builder](./docs/nydus-image.md).
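As a rough sketch of building a single layer with `nydus-image`, assuming a local source directory and default RAFS options (all paths are placeholders; see the Nydus Image Builder doc for the full option set):

```shell
# pack ./rootfs into a nydus layer: metadata goes to ./bootstrap, data blobs into ./blobs
nydus-image create \
  --bootstrap ./bootstrap \
  --blob-dir ./blobs \
  ./rootfs
```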
#### Image prefetch optimization
To further reduce container startup time, a nydus image with a prefetch list can be built using the NRI plugin (containerd >=1.7): [Container Image Optimizer](https://github.com/containerd/nydus-snapshotter/blob/main/docs/optimize_nydus_image.md)
## Run
### Quick Start
For more details on how to lazily start a container with `nydus-snapshotter` and a nydus image on Kubernetes nodes, or locally with `nerdctl` rather than CRI, please refer to [Nydus Setup](./docs/containerd-env-setup.md).
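For a quick local try-out with `nerdctl`, a hedged sketch (assuming `nydus-snapshotter` is already registered with containerd; the image reference is a placeholder for any reachable nydus image):

```shell
# pull and start a nydus image lazily through the nydus remote snapshotter
sudo nerdctl --snapshotter nydus run --rm -it localhost:5000/ubuntu:nydus-latest bash
```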
### Run Nydus Snapshotter
Nydus-snapshotter is a non-core sub-project of containerd.
Check out its code and tutorial from the [Nydus-snapshotter repository](https://github.com/containerd/nydus-snapshotter).
It works as a `containerd` remote snapshotter to help set up container rootfs with nydus images, handling the nydus image format when necessary. When running without nydus images, it is identical to containerd's built-in overlayfs snapshotter.
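A hedged sketch of wiring the snapshotter into containerd, assuming the snapshotter's default unix socket path; consult the nydus-snapshotter documentation for the authoritative configuration:

```shell
# register nydus-snapshotter as a containerd proxy plugin (default socket path assumed)
cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[proxy_plugins]
  [proxy_plugins.nydus]
    type = "snapshot"
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
EOF
sudo systemctl restart containerd
```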
### Run Nydusd Daemon
Normally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` when a container rootfs is prepared.
Run the Nydusd daemon to serve a Nydus image: [Nydusd](./docs/nydusd.md).
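When starting `nydusd` manually for debugging, a minimal FUSE invocation looks roughly like the following (paths are placeholders; see the Nydusd doc for the configuration file format and full flag list):

```shell
# mount a nydus image's bootstrap over FUSE and serve its data on demand
sudo nydusd \
  --config /etc/nydus/nydusd-config.json \
  --bootstrap ./bootstrap \
  --mountpoint /mnt/nydus \
  --log-level info
```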
### Run Nydus with in-kernel EROFS filesystem
In-kernel EROFS has been fully compatible with the RAFS v6 image format since Linux 5.16. In other words, uncompressed RAFS v6 images can be mounted over block devices since then.
@ -123,39 +111,47 @@ Since [Linux 5.19](https://lwn.net/Articles/896140), EROFS has added a new file-
Guide to running Nydus with fscache: [Nydus-fscache](./docs/nydus-fscache.md)
### Run Nydus with Dragonfly P2P system
Nydus is deeply integrated with the [Dragonfly](https://d7y.io/) P2P system, which can greatly reduce network latency and the single-point pressure on the registry server. Benchmarking results in the production environment demonstrate that using Dragonfly can reduce network latency by more than 80%. To understand the performance results and integration steps, please refer to the [nydus integration](https://d7y.io/docs/setup/integration/nydus) guide.
If you want to deploy Dragonfly and Nydus at the same time through Helm, please refer to the **[Quick Start](https://github.com/dragonflyoss/helm-charts/blob/main/INSTALL.md)**.
### Run OCI image directly with Nydus
Nydus is able to generate a tiny artifact called a `nydus zran` from an existing OCI image in a short time. This artifact can be used to accelerate container boot time without the need for a full image conversion. For more information, please see the [documentation](./docs/nydus-zran.md).
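A hedged sketch of generating such an artifact with `nydusify`, assuming the `--oci-ref` flag described in the linked document (verify the exact flags with `nydusify convert --help`; registry and image names are placeholders):

```shell
# build a nydus zran artifact that references the blobs of the existing OCI image
nydusify convert \
  --source registry.example.com/app:latest \
  --target registry.example.com/app:nydus-zran \
  --oci-ref
```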
### Run with Docker(Moby)
Nydus provides several ways to run on Docker (Moby); please refer to [Nydus Setup for Docker(Moby) Environment](./docs/docker-env-setup.md).
### Run with macOS
Nydus can also run with macfuse (a.k.a. osxfuse). For more details please read [nydus with macOS](./docs/nydus_with_macos.md).
### Run eStargz image (with lazy pulling)
The containerd remote snapshotter plugin [nydus-snapshotter](https://github.com/containerd/nydus-snapshotter) can be used to run nydus images, or to run [eStargz](https://github.com/containerd/stargz-snapshotter) images directly by appending the `--enable-stargz` command line option.
In the future, `zstd::chunked` can work in this way as well.
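For illustration, once the snapshotter is started with `--enable-stargz`, an eStargz image can be pulled and run lazily just like a nydus image; the image reference below is only an example of a publicly published eStargz test image:

```shell
sudo nerdctl --snapshotter nydus run --rm -it ghcr.io/stargz-containers/python:3.9-esgz python3 -V
```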
### Run Nydus Service
To use the key features of nydus natively in your project without preparing and invoking `nydusd` deliberately, [nydus-service](./service/README.md) helps to reuse the core services of nydus.
## Documentation
Please visit the [**Wiki**](https://github.com/dragonflyoss/nydus/wiki) or [**docs**](./docs) to learn more. There is also a very nice [Devin](https://devin.ai/)-generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus). Here are some topics you may be interested in:
- [A Nydus Tutorial for Beginners](./docs/tutorial.md)
- [Nydus Design Doc](./docs/nydus-design.md)
- Our talk on Open Infra Summit 2020: [Toward Next Generation Container Image](https://drive.google.com/file/d/1LRfLUkNxShxxWU7SKjc_50U0N9ZnGIdV/view)
- [EROFS, What Are We Doing Now For Containers?](https://static.sched.com/hosted_files/kccncosschn21/fd/EROFS_What_Are_We_Doing_Now_For_Containers.pdf)
- [The Evolution of the Nydus Image Acceleration](https://d7y.io/blog/2022/06/06/evolution-of-nydus/) \([Video](https://youtu.be/yr6CB1JN1xg)\)
- [Introduction to Nydus Image Service on In-kernel EROFS](https://static.sched.com/hosted_files/osseu2022/59/Introduction%20to%20Nydus%20Image%20Service%20on%20In-kernel%20EROFS.pdf) \([Video](https://youtu.be/2Uog-y2Gcus)\)
## Community
@ -163,7 +159,7 @@ Nydus aims to form a **vendor-neutral opensource** image distribution solution t
Questions, bug reports, technical discussions, feature requests and contributions are always welcome!
We're very pleased to hear about your use cases at any time.
Feel free to reach us via Slack or Dingtalk.
- **Slack:** [Nydus Workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)
@ -172,3 +168,5 @@ Feel free to reach us via Slack or Dingtalk.
- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11)
<img src="./misc/dingtalk.jpg" width="250" height="300"/>
- **Technical Meeting:** Every Wednesday at 06:00 UTC (Beijing, Shanghai 14:00), please see our [HackMD](https://hackmd.io/@Nydus/Bk8u2X0p9) page for more information.
@ -1,31 +1,29 @@
[package]
name = "nydus-api"
version = "0.4.0"
description = "APIs for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"

[dependencies]
libc = "0.2"
log = "0.4.8"
serde_json = "1.0.53"
toml = "0.5"
thiserror = "1.0.30"
backtrace = { version = "0.3", optional = true }
dbs-uhttp = { version = "0.3.0", optional = true }
http = { version = "0.2.1", optional = true }
lazy_static = { version = "1.4.0", optional = true }
mio = { version = "0.8", features = ["os-poll", "os-ext"], optional = true }
serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
url = { version = "2.1.1", optional = true }
[dev-dependencies]
vmm-sys-util = { version = "0.12.1" }

[features]
error-backtrace = ["backtrace"]
handler = ["dbs-uhttp", "http", "lazy_static", "mio", "url"]
File diff suppressed because it is too large
@ -1,252 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fmt::Debug;
/// Display error messages with line number, file path and optional backtrace.
pub fn make_error(
err: std::io::Error,
_raw: impl Debug,
_file: &str,
_line: u32,
) -> std::io::Error {
#[cfg(feature = "error-backtrace")]
{
if let Ok(val) = std::env::var("RUST_BACKTRACE") {
if val.trim() != "0" {
error!("Stack:\n{:?}", backtrace::Backtrace::new());
error!("Error:\n\t{:?}\n\tat {}:{}", _raw, _file, _line);
return err;
}
}
error!(
"Error:\n\t{:?}\n\tat {}:{}\n\tnote: enable `RUST_BACKTRACE=1` env to display a backtrace",
_raw, _file, _line
);
}
err
}
/// Define error macro like `x!()` or `x!(err)`.
/// Note: The `x!()` macro will convert any origin error (Os, Simple, Custom) to Custom error.
macro_rules! define_error_macro {
($fn:ident, $err:expr) => {
#[macro_export]
macro_rules! $fn {
() => {
std::io::Error::new($err.kind(), format!("{}: {}:{}", $err, file!(), line!()))
};
($raw:expr) => {
$crate::error::make_error($err, &$raw, file!(), line!())
};
}
};
}
/// Define error macro for libc error codes
macro_rules! define_libc_error_macro {
($fn:ident, $code:ident) => {
define_error_macro!($fn, std::io::Error::from_raw_os_error(libc::$code));
};
}
// TODO: Add format string support
// Add more libc error macro here if necessary
define_libc_error_macro!(einval, EINVAL);
define_libc_error_macro!(enoent, ENOENT);
define_libc_error_macro!(ebadf, EBADF);
define_libc_error_macro!(eacces, EACCES);
define_libc_error_macro!(enotdir, ENOTDIR);
define_libc_error_macro!(eisdir, EISDIR);
define_libc_error_macro!(ealready, EALREADY);
define_libc_error_macro!(enosys, ENOSYS);
define_libc_error_macro!(epipe, EPIPE);
define_libc_error_macro!(eio, EIO);
/// Return EINVAL error with formatted error message.
#[macro_export]
macro_rules! bail_einval {
($($arg:tt)*) => {{
return Err(einval!(format!($($arg)*)))
}}
}
/// Return EIO error with formatted error message.
#[macro_export]
macro_rules! bail_eio {
($($arg:tt)*) => {{
return Err(eio!(format!($($arg)*)))
}}
}
// Add more custom error macro here if necessary
define_error_macro!(last_error, std::io::Error::last_os_error());
define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
#[cfg(test)]
mod tests {
use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 {
return Err(einval!());
}
Ok(())
}
#[test]
fn test_einval() {
assert_eq!(
check_size(0x2000).unwrap_err().kind(),
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
);
}
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
}
@ -4,23 +4,15 @@
//
// SPDX-License-Identifier: Apache-2.0

use std::convert::TryInto;
use std::io;
use std::sync::mpsc::{RecvError, SendError};

use serde::Deserialize;
use serde_json::Error as SerdeError;
use thiserror::Error;

use crate::BlobCacheEntry;
/// Errors related to Metrics.
#[derive(Error, Debug)]
pub enum MetricsError {
#[error("no counter found for the metric")]
NoCounter,
#[error("failed to serialize metric: {0:?}")]
Serialize(#[source] SerdeError),
}
/// Mount a filesystem.
#[derive(Clone, Deserialize, Debug)]
@ -51,6 +43,56 @@ pub struct DaemonConf {
    pub log_level: String,
}
/// Blob cache object type for nydus/rafs bootstrap blob.
pub const BLOB_CACHE_TYPE_META_BLOB: &str = "bootstrap";
/// Blob cache object type for nydus/rafs data blob.
pub const BLOB_CACHE_TYPE_DATA_BLOB: &str = "datablob";
/// Configuration information for a cached blob.
#[derive(Debug, Deserialize, Serialize)]
pub struct BlobCacheEntry {
/// Type of blob object, bootstrap or data blob.
#[serde(rename = "type")]
pub blob_type: String,
/// Blob id.
#[serde(rename = "id")]
pub blob_id: String,
/// Configuration information to generate blob cache object.
#[serde(default, rename = "config")]
pub(crate) blob_config_legacy: Option<BlobCacheEntryConfig>,
/// Configuration information to generate blob cache object.
#[serde(default, rename = "config_v2")]
pub blob_config: Option<BlobCacheEntryConfigV2>,
/// Domain id for the blob, which is used to group cached blobs into management domains.
#[serde(default)]
pub domain_id: String,
}
impl BlobCacheEntry {
pub fn prepare_configuration_info(&mut self) -> bool {
if self.blob_config.is_none() {
if let Some(legacy) = self.blob_config_legacy.as_ref() {
match legacy.try_into() {
Err(_) => return false,
Ok(v) => self.blob_config = Some(v),
}
}
}
match self.blob_config.as_ref() {
None => false,
Some(cfg) => cfg.cache.validate() && cfg.backend.validate(),
}
}
}
/// Configuration information for a list of cached blob objects.
#[derive(Debug, Default, Deserialize, Serialize)]
pub struct BlobCacheList {
/// List of blob configuration information.
pub blobs: Vec<BlobCacheEntry>,
}
/// Identifier for cached blob objects.
///
/// Domains are used to control the blob sharing scope. All blobs associated with the same domain
@ -132,7 +174,7 @@ pub enum DaemonErrorKind {
    /// Unexpected event type.
    UnexpectedEvent(String),
    /// Can't upgrade the daemon.
    UpgradeManager(String),
    /// Unsupported requests.
    Unsupported,
}
@ -146,25 +188,25 @@ pub enum MetricsErrorKind {
    Stats(MetricsError),
}

#[derive(Error, Debug)]
#[allow(clippy::large_enum_variant)]
pub enum ApiError {
    #[error("daemon internal error: {0:?}")]
    DaemonAbnormal(DaemonErrorKind),
    #[error("daemon events error: {0}")]
    Events(String),
    #[error("metrics error: {0:?}")]
    Metrics(MetricsErrorKind),
    #[error("failed to mount filesystem: {0:?}")]
    MountFilesystem(DaemonErrorKind),
    #[error("failed to send request to the API service: {0:?}")]
    RequestSend(#[from] SendError<Option<ApiRequest>>),
    #[error("failed to parse response payload type")]
    ResponsePayloadType,
    #[error("failed to receive response from the API service: {0:?}")]
    ResponseRecv(#[from] RecvError),
    #[error("failed to wake up the daemon: {0:?}")]
    Wakeup(#[source] io::Error),
}

/// Specialized `std::result::Result` for API replies.
@ -140,7 +140,7 @@ impl EndpointHandler for MetricsFsFilesHandler {
            (Method::Get, None) => {
                let id = extract_query_part(req, "id");
                let latest_read_files = extract_query_part(req, "latest")
                    .is_some_and(|b| b.parse::<bool>().unwrap_or(false));
                let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
                Ok(convert_to_response(r, HttpError::FsFilesMetrics))
            }
@ -12,12 +12,12 @@ use dbs_uhttp::{Body, HttpServer, MediaType, Request, Response, ServerError, Sta
use http::uri::Uri;
use mio::unix::SourceFd;
use mio::{Events, Interest, Poll, Token, Waker};
use serde::Deserialize;
use url::Url;

use crate::http::{
    ApiError, ApiRequest, ApiResponse, DaemonErrorKind, ErrorMessage, HttpError, MetricsError,
    MetricsErrorKind,
};
use crate::http_endpoint_common::{
    EventsHandler, ExitHandler, MetricsBackendHandler, MetricsBlobcacheHandler, MountHandler,
@ -43,8 +43,9 @@ pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
    // right now, below way makes it easy to obtain query parts from uri.
    let http_prefix = format!("http:{}", req.uri().get_abs_path());
    let url = Url::parse(&http_prefix)
        .inspect_err(|e| {
            error!("api: can't parse request {:?}", e);
        })
        .ok()?;
@ -325,30 +326,35 @@ mod tests {
    #[test]
    fn test_http_api_routes_v1() {
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/fuse/sendfd"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/fuse/takeover"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
    }

    #[test]
    fn test_http_api_routes_v2() {
        assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
        assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
    }

    #[test]
@ -14,11 +14,11 @@ extern crate serde;
#[cfg(feature = "handler")]
#[macro_use]
extern crate lazy_static;

pub mod config;
pub use config::*;
#[macro_use]
pub mod error;
pub mod http;
pub use self::http::*;
app/CHANGELOG.md Normal file
@ -0,0 +1,14 @@
# Changelog
## [Unreleased]
### Added
### Fixed
### Deprecated
## [v0.1.0]
### Added
- Initial release
app/CODEOWNERS Normal file
@ -0,0 +1 @@
* @bergwolf @imeoer @jiangliu
app/Cargo.toml Normal file
@ -0,0 +1,24 @@
[package]
name = "nydus-app"
version = "0.3.2"
authors = ["The Nydus Developers"]
description = "Application framework for Nydus Image Service"
readme = "README.md"
repository = "https://github.com/dragonflyoss/image-service"
license = "Apache-2.0 OR BSD-3-Clause"
edition = "2018"
build = "build.rs"
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dependencies]
regex = "1.5.5"
flexi_logger = { version = "0.25", features = ["compress"] }
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive"] }
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
nydus-error = { version = "0.2", path = "../error" }
app/LICENSE Normal file
@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
app/README.md Normal file
@ -0,0 +1,57 @@
# nydus-app
The `nydus-app` crate is a collection of utilities to help create applications for the [`Nydus Image Service`](https://github.com/dragonflyoss/image-service) project, which provides:
- `struct BuildTimeInfo`: application build and version information.
- `fn dump_program_info()`: dump program build and version information.
- `fn setup_logging()`: setup logging infrastructure for application.
## Support
**Platforms**:
- x86_64
- aarch64
**Operating Systems**:
- Linux
## Usage
Add `nydus-app` as a dependency in `Cargo.toml`
```toml
[dependencies]
nydus-app = "*"
```
Then add `extern crate nydus-app;` to your crate root if needed.
## Examples
- Setup application infrastructure.
```rust
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use clap::{App, Arg};
use std::io::Result;

use nydus_app::{BuildTimeInfo, setup_logging};

fn main() -> Result<()> {
    let (bti_string, build_info) = BuildTimeInfo::dump();
    let cmd = App::new("")
        .version(bti_string.as_str())
        .author(crate_authors!())
        .arg(Arg::with_name("log-level").long("log-level").default_value("info"))
        .get_matches();
    let level = cmd.value_of("log-level").unwrap().parse().unwrap();

    setup_logging(None, level)?;
    print!("{}", build_info);

    Ok(())
}
```
## License
This code is licensed under [Apache-2.0](LICENSE).
@ -3,6 +3,43 @@
//
// SPDX-License-Identifier: Apache-2.0
//! Application framework and utilities for Nydus.
//!
//! The `nydus-app` crates provides common helpers and utilities to support Nydus application:
//! - Application Building Information: [`struct BuildTimeInfo`](struct.BuildTimeInfo.html) and
//! [`fn dump_program_info()`](fn.dump_program_info.html).
//! - Logging helpers: [`fn setup_logging()`](fn.setup_logging.html) and
//! [`fn log_level_to_verbosity()`](fn.log_level_to_verbosity.html).
//! - Signal handling: [`fn register_signal_handler()`](signal/fn.register_signal_handler.html).
//!
//! ```rust,ignore
//! #[macro_use(crate_authors, crate_version)]
//! extern crate clap;
//!
//! use clap::App;
//! use nydus_app::{BuildTimeInfo, setup_logging};
//! # use std::io::Result;
//!
//! fn main() -> Result<()> {
//! let level = cmd.value_of("log-level").unwrap().parse().unwrap();
//! let (bti_string, build_info) = BuildTimeInfo::dump();
//! let _cmd = App::new("")
//! .version(bti_string.as_str())
//! .author(crate_authors!())
//! .get_matches();
//!
//! setup_logging(None, level, 0)?;
//! print!("{}", build_info);
//!
//! Ok(())
//! }
//! ```
#[macro_use]
extern crate log;
#[macro_use]
extern crate nydus_error;
use std::env::current_dir;
use std::io::Result;
use std::path::PathBuf;
@ -13,6 +50,8 @@ use flexi_logger::{
};
use log::{Level, LevelFilter, Record};
pub mod signal;
pub fn log_level_to_verbosity(level: log::LevelFilter) -> usize {
    if level == log::LevelFilter::Off {
        0
@ -21,6 +60,26 @@ pub fn log_level_to_verbosity(level: log::LevelFilter) -> usize {
    }
}
pub mod built_info {
pub const PROFILE: &str = env!("PROFILE");
pub const RUSTC_VERSION: &str = env!("RUSTC_VERSION");
pub const BUILT_TIME_UTC: &str = env!("BUILT_TIME_UTC");
pub const GIT_COMMIT_VERSION: &str = env!("GIT_COMMIT_VERSION");
pub const GIT_COMMIT_HASH: &str = env!("GIT_COMMIT_HASH");
}
/// Dump program build and version information.
pub fn dump_program_info() {
info!(
"Program Version: {}, Git Commit: {:?}, Build Time: {:?}, Profile: {:?}, Rustc Version: {:?}",
built_info::GIT_COMMIT_VERSION,
built_info::GIT_COMMIT_HASH,
built_info::BUILT_TIME_UTC,
built_info::PROFILE,
built_info::RUSTC_VERSION,
);
}
fn get_file_name<'a>(record: &'a Record) -> Option<&'a str> {
    record.file().map(|v| match v.rfind("/src/") {
        None => v,
@ -70,7 +129,7 @@ fn colored_opt_format(
"[{}] {} {}", "[{}] {} {}",
style(level).paint(now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK).to_string()), style(level).paint(now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK).to_string()),
style(level).paint(level.to_string()), style(level).paint(level.to_string()),
style(level).paint(record.args().to_string()) style(level).paint(&record.args().to_string())
) )
} else { } else {
write!( write!(
@ -80,7 +139,7 @@ fn colored_opt_format(
            style(level).paint(level.to_string()),
            get_file_name(record).unwrap_or("<unnamed>"),
            record.line().unwrap_or(0),
            style(level).paint(record.args().to_string())
        )
    }
}
@ -117,7 +176,7 @@ pub fn setup_logging(
    })?;
    spec = spec.basename(basename);

    // `flexi_logger` automatically adds a `.log` suffix if the file name has no extension.
    if let Some(suffix) = path.extension() {
        let suffix = suffix.to_str().ok_or_else(|| {
            eprintln!("invalid file extension {:?}", suffix);
blobfs/Cargo.toml Normal file
@ -0,0 +1,34 @@
[package]
name = "nydus-blobfs"
version = "0.2.0"
description = "Blob object file system for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
[dependencies]
fuse-backend-rs = "0.10"
libc = "0.2"
log = "0.4.8"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
vm-memory = { version = "0.9" }
nydus-error = { version = "0.2", path = "../error" }
nydus-api = { version = "0.2", path = "../api" }
nydus-rafs = { version = "0.2", path = "../rafs" }
nydus-storage = { version = "0.6", path = "../storage", features = [
"backend-localfs",
] }
[dev-dependencies]
nydus-app = { version = "0.3", path = "../app" }
[features]
virtiofs = ["fuse-backend-rs/virtiofs", "nydus-rafs/virtio-fs"]
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "x86_64-apple-darwin"]

510
blobfs/src/lib.rs Normal file
View File

@ -0,0 +1,510 @@
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Fuse blob passthrough file system, mirroring an existing FS hierarchy.
//!
//! This file system mirrors the existing file system hierarchy of the system, starting at the
//! root file system. This is implemented by just "passing through" all requests to the
//! corresponding underlying file system.
//!
//! The code is derived from the
//! [CrosVM](https://chromium.googlesource.com/chromiumos/platform/crosvm/) project,
//! with heavy modification/enhancements from Alibaba Cloud OS team.
#[macro_use]
extern crate log;
use fuse_backend_rs::{
api::{filesystem::*, BackendFileSystem, VFS_MAX_INO},
passthrough::Config as PassthroughConfig,
passthrough::PassthroughFs,
};
use nydus_api::ConfigV2;
use nydus_error::einval;
use nydus_rafs::fs::Rafs;
use serde::Deserialize;
use std::any::Any;
#[cfg(feature = "virtiofs")]
use std::ffi::CStr;
use std::ffi::CString;
use std::fs::create_dir_all;
#[cfg(feature = "virtiofs")]
use std::fs::File;
use std::io;
#[cfg(feature = "virtiofs")]
use std::mem::MaybeUninit;
#[cfg(feature = "virtiofs")]
use std::os::unix::ffi::OsStrExt;
#[cfg(feature = "virtiofs")]
use std::os::unix::io::{AsRawFd, FromRawFd};
use std::path::Path;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::thread;
#[cfg(feature = "virtiofs")]
use nydus_storage::device::BlobPrefetchRequest;
use vm_memory::ByteValued;
mod sync_io;
#[cfg(feature = "virtiofs")]
const EMPTY_CSTR: &[u8] = b"\0";
type Inode = u64;
type Handle = u64;
#[repr(C, packed)]
#[derive(Clone, Copy, Debug, Default)]
struct LinuxDirent64 {
d_ino: libc::ino64_t,
d_off: libc::off64_t,
d_reclen: libc::c_ushort,
d_ty: libc::c_uchar,
}
unsafe impl ByteValued for LinuxDirent64 {}
/// Options to configure on-demand blob loading for blobfs.
#[derive(Clone, Default, Deserialize)]
pub struct BlobOndemandConfig {
/// The rafs config used to set up rafs device for the purpose of
/// `on demand read`.
pub rafs_conf: ConfigV2,
/// The path of the bootstrap of a container image (for rafs in
/// kernel).
///
/// The default is ``.
#[serde(default)]
pub bootstrap_path: String,
/// The path of blob cache directory.
#[serde(default)]
pub blob_cache_dir: String,
}
impl FromStr for BlobOndemandConfig {
type Err = io::Error;
fn from_str(s: &str) -> io::Result<BlobOndemandConfig> {
serde_json::from_str(s).map_err(|e| einval!(e))
}
}
/// Options that configure the behavior of the blobfs fuse file system.
#[derive(Default, Debug, Clone, PartialEq)]
pub struct Config {
/// Blobfs config is embedded with passthrough config
pub ps_config: PassthroughConfig,
/// This provides on demand config of blob management.
pub blob_ondemand_cfg: String,
}
#[allow(dead_code)]
struct RafsHandle {
rafs: Arc<Mutex<Option<Rafs>>>,
handle: Arc<Mutex<Option<thread::JoinHandle<Option<Rafs>>>>>,
}
#[allow(dead_code)]
struct BootstrapArgs {
rafs_handle: RafsHandle,
blob_cache_dir: String,
}
// Safe to Send/Sync because the underlying data structures are readonly
unsafe impl Sync for BootstrapArgs {}
unsafe impl Send for BootstrapArgs {}
#[cfg(feature = "virtiofs")]
impl BootstrapArgs {
fn get_rafs_handle(&self) -> io::Result<()> {
let mut c = self.rafs_handle.rafs.lock().unwrap();
match (*self.rafs_handle.handle.lock().unwrap()).take() {
Some(handle) => {
let rafs = handle.join().unwrap().ok_or_else(|| {
error!("blobfs: get rafs failed.");
einval!("create rafs failed in thread.")
})?;
debug!("blobfs: async create Rafs finish!");
*c = Some(rafs);
Ok(())
}
None => Err(einval!("create rafs failed in thread.")),
}
}
fn fetch_range_sync(&self, prefetches: &[BlobPrefetchRequest]) -> io::Result<()> {
let c = self.rafs_handle.rafs.lock().unwrap();
match &*c {
Some(rafs) => rafs.fetch_range_synchronous(prefetches),
None => Err(einval!("create rafs failed in thread.")),
}
}
}
/// A file system that simply "passes through" all requests it receives to the underlying file
/// system.
///
/// To keep the implementation simple it serves the contents of its root directory. Users
/// that wish to serve only a specific directory should set up the environment so that that
/// directory ends up as the root of the file system process. One way to accomplish this is via a
/// combination of mount namespaces and the pivot_root system call.
pub struct BlobFs {
pfs: PassthroughFs,
#[allow(dead_code)]
bootstrap_args: BootstrapArgs,
}
impl BlobFs {
fn ensure_path_exist(path: &Path) -> io::Result<()> {
if path.as_os_str().is_empty() {
return Err(einval!("path is empty"));
}
if !path.exists() {
create_dir_all(path).map_err(|e| {
error!(
"create dir error. directory is {:?}. {}:{}",
path,
file!(),
line!()
);
e
})?;
}
Ok(())
}
/// Create a Blob file system instance.
pub fn new(cfg: Config) -> io::Result<BlobFs> {
trace!("BlobFs config is: {:?}", cfg);
let bootstrap_args = Self::load_bootstrap(&cfg)?;
let pfs = PassthroughFs::new(cfg.ps_config)?;
Ok(BlobFs {
pfs,
bootstrap_args,
})
}
fn load_bootstrap(cfg: &Config) -> io::Result<BootstrapArgs> {
let blob_ondemand_conf = BlobOndemandConfig::from_str(&cfg.blob_ondemand_cfg)?;
if !blob_ondemand_conf.rafs_conf.validate() {
return Err(einval!("invlidate configuration for blobfs"));
}
let rafs_cfg = blob_ondemand_conf.rafs_conf.get_rafs_config()?;
if rafs_cfg.mode != "direct" {
return Err(einval!("blobfs only supports RAFS 'direct' mode"));
}
// check if blob cache dir exists.
let path = Path::new(blob_ondemand_conf.blob_cache_dir.as_str());
Self::ensure_path_exist(path).map_err(|e| {
error!("blob_cache_dir not exist");
e
})?;
let path = Path::new(blob_ondemand_conf.bootstrap_path.as_str());
if !path.exists() || blob_ondemand_conf.bootstrap_path.is_empty() {
return Err(einval!("no valid bootstrap"));
}
let bootstrap_path = blob_ondemand_conf.bootstrap_path.clone();
let config = Arc::new(blob_ondemand_conf.rafs_conf.clone());
trace!("blobfs: async create Rafs start!");
let rafs_join_handle = std::thread::spawn(move || {
let (mut rafs, reader) = match Rafs::new(&config, "blobfs", Path::new(&bootstrap_path))
{
Ok(rafs) => rafs,
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
};
match rafs.import(reader, None) {
Ok(_) => {}
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
}
Some(rafs)
});
let rafs_handle = RafsHandle {
rafs: Arc::new(Mutex::new(None)),
handle: Arc::new(Mutex::new(Some(rafs_join_handle))),
};
Ok(BootstrapArgs {
rafs_handle,
blob_cache_dir: blob_ondemand_conf.blob_cache_dir,
})
}
#[cfg(feature = "virtiofs")]
fn stat(f: &File) -> io::Result<libc::stat64> {
// Safe because this is a constant value and a valid C string.
let pathname = unsafe { CStr::from_bytes_with_nul_unchecked(EMPTY_CSTR) };
let mut st = MaybeUninit::<libc::stat64>::zeroed();
// Safe because the kernel will only write data in `st` and we check the return value.
let res = unsafe {
libc::fstatat64(
f.as_raw_fd(),
pathname.as_ptr(),
st.as_mut_ptr(),
libc::AT_EMPTY_PATH | libc::AT_SYMLINK_NOFOLLOW,
)
};
if res >= 0 {
// Safe because the kernel guarantees that the struct is now fully initialized.
Ok(unsafe { st.assume_init() })
} else {
Err(io::Error::last_os_error())
}
}
/// Initialize the PassthroughFs
pub fn import(&self) -> io::Result<()> {
self.pfs.import()
}
#[cfg(feature = "virtiofs")]
fn open_file(dfd: i32, pathname: &Path, flags: i32, mode: u32) -> io::Result<File> {
let pathname = CString::new(pathname.as_os_str().as_bytes())
.map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
let fd = if flags & libc::O_CREAT == libc::O_CREAT {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags, mode) }
} else {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags) }
};
if fd < 0 {
return Err(io::Error::last_os_error());
}
// Safe because we just opened this fd.
Ok(unsafe { File::from_raw_fd(fd) })
}
}
impl BackendFileSystem for BlobFs {
fn mount(&self) -> io::Result<(Entry, u64)> {
let ctx = &Context::default();
let entry = self.lookup(ctx, ROOT_ID, &CString::new(".").unwrap())?;
Ok((entry, VFS_MAX_INO))
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test2)]
mod tests {
use super::*;
use fuse_backend_rs::abi::virtio_fs;
use fuse_backend_rs::transport::FsCacheReqHandler;
use nydus_app::setup_logging;
use std::os::unix::prelude::RawFd;
struct DummyCacheReq {}
impl FsCacheReqHandler for DummyCacheReq {
fn map(
&mut self,
_foffset: u64,
_moffset: u64,
_len: u64,
_flags: u64,
_fd: RawFd,
) -> io::Result<()> {
Ok(())
}
fn unmap(&mut self, _requests: Vec<virtio_fs::RemovemappingOne>) -> io::Result<()> {
Ok(())
}
}
// #[test]
// #[cfg(feature = "virtiofs")]
// fn test_blobfs_new() {
// setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
// let config = r#"
// {
// "device": {
// "backend": {
// "type": "localfs",
// "config": {
// "dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/test4k"
// }
// },
// "cache": {
// "type": "blobcache",
// "compressed": false,
// "config": {
// "work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
// }
// }
// },
// "mode": "direct",
// "digest_validate": true,
// "enable_xattr": false,
// "fs_prefetch": {
// "enable": false,
// "threads_count": 10,
// "merging_size": 131072,
// "bandwidth_rate": 10485760
// }
// }"#;
// // let rafs_conf = RafsConfig::from_str(config).unwrap();
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// // blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache1".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// // bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-foo".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_ok());
// }
#[test]
fn test_blobfs_setupmapping() {
setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
let config = r#"
{
"rafs_conf": {
"device": {
"backend": {
"type": "localfs",
"config": {
"blob_file": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/nydus-rs/myblob1/v6/blob-btrfs"
}
},
"cache": {
"type": "blobcache",
"compressed": false,
"config": {
"work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}
}
},
"mode": "direct",
"digest_validate": false,
"enable_xattr": false,
"fs_prefetch": {
"enable": false,
"threads_count": 10,
"merging_size": 131072,
"bandwidth_rate": 10485760
}
},
"bootstrap_path": "nydus-rs/myblob1/v6/bootstrap-btrfs",
"blob_cache_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}"#;
// let rafs_conf = RafsConfig::from_str(config).unwrap();
let ps_config = PassthroughConfig {
root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
.to_string(),
do_import: false,
no_open: true,
..Default::default()
};
let fs_cfg = Config {
ps_config,
blob_ondemand_cfg: config.to_string(),
};
let fs = BlobFs::new(fs_cfg).unwrap();
fs.import().unwrap();
fs.mount().unwrap();
let ctx = &Context::default();
// read bootstrap first, should return err as it's not in blobcache dir.
// let bootstrap = CString::new("foo").unwrap();
// let entry = fs.lookup(ctx, ROOT_ID, &bootstrap).unwrap();
// let mut req = DummyCacheReq {};
// fs.setupmapping(ctx, entry.inode, 0, 0, 4096, 0, 0, &mut req)
// .unwrap();
// FIXME: use a real blob id under test4k.
let blob_cache_dir = CString::new("blobcache").unwrap();
let parent_entry = fs.lookup(ctx, ROOT_ID, &blob_cache_dir).unwrap();
let blob_id = CString::new("80da976ee69d68af6bb9170395f71b4ef1e235e815e2").unwrap();
let entry = fs.lookup(ctx, parent_entry.inode, &blob_id).unwrap();
let foffset = 0;
let len = 1 << 21;
let mut req = DummyCacheReq {};
fs.setupmapping(ctx, entry.inode, 0, foffset, len, 0, 0, &mut req)
.unwrap();
// FIXME: release fs
fs.destroy();
}
}
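For orientation, a minimal usage sketch of the crate above (assuming it is imported as `nydus_blobfs`; the paths and config file are placeholders, not values from this change):
```
use nydus_blobfs::{BlobFs, Config};
use fuse_backend_rs::passthrough::Config as PassthroughConfig;

fn main() -> std::io::Result<()> {
    // Passthrough part: the host directory that blobfs mirrors.
    let ps_config = PassthroughConfig {
        root_dir: "/path/to/share_dir".to_string(), // placeholder path
        do_import: false,
        no_open: true,
        ..Default::default()
    };
    // On-demand part: rafs config, bootstrap path and blob cache dir, as JSON.
    let blob_ondemand_cfg = std::fs::read_to_string("/path/to/blob_ondemand.json")?;
    let fs = BlobFs::new(Config { ps_config, blob_ondemand_cfg })?;
    fs.import()?; // initialize the underlying PassthroughFs
    Ok(())
}
```
The mount/serving side depends on the FUSE or virtiofs transport and is out of scope for this sketch.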

View File

@ -3,58 +3,98 @@
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE-BSD-3-Clause file.
use std::ffi::CStr; //! Fuse passthrough file system, mirroring an existing FS hierarchy.
use std::io;
use std::time::Duration;
use fuse_backend_rs::abi::fuse_abi::{CreateIn, FsOptions, OpenOptions, SetattrValid};
use fuse_backend_rs::abi::virtio_fs;
use fuse_backend_rs::api::filesystem::{
Context, DirEntry, Entry, FileSystem, GetxattrReply, ListxattrReply, ZeroCopyReader,
ZeroCopyWriter,
};
use fuse_backend_rs::transport::FsCacheReqHandler;
use nydus_api::eacces;
use nydus_utils::{round_down, round_up};
use super::*;
use crate::fs::Handle; use fuse_backend_rs::abi::fuse_abi::CreateIn;
use crate::metadata::Inode; #[cfg(feature = "virtiofs")]
use fuse_backend_rs::abi::virtio_fs;
const MAPPING_UNIT_SIZE: u64 = 0x200000; #[cfg(feature = "virtiofs")]
use fuse_backend_rs::transport::FsCacheReqHandler;
impl BlobfsState { use nydus_error::eacces;
fn fetch_range_sync(&self, prefetches: &[BlobPrefetchRequest]) -> io::Result<()> { #[cfg(feature = "virtiofs")]
let rafs_handle = self.rafs_handle.read().unwrap(); use nydus_storage::device::BlobPrefetchRequest;
match rafs_handle.rafs.as_ref() { #[cfg(feature = "virtiofs")]
Some(rafs) => rafs.fetch_range_synchronous(prefetches), use std::cmp::min;
None => Err(einval!("blobfs: failed to initialize RAFS filesystem.")), use std::ffi::CStr;
} use std::io;
} #[cfg(feature = "virtiofs")]
} use std::path::Path;
use std::time::Duration;
impl BlobFs { impl BlobFs {
// prepare BlobPrefetchRequest and call device.prefetch(). #[cfg(feature = "virtiofs")]
// Make sure prefetch doesn't use delay_persist as we need the data immediately. fn check_st_size(blob_id: &Path, size: i64) -> io::Result<()> {
fn load_chunks_on_demand(&self, inode: Inode, offset: u64, len: u64) -> io::Result<()> { if size < 0 {
let (blob_id, size) = self.get_blob_id_and_size(inode)?;
if size <= offset || offset.checked_add(len).is_none() {
return Err(einval!(format!( return Err(einval!(format!(
"blobfs: blob_id {:?}, offset {:?} is larger than size {:?}", "load_chunks_on_demand: blob_id {:?}, size: {:?} is less than 0",
blob_id, size
)));
}
Ok(())
}
#[cfg(feature = "virtiofs")]
fn get_blob_id_and_size(&self, inode: Inode) -> io::Result<(String, u64)> {
// locate blob file that the inode refers to
let blob_id_full_path = self.pfs.readlinkat_proc_file(inode)?;
let parent = blob_id_full_path
.parent()
.ok_or_else(|| einval!("blobfs: failed to find parent"))?;
trace!(
"parent: {:?}, blob id path: {:?}",
parent,
blob_id_full_path
);
let blob_file = Self::open_file(
libc::AT_FDCWD,
blob_id_full_path.as_path(),
libc::O_PATH | libc::O_NOFOLLOW | libc::O_CLOEXEC,
0,
)
.map_err(|e| einval!(e))?;
let st = Self::stat(&blob_file).map_err(|e| {
error!("get_blob_id_and_size: stat failed {:?}", e);
e
})?;
let blob_id = blob_id_full_path
.file_name()
.ok_or_else(|| einval!("blobfs: failed to find blob file"))?;
trace!("load_chunks_on_demand: blob_id {:?}", blob_id);
Self::check_st_size(blob_id_full_path.as_path(), st.st_size)?;
Ok((
blob_id.to_os_string().into_string().unwrap(),
st.st_size as u64,
))
}
#[cfg(feature = "virtiofs")]
fn load_chunks_on_demand(&self, inode: Inode, offset: u64) -> io::Result<()> {
// prepare BlobPrefetchRequest and call device.prefetch().
// Make sure prefetch doesn't use delay_persist as we need the
// data immediately.
let (blob_id, size) = self.get_blob_id_and_size(inode)?;
if size <= offset {
return Err(einval!(format!(
"load_chunks_on_demand: blob_id {:?}, offset {:?} is larger than size {:?}",
blob_id, offset, size
)));
}
let end = std::cmp::min(offset + len, size); let len = size - offset;
let len = end - offset;
let req = BlobPrefetchRequest {
blob_id,
offset,
len, len: min(len, 0x0020_0000_u64), // 2M range
}; };
self.state.fetch_range_sync(&[req]).map_err(|e| { self.bootstrap_args.fetch_range_sync(&[req]).map_err(|e| {
warn!("blobfs: failed to load data, {:?}", e); warn!("load chunks: error, {:?}", e);
e
})
}
@ -65,7 +105,8 @@ impl FileSystem for BlobFs {
type Handle = Handle;
fn init(&self, capable: FsOptions) -> io::Result<FsOptions> {
self.state.get_rafs_handle()?; #[cfg(feature = "virtiofs")]
self.bootstrap_args.get_rafs_handle()?;
self.pfs.init(capable)
}
@ -73,6 +114,10 @@ impl FileSystem for BlobFs {
self.pfs.destroy()
}
fn statfs(&self, _ctx: &Context, inode: Inode) -> io::Result<libc::statvfs64> {
self.pfs.statfs(_ctx, inode)
}
fn lookup(&self, _ctx: &Context, parent: Inode, name: &CStr) -> io::Result<Entry> {
self.pfs.lookup(_ctx, parent, name)
}
@ -85,52 +130,26 @@ impl FileSystem for BlobFs {
self.pfs.batch_forget(_ctx, requests)
}
fn getattr( fn opendir(
&self, &self,
_ctx: &Context, _ctx: &Context,
inode: Inode, inode: Inode,
_handle: Option<Handle>, flags: u32,
) -> io::Result<(libc::stat64, Duration)> { ) -> io::Result<(Option<Handle>, OpenOptions)> {
self.pfs.getattr(_ctx, inode, _handle) self.pfs.opendir(_ctx, inode, flags)
} }
fn setattr( fn releasedir(
&self, &self,
_ctx: &Context, _ctx: &Context,
_inode: Inode, inode: Inode,
_attr: libc::stat64, _flags: u32,
_handle: Option<Handle>, handle: Handle,
_valid: SetattrValid, ) -> io::Result<()> {
) -> io::Result<(libc::stat64, Duration)> { self.pfs.releasedir(_ctx, inode, _flags, handle)
Err(eacces!("Setattr request is not allowed in blobfs"))
}
fn readlink(&self, _ctx: &Context, inode: Inode) -> io::Result<Vec<u8>> {
self.pfs.readlink(_ctx, inode)
}
fn symlink(
&self,
_ctx: &Context,
_linkname: &CStr,
_parent: Inode,
_name: &CStr,
) -> io::Result<Entry> {
Err(eacces!("Symlink request is not allowed in blobfs"))
}
fn mknod(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_mode: u32,
_rdev: u32,
_umask: u32,
) -> io::Result<Entry> {
Err(eacces!("Mknod request is not allowed in blobfs"))
} }
#[allow(unused)]
fn mkdir(
&self,
_ctx: &Context,
@ -139,186 +158,16 @@ impl FileSystem for BlobFs {
_mode: u32,
_umask: u32,
) -> io::Result<Entry> {
error!("do mkdir req error: blob file can not be written.");
Err(eacces!("Mkdir request is not allowed in blobfs")) Err(eacces!("Mkdir request is not allowed in blobfs"))
} }
fn unlink(&self, _ctx: &Context, _parent: Inode, _name: &CStr) -> io::Result<()> { #[allow(unused)]
Err(eacces!("Unlink request is not allowed in blobfs"))
}
fn rmdir(&self, _ctx: &Context, _parent: Inode, _name: &CStr) -> io::Result<()> {
error!("do rmdir req error: blob file can not be written.");
Err(eacces!("Rmdir request is not allowed in blobfs"))
}
fn rename(
&self,
_ctx: &Context,
_olddir: Inode,
_oldname: &CStr,
_newdir: Inode,
_newname: &CStr,
_flags: u32,
) -> io::Result<()> {
Err(eacces!("Rename request is not allowed in blobfs"))
}
fn link(
&self,
_ctx: &Context,
_inode: Inode,
_newparent: Inode,
_newname: &CStr,
) -> io::Result<Entry> {
Err(eacces!("Link request is not allowed in blobfs"))
}
fn open(
&self,
_ctx: &Context,
inode: Inode,
flags: u32,
_fuse_flags: u32,
) -> io::Result<(Option<Handle>, OpenOptions, Option<u32>)> {
self.pfs.open(_ctx, inode, flags, _fuse_flags)
}
fn create(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_args: CreateIn,
) -> io::Result<(Entry, Option<Handle>, OpenOptions, Option<u32>)> {
Err(eacces!("Create request is not allowed in blobfs"))
}
fn read(
&self,
ctx: &Context,
inode: Inode,
handle: Handle,
w: &mut dyn ZeroCopyWriter,
size: u32,
offset: u64,
lock_owner: Option<u64>,
flags: u32,
) -> io::Result<usize> {
self.load_chunks_on_demand(inode, offset, size as u64)?;
self.pfs
.read(ctx, inode, handle, w, size, offset, lock_owner, flags)
}
fn write(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_r: &mut dyn ZeroCopyReader,
_size: u32,
_offset: u64,
_lock_owner: Option<u64>,
_delayed_write: bool,
_flags: u32,
_fuse_flags: u32,
) -> io::Result<usize> {
Err(eacces!("Write request is not allowed in blobfs"))
}
fn flush(
&self,
_ctx: &Context,
inode: Inode,
handle: Handle,
_lock_owner: u64,
) -> io::Result<()> {
self.pfs.flush(_ctx, inode, handle, _lock_owner)
}
fn fsync(
&self,
_ctx: &Context,
inode: Inode,
datasync: bool,
handle: Handle,
) -> io::Result<()> {
self.pfs.fsync(_ctx, inode, datasync, handle)
}
fn fallocate(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_mode: u32,
_offset: u64,
_length: u64,
) -> io::Result<()> {
Err(eacces!("Fallocate request is not allowed in blobfs"))
}
fn release(
&self,
_ctx: &Context,
inode: Inode,
_flags: u32,
handle: Handle,
_flush: bool,
_flock_release: bool,
_lock_owner: Option<u64>,
) -> io::Result<()> {
self.pfs.release(
_ctx,
inode,
_flags,
handle,
_flush,
_flock_release,
_lock_owner,
)
}
fn statfs(&self, _ctx: &Context, inode: Inode) -> io::Result<libc::statvfs64> {
self.pfs.statfs(_ctx, inode)
}
fn setxattr(
&self,
_ctx: &Context,
_inode: Inode,
_name: &CStr,
_value: &[u8],
_flags: u32,
) -> io::Result<()> {
Err(eacces!("Setxattr request is not allowed in blobfs"))
}
fn getxattr(
&self,
_ctx: &Context,
inode: Inode,
name: &CStr,
size: u32,
) -> io::Result<GetxattrReply> {
self.pfs.getxattr(_ctx, inode, name, size)
}
fn listxattr(&self, _ctx: &Context, inode: Inode, size: u32) -> io::Result<ListxattrReply> {
self.pfs.listxattr(_ctx, inode, size)
}
fn removexattr(&self, _ctx: &Context, _inode: Inode, _name: &CStr) -> io::Result<()> {
Err(eacces!("Removexattr request is not allowed in blobfs"))
}
fn opendir(
&self,
_ctx: &Context,
inode: Inode,
flags: u32,
) -> io::Result<(Option<Handle>, OpenOptions)> {
self.pfs.opendir(_ctx, inode, flags)
}
fn readdir(
&self,
_ctx: &Context,
@ -345,26 +194,56 @@ impl FileSystem for BlobFs {
.readdirplus(_ctx, inode, handle, size, offset, add_entry)
}
fn fsyncdir( fn open(
&self, &self,
ctx: &Context, _ctx: &Context,
inode: Inode, inode: Inode,
datasync: bool, flags: u32,
handle: Handle, _fuse_flags: u32,
) -> io::Result<()> { ) -> io::Result<(Option<Handle>, OpenOptions)> {
self.pfs.fsyncdir(ctx, inode, datasync, handle) self.pfs.open(_ctx, inode, flags, _fuse_flags)
} }
fn releasedir( fn release(
&self, &self,
_ctx: &Context, _ctx: &Context,
inode: Inode, inode: Inode,
_flags: u32, _flags: u32,
handle: Handle, handle: Handle,
_flush: bool,
_flock_release: bool,
_lock_owner: Option<u64>,
) -> io::Result<()> { ) -> io::Result<()> {
self.pfs.releasedir(_ctx, inode, _flags, handle) self.pfs.release(
_ctx,
inode,
_flags,
handle,
_flush,
_flock_release,
_lock_owner,
)
} }
#[allow(unused)]
fn create(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_args: CreateIn,
) -> io::Result<(Entry, Option<Handle>, OpenOptions)> {
error!("do create req error: blob file cannot write.");
Err(eacces!("Create request is not allowed in blobfs"))
}
#[allow(unused)]
fn unlink(&self, _ctx: &Context, _parent: Inode, _name: &CStr) -> io::Result<()> {
error!("do unlink req error: blob file cannot write.");
Err(eacces!("Unlink request is not allowed in blobfs"))
}
#[cfg(feature = "virtiofs")]
fn setupmapping(
&self,
_ctx: &Context,
@ -376,25 +255,20 @@ impl FileSystem for BlobFs {
moffset: u64,
vu_req: &mut dyn FsCacheReqHandler,
) -> io::Result<()> {
debug!(
"blobfs: setupmapping ino {:?} foffset {} len {} flags {} moffset {}",
inode, foffset, len, flags, moffset
);
if (flags & virtio_fs::SetupmappingFlags::WRITE.bits()) != 0 {
return Err(eacces!("blob file cannot write in dax"));
}
if foffset.checked_add(len).is_none() || foffset + len > u64::MAX - MAPPING_UNIT_SIZE { self.load_chunks_on_demand(inode, foffset)?;
return Err(einval!(format!(
"blobfs: invalid offset 0x{:x} and len 0x{:x}",
foffset, len
)));
}
let end = round_up(foffset + len, MAPPING_UNIT_SIZE);
let offset = round_down(foffset, MAPPING_UNIT_SIZE);
let len = end - offset;
self.load_chunks_on_demand(inode, offset, len)?;
self.pfs
.setupmapping(_ctx, inode, _handle, foffset, len, flags, moffset, vu_req)
}
#[cfg(feature = "virtiofs")]
fn removemapping(
&self,
_ctx: &Context,
@ -405,10 +279,201 @@ impl FileSystem for BlobFs {
self.pfs.removemapping(_ctx, _inode, requests, vu_req)
}
fn read(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_w: &mut dyn ZeroCopyWriter,
_size: u32,
_offset: u64,
_lock_owner: Option<u64>,
_flags: u32,
) -> io::Result<usize> {
error!(
"do Read req error: blob file cannot do nondax read, please check if dax is enabled"
);
Err(eacces!("Read request is not allowed in blobfs"))
}
#[allow(unused)]
fn write(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_r: &mut dyn ZeroCopyReader,
_size: u32,
_offset: u64,
_lock_owner: Option<u64>,
_delayed_write: bool,
_flags: u32,
_fuse_flags: u32,
) -> io::Result<usize> {
error!("do Write req error: blob file cannot write.");
Err(eacces!("Write request is not allowed in blobfs"))
}
fn getattr(
&self,
_ctx: &Context,
inode: Inode,
_handle: Option<Handle>,
) -> io::Result<(libc::stat64, Duration)> {
self.pfs.getattr(_ctx, inode, _handle)
}
#[allow(unused)]
fn setattr(
&self,
_ctx: &Context,
_inode: Inode,
_attr: libc::stat64,
_handle: Option<Handle>,
_valid: SetattrValid,
) -> io::Result<(libc::stat64, Duration)> {
error!("do setattr req error: blob file cannot write.");
Err(eacces!("Setattr request is not allowed in blobfs"))
}
#[allow(unused)]
fn rename(
&self,
_ctx: &Context,
_olddir: Inode,
_oldname: &CStr,
_newdir: Inode,
_newname: &CStr,
_flags: u32,
) -> io::Result<()> {
error!("do rename req error: blob file cannot write.");
Err(eacces!("Rename request is not allowed in blobfs"))
}
#[allow(unused)]
fn mknod(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_mode: u32,
_rdev: u32,
_umask: u32,
) -> io::Result<Entry> {
error!("do mknode req error: blob file cannot write.");
Err(eacces!("Mknod request is not allowed in blobfs"))
}
#[allow(unused)]
fn link(
&self,
_ctx: &Context,
_inode: Inode,
_newparent: Inode,
_newname: &CStr,
) -> io::Result<Entry> {
error!("do link req error: blob file cannot write.");
Err(eacces!("Link request is not allowed in blobfs"))
}
#[allow(unused)]
fn symlink(
&self,
_ctx: &Context,
_linkname: &CStr,
_parent: Inode,
_name: &CStr,
) -> io::Result<Entry> {
error!("do symlink req error: blob file cannot write.");
Err(eacces!("Symlink request is not allowed in blobfs"))
}
fn readlink(&self, _ctx: &Context, inode: Inode) -> io::Result<Vec<u8>> {
self.pfs.readlink(_ctx, inode)
}
fn flush(
&self,
_ctx: &Context,
inode: Inode,
handle: Handle,
_lock_owner: u64,
) -> io::Result<()> {
self.pfs.flush(_ctx, inode, handle, _lock_owner)
}
fn fsync(
&self,
_ctx: &Context,
inode: Inode,
datasync: bool,
handle: Handle,
) -> io::Result<()> {
self.pfs.fsync(_ctx, inode, datasync, handle)
}
fn fsyncdir(
&self,
ctx: &Context,
inode: Inode,
datasync: bool,
handle: Handle,
) -> io::Result<()> {
self.pfs.fsyncdir(ctx, inode, datasync, handle)
}
fn access(&self, ctx: &Context, inode: Inode, mask: u32) -> io::Result<()> {
self.pfs.access(ctx, inode, mask)
}
#[allow(unused)]
fn setxattr(
&self,
_ctx: &Context,
_inode: Inode,
_name: &CStr,
_value: &[u8],
_flags: u32,
) -> io::Result<()> {
error!("do setxattr req error: blob file cannot write.");
Err(eacces!("Setxattr request is not allowed in blobfs"))
}
fn getxattr(
&self,
_ctx: &Context,
inode: Inode,
name: &CStr,
size: u32,
) -> io::Result<GetxattrReply> {
self.pfs.getxattr(_ctx, inode, name, size)
}
fn listxattr(&self, _ctx: &Context, inode: Inode, size: u32) -> io::Result<ListxattrReply> {
self.pfs.listxattr(_ctx, inode, size)
}
#[allow(unused)]
fn removexattr(&self, _ctx: &Context, _inode: Inode, _name: &CStr) -> io::Result<()> {
error!("do removexattr req error: blob file cannot write.");
Err(eacces!("Removexattr request is not allowed in blobfs"))
}
#[allow(unused)]
fn fallocate(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_mode: u32,
_offset: u64,
_length: u64,
) -> io::Result<()> {
error!("do fallocate req error: blob file cannot write.");
Err(eacces!("Fallocate request is not allowed in blobfs"))
}
#[allow(unused)]
fn lseek( fn lseek(
&self, &self,
_ctx: &Context, _ctx: &Context,

View File

@ -1,18 +1,18 @@
[package]
name = "nydus-builder"
version = "0.2.0" version = "0.1.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus" repository = "https://github.com/dragonflyoss/image-service"
edition = "2021" edition = "2018"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
indexmap = "2" indexmap = "1"
libc = "0.2"
log = "0.4"
nix = "0.24"
@ -20,15 +20,13 @@ serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
vmm-sys-util = "0.12.1" vmm-sys-util = "0.10.0"
xattr = "1.0.1"
parse-size = "1.1.0"
nydus-api = { version = "0.4.0", path = "../api" } nydus-api = { version = "0.3", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" } nydus-rafs = { version = "0.3", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] } nydus-storage = { version = "0.6", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.5.0", path = "../utils" } nydus-utils = { version = "0.4", path = "../utils" }
gix-attributes = "0.25.0"
[package.metadata.docs.rs]
all-features = true

View File

@ -1,189 +0,0 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// process parent directory
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}
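As a rough consumer-side sketch (the import path and file location are assumptions, not part of this diff), the parser maps each pattern to its key/value attributes, parses `crcs` into integers, and implicitly marks parent directories of an external entry as external:
```
use nydus_builder::attributes::Attributes; // assumed import path

fn inspect() -> anyhow::Result<()> {
    // Hypothetical attributes file, e.g. containing "/models/foo/bar type=external".
    let attrs = Attributes::from("/tmp/.nydusattributes")?;
    assert!(attrs.is_external("/models/foo/bar"));
    // Parent directories are picked up implicitly.
    assert!(attrs.is_prefix_external("/models"));
    if let Some(crcs) = attrs.get_crcs("/models/foo/bar") {
        println!("crcs: {:?}", crcs);
    }
    Ok(())
}
```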

View File

@ -1,283 +0,0 @@
// Copyright (C) 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate Chunkdict RAFS bootstrap.
//! -------------------------------------------------------------------------------------------------
//! Bug 1: Inconsistent Chunk Size Leading to Blob Size Less Than 4K(v6_block_size)
//! Description: The size of chunks is not consistent, which results in the possibility that a blob,
//! composed of a group of these chunks, may be less than 4K(v6_block_size) in size.
//! This inconsistency leads to a failure in passing the size check.
//! -------------------------------------------------------------------------------------------------
//! Bug 2: Incorrect Chunk Number Calculation Due to Premature Check Logic
//! Description: The current logic for calculating the chunk number is based on the formula size/chunk size.
//! However, this approach is flawed as it precedes the actual check which accounts for chunk statistics.
//! Consequently, this leads to inaccurate counting of chunk numbers.
use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node;
use crate::NodeChunk;
use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest;
use std::mem::size_of;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ChunkdictChunkInfo {
pub image_reference: String,
pub version: String,
pub chunk_blob_id: String,
pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64,
pub chunk_uncompressed_offset: u64,
}
pub struct ChunkdictBlobInfo {
pub blob_id: String,
pub blob_compressed_size: u64,
pub blob_uncompressed_size: u64,
pub blob_compressor: String,
pub blob_meta_ci_compressed_size: u64,
pub blob_meta_ci_uncompressed_size: u64,
pub blob_meta_ci_offset: u64,
}
/// Struct to generate chunkdict RAFS bootstrap.
pub struct Generator {}
impl Generator {
// Generate chunkdict RAFS bootstrap.
pub fn generate(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
chunkdict_chunks_origin: Vec<ChunkdictChunkInfo>,
chunkdict_blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
// Validate and remove chunks whose owning blobs are smaller than a block.
let mut chunkdict_chunks = chunkdict_chunks_origin.to_vec();
Self::validate_and_remove_chunks(ctx, &mut chunkdict_chunks);
// Build root tree.
let mut tree = Self::build_root_tree(ctx)?;
// Build child tree.
let child = Self::build_child_tree(ctx, blob_mgr, &chunkdict_chunks, &chunkdict_blobs)?;
let result = vec![child];
tree.children = result;
Self::validate_tree(&tree)?;
// Build bootstrap.
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
/// Validate tree.
fn validate_tree(tree: &Tree) -> Result<()> {
let pre = &mut |t: &Tree| -> Result<()> {
let node = t.borrow_mut_node();
debug!("chunkdict tree: ");
debug!("inode: {}", node);
for chunk in &node.chunks {
debug!("\t chunk: {}", chunk);
}
Ok(())
};
tree.walk_dfs_pre(pre)?;
debug!("chunkdict tree is valid.");
Ok(())
}
/// Validates and removes chunks with a total uncompressed size smaller than the block size limit.
fn validate_and_remove_chunks(ctx: &mut BuildContext, chunkdict: &mut Vec<ChunkdictChunkInfo>) {
let mut chunk_sizes = std::collections::HashMap::new();
// Accumulate the uncompressed size for each chunk_blob_id.
for chunk in chunkdict.iter() {
*chunk_sizes.entry(chunk.chunk_blob_id.clone()).or_insert(0) +=
chunk.chunk_uncompressed_size as u64;
}
// Find all chunk_blob_ids with a total uncompressed size smaller than v6_block_size.
let small_chunks: Vec<String> = chunk_sizes
.into_iter()
.filter(|&(_, size)| size < ctx.v6_block_size())
.inspect(|(id, _)| {
eprintln!(
"Warning: Blob with id '{}' is smaller than {} bytes.",
id,
ctx.v6_block_size()
)
})
.map(|(id, _)| id)
.collect();
// Retain only chunks whose chunk_blob_id has a total uncompressed size of at least v6_block_size.
chunkdict.retain(|chunk| !small_chunks.contains(&chunk.chunk_blob_id));
}
/// Build the root tree.
pub fn build_root_tree(ctx: &mut BuildContext) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(1);
inode.set_uid(1000);
inode.set_gid(1000);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFDIR as u32);
inode.set_nlink(3);
inode.set_name_size("/".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 0,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/"),
target: PathBuf::from("/"),
target_vec: vec![OsString::from("/")],
symlink: None,
xattrs: RafsXAttrs::default(),
v6_force_extended_inode: true,
};
let root_node = Node::new(inode, node_info, 0);
let tree = Tree::new(root_node);
Ok(tree)
}
/// Build the child tree.
fn build_child_tree(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(2);
inode.set_uid(0);
inode.set_gid(0);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFREG as u32);
inode.set_nlink(1);
inode.set_name_size("chunkdict".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 1,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/chunkdict"),
target: PathBuf::from("/chunkdict"),
target_vec: vec![OsString::from("/"), OsString::from("/chunkdict")],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: true,
};
let mut node = Node::new(inode, node_info, 0);
// Insert chunks.
Self::insert_chunks(ctx, blob_mgr, &mut node, chunkdict_chunks, chunkdict_blobs)?;
let node_size: u64 = node
.chunks
.iter()
.map(|chunk| chunk.inner.uncompressed_size() as u64)
.sum();
node.inode.set_size(node_size);
// Update child count.
node.inode.set_child_count(node.chunks.len() as u32);
let child = Tree::new(node);
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
Ok(child)
}
/// Insert chunks.
fn insert_chunks(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
node: &mut Node,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<()> {
for (index, chunk_info) in chunkdict_chunks.iter().enumerate() {
let chunk_size: u32 = chunk_info.chunk_compressed_size;
let file_offset = index as u64 * chunk_size as u64;
let mut chunk = ChunkWrapper::new(ctx.fs_version);
// Update blob context.
let (blob_index, blob_ctx) =
blob_mgr.get_or_cerate_blob_for_chunkdict(ctx, &chunk_info.chunk_blob_id)?;
let chunk_uncompressed_size = chunk_info.chunk_uncompressed_size;
let pre_d_offset = blob_ctx.current_uncompressed_offset;
blob_ctx.uncompressed_blob_size = pre_d_offset + chunk_uncompressed_size as u64;
blob_ctx.current_uncompressed_offset += chunk_uncompressed_size as u64;
blob_ctx.blob_meta_header.set_ci_uncompressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
blob_ctx.blob_meta_header.set_ci_compressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
let chunkdict_blob_info = chunkdict_blobs
.iter()
.find(|blob| blob.blob_id == chunk_info.chunk_blob_id)
.unwrap();
blob_ctx.blob_compressor =
Algorithm::from_str(chunkdict_blob_info.blob_compressor.as_str())?;
blob_ctx
.blob_meta_header
.set_ci_uncompressed_size(chunkdict_blob_info.blob_meta_ci_uncompressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_size(chunkdict_blob_info.blob_meta_ci_compressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_offset(chunkdict_blob_info.blob_meta_ci_offset);
blob_ctx.blob_meta_header.set_ci_compressor(Algorithm::Zstd);
// Update chunk context.
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
chunk.set_compressed_size(chunk_info.chunk_compressed_size);
chunk.set_compressed_offset(chunk_info.chunk_compressed_offset);
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk.clone()),
});
}
Ok(())
}
}
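The validation step above reduces to grouping chunk sizes by blob id and dropping every chunk whose blob total stays under the block size; a standalone sketch of that idea, with a hypothetical `Chunk` stand-in for `ChunkdictChunkInfo`:
```
use std::collections::HashMap;

struct Chunk {
    blob_id: String,          // hypothetical stand-in for ChunkdictChunkInfo
    uncompressed_size: u64,
}

fn drop_small_blobs(chunks: &mut Vec<Chunk>, block_size: u64) {
    // Accumulate the uncompressed size per blob id.
    let mut totals: HashMap<String, u64> = HashMap::new();
    for c in chunks.iter() {
        *totals.entry(c.blob_id.clone()).or_insert(0) += c.uncompressed_size;
    }
    // Keep only chunks whose blob reaches the block size.
    chunks.retain(|c| totals[&c.blob_id] >= block_size);
}
```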

File diff suppressed because it is too large

View File

@ -1,214 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::{Context, Error, Result};
use nydus_utils::digest::{self, RafsDigest};
use std::ops::Deref;
use nydus_rafs::metadata::layout::{RafsBlobTable, RAFS_V5_ROOT_INODE};
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig, RafsSuperFlags};
use crate::{ArtifactStorage, BlobManager, BootstrapContext, BootstrapManager, BuildContext, Tree};
/// RAFS bootstrap/meta builder.
pub struct Bootstrap {
pub(crate) tree: Tree,
}
impl Bootstrap {
/// Create a new instance of [Bootstrap].
pub fn new(tree: Tree) -> Result<Self> {
Ok(Self { tree })
}
/// Build the final view of the RAFS filesystem meta from the hierarchy `tree`.
pub fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
) -> Result<()> {
// Special handling of the root inode
let mut root_node = self.tree.borrow_mut_node();
assert!(root_node.is_dir());
let index = bootstrap_ctx.generate_next_ino();
// 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE.
assert_eq!(index, RAFS_V5_ROOT_INODE);
root_node.index = index;
root_node.inode.set_ino(index);
ctx.prefetch.insert(&self.tree.node, root_node.deref());
bootstrap_ctx.inode_map.insert(
(
root_node.layer_idx,
root_node.info.src_ino,
root_node.info.src_dev,
),
vec![self.tree.node.clone()],
);
drop(root_node);
Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?;
if ctx.fs_version.is_v6() {
let root_offset = self.tree.node.borrow().v6_offset;
Self::v6_update_dirents(&self.tree, root_offset);
}
Ok(())
}
/// Dump the RAFS filesystem meta information to meta blob.
pub fn dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_storage: &mut Option<ArtifactStorage>,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsBlobTable,
) -> Result<()> {
match blob_table {
RafsBlobTable::V5(table) => self.v5_dump(ctx, bootstrap_ctx, table)?,
RafsBlobTable::V6(table) => self.v6_dump(ctx, bootstrap_ctx, table)?,
}
if let Some(ArtifactStorage::FileDir(p)) = bootstrap_storage {
let bootstrap_data = bootstrap_ctx.writer.as_bytes()?;
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?;
let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(())
} else {
bootstrap_ctx.writer.finalize(Some(String::default()))
}
}
/// Traverse node tree, set inode index, ino, child_index and child_count etc according to the
/// RAFS metadata format, then store to nodes collection.
fn build_rafs(
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
tree: &mut Tree,
) -> Result<()> {
let parent_node = tree.node.clone();
let mut parent_node = parent_node.borrow_mut();
let parent_ino = parent_node.inode.ino();
let block_size = ctx.v6_block_size();
// In case of multi-layer building, it's possible that the parent node is not a directory.
if parent_node.is_dir() {
parent_node
.inode
.set_child_count(tree.children.len() as u32);
if ctx.fs_version.is_v5() {
parent_node
.inode
.set_child_index(bootstrap_ctx.get_next_ino() as u32);
} else if ctx.fs_version.is_v6() {
// Layout directory entries for v6.
let d_size = parent_node.v6_dirent_size(ctx, tree)?;
parent_node.v6_set_dir_offset(bootstrap_ctx, d_size, block_size)?;
}
}
let mut dirs: Vec<&mut Tree> = Vec::new();
for child in tree.children.iter_mut() {
let child_node = child.node.clone();
let mut child_node = child_node.borrow_mut();
let index = bootstrap_ctx.generate_next_ino();
child_node.index = index;
if ctx.fs_version.is_v5() {
child_node.inode.set_parent(parent_ino);
}
// Handle hardlink.
// All hardlink nodes' ino and nlink should be the same.
// We need to find hardlink node index list in the layer where the node is located
// because the real_ino may be different among different layers,
let mut v6_hardlink_offset: Option<u64> = None;
let key = (
child_node.layer_idx,
child_node.info.src_ino,
child_node.info.src_dev,
);
if let Some(indexes) = bootstrap_ctx.inode_map.get_mut(&key) {
let nlink = indexes.len() as u32 + 1;
// Update nlink for previous hardlink inodes
for n in indexes.iter() {
n.borrow_mut().inode.set_nlink(nlink);
}
let (first_ino, first_offset) = {
let first_node = indexes[0].borrow_mut();
(first_node.inode.ino(), first_node.v6_offset)
};
// set offset for rafs v6 hardlinks
v6_hardlink_offset = Some(first_offset);
child_node.inode.set_nlink(nlink);
child_node.inode.set_ino(first_ino);
indexes.push(child.node.clone());
} else {
child_node.inode.set_ino(index);
child_node.inode.set_nlink(1);
// Store inode real ino
bootstrap_ctx
.inode_map
.insert(key, vec![child.node.clone()]);
}
// update bootstrap_ctx.offset for rafs v6 non-dir nodes.
if !child_node.is_dir() && ctx.fs_version.is_v6() {
child_node.v6_set_offset(bootstrap_ctx, v6_hardlink_offset, block_size)?;
}
ctx.prefetch.insert(&child.node, child_node.deref());
if child_node.is_dir() {
dirs.push(child);
}
}
// According to filesystem semantics, a parent directory should have nlink equal to
// the number of its child directories plus 2.
if parent_node.is_dir() {
parent_node.inode.set_nlink((2 + dirs.len()) as u32);
}
for dir in dirs {
Self::build_rafs(ctx, bootstrap_ctx, dir)?;
}
Ok(())
}
/// Load a parent RAFS bootstrap and return the `Tree` object representing the filesystem.
pub fn load_parent_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<Tree> {
let rs = if let Some(path) = bootstrap_mgr.f_parent_path.as_ref() {
RafsSuper::load_from_file(path, ctx.configuration.clone(), false).map(|(rs, _)| rs)?
} else {
return Err(Error::msg("bootstrap context's parent bootstrap is null"));
};
let config = RafsSuperConfig {
compressor: ctx.compressor,
digester: ctx.digester,
chunk_size: ctx.chunk_size,
batch_size: ctx.batch_size,
explicit_uidgid: ctx.explicit_uidgid,
version: ctx.fs_version,
is_tarfs_mode: rs.meta.flags.contains(RafsSuperFlags::TARTFS_MODE),
};
config.check_compatibility(&rs.meta)?;
// Reuse lower layer blob table,
// we need to append the blob entry of upper layer to the table
blob_mgr.extend_from_blob_table(ctx, rs.superblock.get_blob_infos())?;
// Build node tree of lower layer from a bootstrap file, and add chunks
// of lower nodes to layered_chunk_dict for chunk deduplication in the next step.
Tree::from_bootstrap(&rs, &mut blob_mgr.layered_chunk_dict)
.context("failed to build tree from bootstrap")
}
}
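The hardlink bookkeeping in `build_rafs` is easier to follow in isolation: nodes are grouped by `(layer_idx, src_ino, src_dev)`, every member of a group reuses the first member's inode number, and nlink equals the group size; a simplified sketch with plain data rather than the builder's `Tree`/`Node` types:
```
use std::collections::HashMap;

struct SimpleNode {
    ino: u64,
    nlink: u32,
}

fn assign_hardlinks(keys: &[(u16, u64, u64)]) -> Vec<SimpleNode> {
    // (layer_idx, src_ino, src_dev) -> indices of nodes already seen with that key.
    let mut groups: HashMap<(u16, u64, u64), Vec<usize>> = HashMap::new();
    let mut nodes = Vec::new();
    for (idx, key) in keys.iter().enumerate() {
        // Sequential inode numbers for this sketch.
        let mut node = SimpleNode { ino: (idx + 1) as u64, nlink: 1 };
        let members = groups.entry(*key).or_default();
        if let Some(&first) = members.first() {
            // Hardlink: reuse the first inode number and bump nlink for the whole group.
            node.ino = nodes[first].ino;
            let nlink = (members.len() + 1) as u32;
            for &m in members.iter() {
                nodes[m].nlink = nlink;
            }
            node.nlink = nlink;
        }
        members.push(idx);
        nodes.push(node);
    }
    nodes
}
```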

View File

@ -1,94 +0,0 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashSet;
use std::convert::TryFrom;
use anyhow::{bail, Result};
const ERR_UNSUPPORTED_FEATURE: &str = "unsupported feature";
/// Feature flags to control behavior of RAFS filesystem builder.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum Feature {
/// Append a Table Of Content footer to RAFS v6 data blob, to help locate data sections.
BlobToc,
}
impl TryFrom<&str> for Feature {
type Error = anyhow::Error;
fn try_from(f: &str) -> Result<Self> {
match f {
"blob-toc" => Ok(Self::BlobToc),
_ => bail!(
"{} `{}`, please try upgrading to the latest nydus-image",
ERR_UNSUPPORTED_FEATURE,
f,
),
}
}
}
/// A set of enabled feature flags to control behavior of RAFS filesystem builder
#[derive(Clone, Debug)]
pub struct Features(HashSet<Feature>);
impl Default for Features {
fn default() -> Self {
Self::new()
}
}
impl Features {
/// Create a new instance of [Features].
pub fn new() -> Self {
Self(HashSet::new())
}
/// Check whether a feature is enabled or not.
pub fn is_enabled(&self, feature: Feature) -> bool {
self.0.contains(&feature)
}
}
impl TryFrom<&str> for Features {
type Error = anyhow::Error;
fn try_from(features: &str) -> Result<Self> {
let mut list = Features::new();
for feat in features.trim().split(',') {
if !feat.is_empty() {
let feature = Feature::try_from(feat.trim())?;
list.0.insert(feature);
}
}
Ok(list)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_feature() {
assert_eq!(Feature::try_from("blob-toc").unwrap(), Feature::BlobToc);
Feature::try_from("unknown-feature-bit").unwrap_err();
}
#[test]
fn test_features() {
let features = Features::try_from("blob-toc").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc,").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc, ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from(" blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
}
}

View File

@ -1,62 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::Result;
use std::ops::Deref;
use super::node::Node;
use crate::{Overlay, Prefetch, TreeNode};
#[derive(Clone)]
pub struct BlobLayout {}
impl BlobLayout {
pub fn layout_blob_simple(prefetch: &Prefetch) -> Result<(Vec<TreeNode>, usize)> {
let (pre, non_pre) = prefetch.get_file_nodes();
let mut inodes: Vec<TreeNode> = pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let mut non_prefetch_inodes: Vec<TreeNode> = non_pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let prefetch_entries = inodes.len();
inodes.append(&mut non_prefetch_inodes);
Ok((inodes, prefetch_entries))
}
#[inline]
fn should_dump_node(node: &Node) -> bool {
node.overlay == Overlay::UpperAddition || node.overlay == Overlay::UpperModification
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{core::node::NodeInfo, Tree};
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
#[test]
fn test_layout_blob_simple() {
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let mut node1 = Node::new(inode.clone(), NodeInfo::default(), 1);
node1.overlay = Overlay::UpperAddition;
let tree = Tree::new(node1);
let mut prefetch = Prefetch::default();
prefetch.insert(&tree.node, tree.node.borrow().deref());
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap();
assert_eq!(inodes.len(), 1);
assert_eq!(prefetch_entries, 0);
}
}

File diff suppressed because it is too large

View File

@ -1,361 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021-2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Execute file/directory whiteout rules when merging multiple RAFS filesystems
//! according to the OCI or Overlayfs specifications.
use std::ffi::{OsStr, OsString};
use std::fmt::{self, Display, Formatter};
use std::os::unix::ffi::OsStrExt;
use std::str::FromStr;
use anyhow::{anyhow, Error, Result};
use super::node::Node;
/// Prefix for OCI whiteout file.
pub const OCISPEC_WHITEOUT_PREFIX: &str = ".wh.";
/// Prefix for OCI whiteout opaque.
pub const OCISPEC_WHITEOUT_OPAQUE: &str = ".wh..wh..opq";
/// Extended attribute key for Overlayfs whiteout opaque.
pub const OVERLAYFS_WHITEOUT_OPAQUE: &str = "trusted.overlay.opaque";
/// RAFS filesystem overlay specifications.
///
/// When merging multiple RAFS filesystems into one, special rules are needed to white out
/// files/directories in lower/parent filesystems. The whiteout specification defined by the
/// OCI image specification and Linux Overlayfs are widely adopted, so both of them are supported
/// by RAFS filesystem.
///
/// # Overlayfs Whiteout
///
/// In order to support rm and rmdir without changing the lower filesystem, an overlay filesystem
/// needs to record in the upper filesystem that files have been removed. This is done using
/// whiteouts and opaque directories (non-directories are always opaque).
///
/// A whiteout is created as a character device with 0/0 device number. When a whiteout is found
/// in the upper level of a merged directory, any matching name in the lower level is ignored,
/// and the whiteout itself is also hidden.
///
/// A directory is made opaque by setting the xattr “trusted.overlay.opaque” to “y”. Where the upper
/// filesystem contains an opaque directory, any directory in the lower filesystem with the same
/// name is ignored.
///
/// # OCI Image Whiteout
/// - A whiteout file is an empty file with a special filename that signifies a path should be
/// deleted.
/// - A whiteout filename consists of the prefix .wh. plus the basename of the path to be deleted.
/// - As files prefixed with .wh. are special whiteout markers, it is not possible to create a
/// filesystem which has a file or directory with a name beginning with .wh..
/// - Once a whiteout is applied, the whiteout itself MUST also be hidden.
/// - Whiteout files MUST only apply to resources in lower/parent layers.
/// - Files that are present in the same layer as a whiteout file can only be hidden by whiteout
/// files in subsequent layers.
/// - In addition to expressing that a single entry should be removed from a lower layer, layers
/// may remove all of the children using an opaque whiteout entry.
/// - An opaque whiteout entry is a file with the name .wh..wh..opq indicating that all siblings
/// are hidden in the lower layer.
#[derive(Clone, Copy, PartialEq)]
pub enum WhiteoutSpec {
/// Overlay whiteout rules according to the OCI image specification.
///
/// https://github.com/opencontainers/image-spec/blob/master/layer.md#whiteouts
Oci,
/// Overlay whiteout rules according to the Linux Overlayfs specification.
///
/// "whiteouts and opaque directories" in https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
Overlayfs,
/// No whiteout, keep all content from lower/parent filesystems.
None,
}
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec {
fn default() -> Self {
Self::Oci
}
}
impl FromStr for WhiteoutSpec {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.to_lowercase().as_str() {
"oci" => Ok(Self::Oci),
"overlayfs" => Ok(Self::Overlayfs),
"none" => Ok(Self::None),
_ => Err(anyhow!("invalid whiteout spec")),
}
}
}
/// RAFS filesystem overlay operation types.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WhiteoutType {
OciOpaque,
OciRemoval,
OverlayFsOpaque,
OverlayFsRemoval,
}
impl WhiteoutType {
pub fn is_removal(&self) -> bool {
*self == WhiteoutType::OciRemoval || *self == WhiteoutType::OverlayFsRemoval
}
}
/// RAFS filesystem node overlay state.
#[allow(dead_code)]
#[derive(Clone, Debug, PartialEq)]
pub enum Overlay {
Lower,
UpperAddition,
UpperModification,
}
impl Overlay {
pub fn is_lower_layer(&self) -> bool {
self == &Overlay::Lower
}
}
impl Display for Overlay {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
match self {
Overlay::Lower => write!(f, "LOWER"),
Overlay::UpperAddition => write!(f, "ADDED"),
Overlay::UpperModification => write!(f, "MODIFIED"),
}
}
}
impl Node {
/// Check whether the inode is a special overlayfs whiteout file.
pub fn is_overlayfs_whiteout(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs {
return false;
}
self.inode.is_chrdev()
&& nydus_utils::compact::major_dev(self.info.rdev) == 0
&& nydus_utils::compact::minor_dev(self.info.rdev) == 0
}
/// Check whether the inode (directory) is an overlayfs whiteout opaque.
pub fn is_overlayfs_opaque(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs || !self.is_dir() {
return false;
}
// A directory is made opaque by setting the xattr "trusted.overlay.opaque" to "y".
if let Some(v) = self
.info
.xattrs
.get(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE))
{
if let Ok(v) = std::str::from_utf8(v.as_slice()) {
return v == "y";
}
}
false
}
/// Get whiteout type to process the inode.
pub fn whiteout_type(&self, spec: WhiteoutSpec) -> Option<WhiteoutType> {
if self.overlay == Overlay::Lower {
return None;
}
match spec {
WhiteoutSpec::Oci => {
if let Some(name) = self.name().to_str() {
if name == OCISPEC_WHITEOUT_OPAQUE {
return Some(WhiteoutType::OciOpaque);
} else if name.starts_with(OCISPEC_WHITEOUT_PREFIX) {
return Some(WhiteoutType::OciRemoval);
}
}
}
WhiteoutSpec::Overlayfs => {
if self.is_overlayfs_whiteout(spec) {
return Some(WhiteoutType::OverlayFsRemoval);
} else if self.is_overlayfs_opaque(spec) {
return Some(WhiteoutType::OverlayFsOpaque);
}
}
WhiteoutSpec::None => {
return None;
}
}
None
}
/// Get original filename from a whiteout filename.
pub fn origin_name(&self, t: WhiteoutType) -> Option<&OsStr> {
if let Some(name) = self.name().to_str() {
if t == WhiteoutType::OciRemoval {
// the whiteout filename prefixes the basename of the path to be deleted with ".wh.".
return Some(OsStr::from_bytes(
name[OCISPEC_WHITEOUT_PREFIX.len()..].as_bytes(),
));
} else if t == WhiteoutType::OverlayFsRemoval {
// the whiteout file has the same name as the file to be deleted.
return Some(name.as_ref());
}
}
None
}
}
#[cfg(test)]
mod tests {
use nydus_rafs::metadata::{inode::InodeWrapper, layout::v5::RafsV5Inode};
use crate::core::node::NodeInfo;
use super::*;
#[test]
fn test_white_spec_from_str() {
let spec = WhiteoutSpec::default();
assert!(matches!(spec, WhiteoutSpec::Oci));
assert!(WhiteoutSpec::from_str("oci").is_ok());
assert!(WhiteoutSpec::from_str("overlayfs").is_ok());
assert!(WhiteoutSpec::from_str("none").is_ok());
assert!(WhiteoutSpec::from_str("foo").is_err());
}
#[test]
fn test_white_type_removal_check() {
let t1 = WhiteoutType::OciOpaque;
let t2 = WhiteoutType::OciRemoval;
let t3 = WhiteoutType::OverlayFsOpaque;
let t4 = WhiteoutType::OverlayFsRemoval;
assert!(!t1.is_removal());
assert!(t2.is_removal());
assert!(!t3.is_removal());
assert!(t4.is_removal());
}
#[test]
fn test_overlay_low_layer_check() {
let t1 = Overlay::Lower;
let t2 = Overlay::UpperAddition;
let t3 = Overlay::UpperModification;
assert!(t1.is_lower_layer());
assert!(!t2.is_lower_layer());
assert!(!t3.is_lower_layer());
}
#[test]
fn test_node() {
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, NodeInfo::default(), 0);
assert!(!node.is_overlayfs_whiteout(WhiteoutSpec::None));
assert!(node.is_overlayfs_whiteout(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsRemoval
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info: NodeInfo = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsOpaque
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let inode = InodeWrapper::V5(RafsV5Inode::default());
let info = NodeInfo::default();
let mut node = Node::new(inode, info, 0);
assert_eq!(node.whiteout_type(WhiteoutSpec::None), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Oci), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
node.overlay = Overlay::Lower;
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
let name = OCISPEC_WHITEOUT_PREFIX.to_string() + "foo";
info.target_vec.push(name.clone().into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciRemoval
);
assert_eq!(node.origin_name(WhiteoutType::OciRemoval).unwrap(), "foo");
assert_eq!(node.origin_name(WhiteoutType::OciOpaque), None);
assert_eq!(
node.origin_name(WhiteoutType::OverlayFsRemoval).unwrap(),
OsStr::new(&name)
);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
info.target_vec.push(OCISPEC_WHITEOUT_OPAQUE.into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciOpaque
);
}
}
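
A hedged sketch of how a merger might consume the whiteout helpers above: a removal entry such as `.wh.foo` yields the original name `foo` to drop from the lower layer, while an opaque entry clears the whole lower directory (the `apply_whiteout` helper is hypothetical and assumes `Node`, `WhiteoutSpec` and `WhiteoutType` are in scope):

```
fn apply_whiteout(node: &Node, spec: WhiteoutSpec) {
    match node.whiteout_type(spec) {
        Some(t) if t.is_removal() => {
            // For OCI removal, origin_name() strips the ".wh." prefix.
            if let Some(name) = node.origin_name(t) {
                println!("drop lower entry named {:?}", name);
            }
        }
        Some(_) => println!("opaque entry: clear all children of the lower directory"),
        None => println!("regular addition/modification, keep the node"),
    }
}
```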


@@ -1,391 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::path::PathBuf;
use std::str::FromStr;
use anyhow::{anyhow, Context, Error, Result};
use indexmap::IndexMap;
use nydus_rafs::metadata::layout::v5::RafsV5PrefetchTable;
use nydus_rafs::metadata::layout::v6::{calculate_nid, RafsV6PrefetchTable};
use super::node::Node;
use crate::core::tree::TreeNode;
/// Filesystem data prefetch policy.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum PrefetchPolicy {
None,
/// Prefetch will be issued from the Fs layer, which leverages inode/chunkinfo to prefetch data
/// from the blob no matter where it resides (OSS/localfs). Basically, it is willing to cache the
/// data into the blobcache (if one exists). It's more nimble. With this policy applied, the image
/// builder currently puts the prefetch files' data into a continuous region within the blob, which
/// behaves very similarly to the `Blob` policy.
Fs,
/// Prefetch will be issued directly from backend/blob layer
Blob,
}
impl Default for PrefetchPolicy {
fn default() -> Self {
Self::None
}
}
impl FromStr for PrefetchPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"none" => Ok(Self::None),
"fs" => Ok(Self::Fs),
"blob" => Ok(Self::Blob),
_ => Err(anyhow!("invalid prefetch policy")),
}
}
}
/// Gather prefetch patterns from STDIN line by line.
///
/// Input format:
/// printf "/relative/path/to/rootfs/1\n/relative/path/to/rootfs/2"
///
/// It does not guarantee that a specified path exists in the local filesystem, because the path
/// may exist in a parent image/layer.
fn get_patterns() -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let stdin = std::io::stdin();
let mut patterns = Vec::new();
loop {
let mut file = String::new();
let size = stdin
.read_line(&mut file)
.context("failed to read prefetch pattern")?;
if size == 0 {
return generate_patterns(patterns);
}
patterns.push(file);
}
}
fn generate_patterns(input: Vec<String>) -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let mut patterns = IndexMap::new();
for file in &input {
let file_trimmed: PathBuf = file.trim().into();
// Sanity check for the list format.
if !file_trimmed.is_absolute() {
warn!(
"Illegal file path {} specified, should be absolute path",
file
);
continue;
}
let mut current_path = file_trimmed.clone();
let mut skip = patterns.contains_key(&current_path);
while !skip && current_path.pop() {
if patterns.contains_key(&current_path) {
skip = true;
break;
}
}
if skip {
warn!(
"prefetch pattern {} is covered by previous pattern and thus omitted",
file
);
} else {
debug!(
"prefetch pattern: {}, trimmed file name {:?}",
file, file_trimmed
);
patterns.insert(file_trimmed, None);
}
}
Ok(patterns)
}
/// Manage filesystem data prefetch configuration and state for builder.
#[derive(Default, Clone)]
pub struct Prefetch {
pub policy: PrefetchPolicy,
pub disabled: bool,
// Patterns to generate prefetch inode array, which will be put into the prefetch array
// in the RAFS bootstrap. It may access directory or file inodes.
patterns: IndexMap<PathBuf, Option<TreeNode>>,
// File list to help optimize the layout of data blobs.
// Files from this list may be put at the head of the data blob for better prefetch performance.
// The index of the matched prefetch pattern is stored in the `usize`,
// which helps to sort the prefetch files in the final layout.
// It only stores regular files.
files_prefetch: Vec<(TreeNode, usize)>,
// It stores all non-prefetch files that are not stored in `files_prefetch`,
// including regular files, dirs, symlinks, etc.,
// in the same order as the BFS traversal of the file tree.
files_non_prefetch: Vec<TreeNode>,
}
impl Prefetch {
/// Create a new instance of [Prefetch].
pub fn new(policy: PrefetchPolicy) -> Result<Self> {
let patterns = if policy != PrefetchPolicy::None {
get_patterns().context("failed to get prefetch patterns")?
} else {
IndexMap::new()
};
Ok(Self {
policy,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10000),
files_non_prefetch: Vec::with_capacity(10000),
})
}
/// Insert the node into the prefetch list if it matches a prefetch rule, recording the index
/// of the matched pattern; otherwise insert it into the non-prefetch list.
pub fn insert(&mut self, obj: &TreeNode, node: &Node) {
// The newly created root inode of this RAFS has zero size.
if self.policy == PrefetchPolicy::None
|| self.disabled
|| (node.inode.is_reg() && node.inode.size() == 0)
{
self.files_non_prefetch.push(obj.clone());
return;
}
let mut path = node.target().clone();
let mut exact_match = true;
loop {
if let Some((idx, _, v)) = self.patterns.get_full_mut(&path) {
if exact_match {
*v = Some(obj.clone());
}
if node.is_reg() {
self.files_prefetch.push((obj.clone(), idx));
} else {
self.files_non_prefetch.push(obj.clone());
}
return;
}
// If no exact match, try to match parent dir until root.
if !path.pop() {
self.files_non_prefetch.push(obj.clone());
return;
}
exact_match = false;
}
}
/// Get node Vector of files in the prefetch list and non-prefetch list.
/// The order of prefetch files is the same as the order of prefetch patterns.
/// The order of non-prefetch files is the same as the order of BFS traversal of file tree.
pub fn get_file_nodes(&self) -> (Vec<TreeNode>, Vec<TreeNode>) {
let mut p_files = self.files_prefetch.clone();
p_files.sort_by_key(|k| k.1);
let p_files = p_files.into_iter().map(|(s, _)| s).collect();
(p_files, self.files_non_prefetch.clone())
}
/// Get the number of `valid` prefetch rules.
pub fn fs_prefetch_rule_count(&self) -> u32 {
if self.policy == PrefetchPolicy::Fs {
self.patterns.values().filter(|v| v.is_some()).count() as u32
} else {
0
}
}
/// Generate filesystem layer prefetch list for RAFS v5.
pub fn get_v5_prefetch_table(&mut self) -> Option<RafsV5PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV5PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
assert!(node.inode.ino() < u32::MAX as u64);
prefetch_table.add_entry(node.inode.ino() as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Generate filesystem layer prefetch list for RAFS v6.
pub fn get_v6_prefetch_table(&mut self, meta_addr: u64) -> Option<RafsV6PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV6PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
let ino = node.inode.ino();
debug_assert!(ino > 0);
let nid = calculate_nid(node.v6_offset, meta_addr);
// A 32-bit nid can address a 128GB bootstrap, which is large enough, so there is no need
// to worry about the cast here.
assert!(nid < u32::MAX as u64);
trace!(
"v6 prefetch table: map node index {} to offset {} nid {} path {:?} name {:?}",
ino,
node.v6_offset,
nid,
node.path(),
node.name()
);
prefetch_table.add_entry(nid as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Disable filesystem data prefetch.
pub fn disable(&mut self) {
self.disabled = true;
}
/// Reset to initialization state.
pub fn clear(&mut self) {
self.disabled = false;
self.patterns.clear();
self.files_prefetch.clear();
self.files_non_prefetch.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::node::NodeInfo;
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
use std::cell::RefCell;
#[test]
fn test_generate_pattern() {
let input = vec![
"/a/b".to_string(),
"/a/b/c".to_string(),
"/a/b/d".to_string(),
"/a/b/d/e".to_string(),
"/f".to_string(),
"/h/i".to_string(),
];
let patterns = generate_patterns(input).unwrap();
assert_eq!(patterns.len(), 3);
assert!(patterns.contains_key(&PathBuf::from("/a/b")));
assert!(patterns.contains_key(&PathBuf::from("/f")));
assert!(patterns.contains_key(&PathBuf::from("/h/i")));
assert!(!patterns.contains_key(&PathBuf::from("/")));
assert!(!patterns.contains_key(&PathBuf::from("/a")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/c")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d/e")));
assert!(!patterns.contains_key(&PathBuf::from("/k")));
}
#[test]
fn test_prefetch_policy() {
let policy = PrefetchPolicy::from_str("fs").unwrap();
assert_eq!(policy, PrefetchPolicy::Fs);
let policy = PrefetchPolicy::from_str("blob").unwrap();
assert_eq!(policy, PrefetchPolicy::Blob);
let policy = PrefetchPolicy::from_str("none").unwrap();
assert_eq!(policy, PrefetchPolicy::None);
PrefetchPolicy::from_str("").unwrap_err();
PrefetchPolicy::from_str("invalid").unwrap_err();
}
#[test]
fn test_prefetch() {
let input = vec![
"/a/b".to_string(),
"/f".to_string(),
"/h/i".to_string(),
"/k".to_string(),
];
let patterns = generate_patterns(input).unwrap();
let mut prefetch = Prefetch {
policy: PrefetchPolicy::Fs,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10),
files_non_prefetch: Vec::with_capacity(10),
};
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let info = NodeInfo::default();
let mut info1 = info.clone();
info1.target = PathBuf::from("/f");
let node1 = Node::new(inode.clone(), info1, 1);
let node1 = TreeNode::new(RefCell::from(node1));
prefetch.insert(&node1, &node1.borrow());
let inode2 = inode.clone();
let mut info2 = info.clone();
info2.target = PathBuf::from("/a/b");
let node2 = Node::new(inode2, info2, 1);
let node2 = TreeNode::new(RefCell::from(node2));
prefetch.insert(&node2, &node2.borrow());
let inode3 = inode.clone();
let mut info3 = info.clone();
info3.target = PathBuf::from("/h/i/j");
let node3 = Node::new(inode3, info3, 1);
let node3 = TreeNode::new(RefCell::from(node3));
prefetch.insert(&node3, &node3.borrow());
let inode4 = inode.clone();
let mut info4 = info.clone();
info4.target = PathBuf::from("/z");
let node4 = Node::new(inode4, info4, 1);
let node4 = TreeNode::new(RefCell::from(node4));
prefetch.insert(&node4, &node4.borrow());
let inode5 = inode.clone();
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_size(0);
let mut info5 = info;
info5.target = PathBuf::from("/a/b/d");
let node5 = Node::new(inode5, info5, 1);
let node5 = TreeNode::new(RefCell::from(node5));
prefetch.insert(&node5, &node5.borrow());
// node1, node2
assert_eq!(prefetch.fs_prefetch_rule_count(), 2);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 4);
assert_eq!(non_pre.len(), 1);
let pre_str: Vec<String> = pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(pre_str, vec!["/a/b", "/a/b/d", "/f", "/h/i/j"]);
let non_pre_str: Vec<String> = non_pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(non_pre_str, vec!["/z"]);
prefetch.clear();
assert_eq!(prefetch.fs_prefetch_rule_count(), 0);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 0);
assert_eq!(non_pre.len(), 0);
}
}
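
To make the pattern-reduction rules above concrete, here is a minimal sketch of the expected behavior. Note that `generate_patterns` is module-private, so this only compiles inside the module and is illustrative only:

```
fn pattern_reduction_example() -> anyhow::Result<()> {
    // "/usr/bin/bash" is covered by "/usr/bin" and is therefore omitted;
    // the relative path "etc" is rejected with a warning.
    let input = vec![
        "/usr/bin".to_string(),
        "/usr/bin/bash".to_string(),
        "etc".to_string(),
    ];
    let patterns = generate_patterns(input)?;
    assert_eq!(patterns.len(), 1);
    assert!(patterns.contains_key(&std::path::PathBuf::from("/usr/bin")));
    Ok(())
}
```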


@@ -1,533 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! An in-memory tree structure to maintain information for filesystem metadata.
//!
//! Steps to build the first layer for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Traverse the upper tree (FileSystemTree) to dump bootstrap and data blobs.
//!
//! Steps to build the second and following on layers for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Load the lower tree (MetadataTree) from a metadata blob.
//! - Merge the final tree (OverlayTree) by applying the upper tree (FileSystemTree) to the
//! lower tree (MetadataTree).
//! - Traverse the merged tree (OverlayTree) to dump bootstrap and data blobs.
use std::cell::{RefCell, RefMut};
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use anyhow::{bail, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::{bytes_to_os_str, RafsXAttrs};
use nydus_rafs::metadata::{Inode, RafsInodeExt, RafsSuper};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer};
use super::node::{ChunkSource, Node, NodeChunk, NodeInfo};
use super::overlay::{Overlay, WhiteoutType};
use crate::core::overlay::OVERLAYFS_WHITEOUT_OPAQUE;
use crate::{BuildContext, ChunkDict};
/// Type alias for tree internal node.
pub type TreeNode = Rc<RefCell<Node>>;
/// An in-memory tree structure to maintain information and topology of filesystem nodes.
#[derive(Clone)]
pub struct Tree {
/// Filesystem node.
pub node: TreeNode,
/// Cached base name.
name: Vec<u8>,
/// Children tree nodes.
pub children: Vec<Tree>,
}
impl Tree {
/// Create a new instance of `Tree` from a filesystem node.
pub fn new(node: Node) -> Self {
let name = node.name().as_bytes().to_vec();
Tree {
node: Rc::new(RefCell::new(node)),
name,
children: Vec::new(),
}
}
/// Load a `Tree` from a bootstrap file, and optionally caches chunk information.
pub fn from_bootstrap<T: ChunkDict>(rs: &RafsSuper, chunk_dict: &mut T) -> Result<Self> {
let tree_builder = MetadataTreeBuilder::new(rs);
let root_ino = rs.superblock.root_ino();
let root_inode = rs.get_extended_inode(root_ino, true)?;
let root_node = MetadataTreeBuilder::parse_node(rs, root_inode, PathBuf::from("/"))?;
let mut tree = Tree::new(root_node);
tree.children = timing_tracer!(
{ tree_builder.load_children(root_ino, Option::<PathBuf>::None, chunk_dict, true,) },
"load_tree_from_bootstrap"
)?;
Ok(tree)
}
/// Get name of the tree node.
pub fn name(&self) -> &[u8] {
&self.name
}
/// Set `Node` associated with the tree node.
pub fn set_node(&mut self, node: Node) {
self.node.replace(node);
}
/// Get mutably borrowed value to access the associated `Node` object.
pub fn borrow_mut_node(&self) -> RefMut<'_, Node> {
self.node.as_ref().borrow_mut()
}
/// Walk all nodes in DFS mode.
pub fn walk_dfs<F1, F2>(&self, pre: &mut F1, post: &mut F2) -> Result<()>
where
F1: FnMut(&Tree) -> Result<()>,
F2: FnMut(&Tree) -> Result<()>,
{
pre(self)?;
for child in &self.children {
child.walk_dfs(pre, post)?;
}
post(self)?;
Ok(())
}
/// Walk all nodes in pre DFS mode.
pub fn walk_dfs_pre<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(cb, &mut |_t| Ok(()))
}
/// Walk all nodes in post DFS mode.
pub fn walk_dfs_post<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(&mut |_t| Ok(()), cb)
}
/// Walk the tree in BFS mode.
pub fn walk_bfs<F>(&self, handle_self: bool, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
if handle_self {
cb(self)?;
}
let mut dirs = Vec::with_capacity(32);
for child in &self.children {
cb(child)?;
if child.borrow_mut_node().is_dir() {
dirs.push(child);
}
}
for dir in dirs {
dir.walk_bfs(false, cb)?;
}
Ok(())
}
/// Insert a new child node into the tree.
pub fn insert_child(&mut self, child: Tree) {
if let Err(idx) = self
.children
.binary_search_by_key(&&child.name, |n| &n.name)
{
self.children.insert(idx, child);
}
}
/// Get index of child node with specified `name`.
pub fn get_child_idx(&self, name: &[u8]) -> Option<usize> {
self.children.binary_search_by_key(&name, |n| &n.name).ok()
}
/// Get the tree node corresponding to the path.
pub fn get_node(&self, path: &Path) -> Option<&Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
for name in &target_vec[1..] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &tree.children[idx],
None => return None,
}
}
Some(tree)
}
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
/// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
assert_eq!(self.name, "/".as_bytes());
assert_eq!(upper.name, "/".as_bytes());
// Handle the root node.
upper.borrow_mut_node().overlay = Overlay::UpperModification;
self.node = upper.node.clone();
self.merge_children(ctx, &upper)?;
lazy_drop(upper);
Ok(())
}
fn merge_children(&mut self, ctx: &BuildContext, upper: &Tree) -> Result<()> {
// Handle whiteout nodes in the first round, and handle other nodes in the second round.
let mut modified = Vec::with_capacity(upper.children.len());
for u in upper.children.iter() {
let mut u_node = u.borrow_mut_node();
match u_node.whiteout_type(ctx.whiteout_spec) {
Some(WhiteoutType::OciRemoval) => {
if let Some(origin_name) = u_node.origin_name(WhiteoutType::OciRemoval) {
if let Some(idx) = self.get_child_idx(origin_name.as_bytes()) {
self.children.remove(idx);
}
}
}
Some(WhiteoutType::OciOpaque) => {
self.children.clear();
}
Some(WhiteoutType::OverlayFsRemoval) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children.remove(idx);
}
}
Some(WhiteoutType::OverlayFsOpaque) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children[idx].children.clear();
}
u_node.remove_xattr(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE));
modified.push(u);
}
None => modified.push(u),
}
}
let mut dirs = Vec::new();
for u in modified {
let mut u_node = u.borrow_mut_node();
if let Some(idx) = self.get_child_idx(&u.name) {
u_node.overlay = Overlay::UpperModification;
self.children[idx].node = u.node.clone();
} else {
u_node.overlay = Overlay::UpperAddition;
self.insert_child(Tree {
node: u.node.clone(),
name: u.name.clone(),
children: vec![],
});
}
if u_node.is_dir() {
dirs.push(u);
}
}
for dir in dirs {
if let Some(idx) = self.get_child_idx(&dir.name) {
self.children[idx].merge_children(ctx, dir)?;
} else {
bail!("builder: can not find directory in merged tree");
}
}
Ok(())
}
}
pub struct MetadataTreeBuilder<'a> {
rs: &'a RafsSuper,
}
impl<'a> MetadataTreeBuilder<'a> {
fn new(rs: &'a RafsSuper) -> Self {
Self { rs }
}
/// Build node tree by loading bootstrap file
fn load_children<T: ChunkDict, P: AsRef<Path>>(
&self,
ino: Inode,
parent: Option<P>,
chunk_dict: &mut T,
validate_digest: bool,
) -> Result<Vec<Tree>> {
let inode = self.rs.get_extended_inode(ino, validate_digest)?;
if !inode.is_dir() {
return Ok(Vec::new());
}
let parent_path = if let Some(parent) = parent {
parent.as_ref().join(inode.name())
} else {
PathBuf::from("/")
};
let blobs = self.rs.superblock.get_blob_infos();
let child_count = inode.get_child_count();
let mut children = Vec::with_capacity(child_count as usize);
for idx in 0..child_count {
let child = inode.get_child_by_index(idx)?;
let child_path = parent_path.join(child.name());
let child = Self::parse_node(self.rs, child.clone(), child_path)?;
if child.is_reg() {
for chunk in &child.chunks {
let blob_idx = chunk.inner.blob_index();
if let Some(blob) = blobs.get(blob_idx as usize) {
chunk_dict.add_chunk(chunk.inner.clone(), blob.digester());
}
}
}
let child = Tree::new(child);
children.push(child);
}
children.sort_unstable_by(|a, b| a.name.cmp(&b.name));
for child in children.iter_mut() {
let child_node = child.borrow_mut_node();
if child_node.is_dir() {
let child_ino = child_node.inode.ino();
drop(child_node);
child.children =
self.load_children(child_ino, Some(&parent_path), chunk_dict, validate_digest)?;
}
}
Ok(children)
}
/// Convert a `RafsInode` object to an in-memory `Node` object.
pub fn parse_node(rs: &RafsSuper, inode: Arc<dyn RafsInodeExt>, path: PathBuf) -> Result<Node> {
let chunks = if inode.is_reg() {
let chunk_count = inode.get_chunk_count();
let mut chunks = Vec::with_capacity(chunk_count as usize);
for i in 0..chunk_count {
let cki = inode.get_chunk_info(i)?;
chunks.push(NodeChunk {
source: ChunkSource::Parent,
inner: Arc::new(ChunkWrapper::from_chunk_info(cki)),
});
}
chunks
} else {
Vec::new()
};
let symlink = if inode.is_symlink() {
Some(inode.get_symlink()?)
} else {
None
};
let mut xattrs = RafsXAttrs::new();
for name in inode.get_xattrs()? {
let name = bytes_to_os_str(&name);
let value = inode.get_xattr(name)?;
xattrs.add(name.to_os_string(), value.unwrap_or_default())?;
}
// Nodes loaded from bootstrap will only be used as `Overlay::Lower`, so make `dev` invalid
// to avoid breaking the hardlink detection logic.
let src_dev = u64::MAX;
let rdev = inode.rdev() as u64;
let inode = InodeWrapper::from_inode_info(inode.clone());
let source = PathBuf::from("/");
let target = Node::generate_target(&path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: rs.meta.explicit_uidgid(),
src_ino: inode.ino(),
src_dev,
rdev,
path,
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
Ok(Node {
info: Arc::new(info),
index: 0,
layer_idx: 0,
overlay: Overlay::Lower,
inode,
chunks,
v6_offset: 0,
v6_dirents: Vec::new(),
v6_datalayout: 0,
v6_compact_inode: false,
v6_dirents_offset: 0,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::RAFS_DEFAULT_CHUNK_SIZE;
use vmm_sys_util::tempdir::TempDir;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_set_lock_node() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
assert_eq!(tree.name, tmpfile.as_path().file_name().unwrap().as_bytes());
let node1 = tree.borrow_mut_node();
drop(node1);
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
tree.set_node(node);
let node2 = tree.borrow_mut_node();
assert_eq!(node2.name(), tmpfile.as_path().file_name().unwrap());
}
#[test]
fn test_walk_tree() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
let tmpfile2 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree2 = Tree::new(node);
tree.insert_child(tree2);
let tmpfile3 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile3.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree3 = Tree::new(node);
tree.insert_child(tree3);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 3);
let mut count = 0;
tree.walk_bfs(false, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 2);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
bail!("test")
})
.unwrap_err();
assert_eq!(count, 1);
let idx = tree
.get_child_idx(tmpfile2.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
let idx = tree
.get_child_idx(tmpfile3.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
}
}
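
As a usage illustration for the walkers defined above, a hedged sketch that counts directory nodes with the BFS walker (the `count_dirs` helper is hypothetical and assumes a `Tree` already built from a directory or bootstrap):

```
fn count_dirs(tree: &Tree) -> anyhow::Result<usize> {
    let mut dirs = 0usize;
    tree.walk_bfs(true, &mut |t| {
        // borrow_mut_node() gives temporary access to the wrapped Node.
        if t.borrow_mut_node().is_dir() {
            dirs += 1;
        }
        Ok(())
    })?;
    Ok(dirs)
}
```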


@@ -1,266 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::convert::TryFrom;
use std::mem::size_of;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::v5::{
RafsV5BlobTable, RafsV5ChunkInfo, RafsV5InodeTable, RafsV5InodeWrapper, RafsV5SuperBlock,
RafsV5XAttrsTable,
};
use nydus_rafs::metadata::{RafsStore, RafsVersion};
use nydus_rafs::RafsIoWrite;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{div_round_up, root_tracer, timing_tracer, try_round_up_4k};
use super::node::Node;
use crate::{Bootstrap, BootstrapContext, BuildContext, Tree};
// Filesystems may have different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable build, instead of reusing the value
// provided by the source filesystem, we use our own algorithm to calculate `i_size` for directory
// entries for a stable `i_size`.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we don't
// have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5 directory
// inode.
const RAFS_V5_VIRTUAL_ENTRY_SIZE: u64 = 8;
impl Node {
/// Dump RAFS v5 inode metadata to meta blob.
pub fn dump_bootstrap_v5(
&self,
ctx: &mut BuildContext,
f_bootstrap: &mut dyn RafsIoWrite,
) -> Result<()> {
trace!("[{}]\t{}", self.overlay, self);
if let InodeWrapper::V5(raw_inode) = &self.inode {
// Dump inode info
let name = self.name();
let inode = RafsV5InodeWrapper {
name,
symlink: self.info.symlink.as_deref(),
inode: raw_inode,
};
inode
.store(f_bootstrap)
.context("failed to dump inode to bootstrap")?;
// Dump inode xattr
if !self.info.xattrs.is_empty() {
self.info
.xattrs
.store_v5(f_bootstrap)
.context("failed to dump xattr to bootstrap")?;
ctx.has_xattr = true;
}
// Dump chunk info
if self.is_reg() && self.inode.child_count() as usize != self.chunks.len() {
bail!("invalid chunk count {}: {}", self.chunks.len(), self);
}
for chunk in &self.chunks {
chunk
.inner
.store(f_bootstrap)
.context("failed to dump chunk info to bootstrap")?;
trace!("\t\tchunk: {} compressor {}", chunk, ctx.compressor,);
}
Ok(())
} else {
bail!("dump_bootstrap_v5() encounters non-v5-inode");
}
}
// Filesystems may have different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable build, instead of reusing the value
// provided by the source filesystem, we use our own algorithm to calculate `i_size` for
// directory entries for a stable `i_size`.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we
// don't have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5
// directory inode.
pub fn v5_set_dir_size(&mut self, fs_version: RafsVersion, children: &[Tree]) {
if !self.is_dir() || !fs_version.is_v5() {
return;
}
let mut d_size = 0u64;
for child in children.iter() {
d_size += child.borrow_mut_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
}
if d_size == 0 {
self.inode.set_size(4096);
} else {
// Safe to unwrap() because we have u32 for child count.
self.inode.set_size(try_round_up_4k(d_size).unwrap());
}
self.v5_set_inode_blocks();
}
/// Calculate and set `i_blocks` for inode.
///
/// In order to support repeatable build, we can't reuse `i_blocks` from source filesystems,
/// so let's calculate it by ourselves for a stable `i_blocks`.
///
/// A normal filesystem includes the space occupied by xattrs in the directory size;
/// let's follow that behavior.
pub fn v5_set_inode_blocks(&mut self) {
// Set inode blocks for RAFS v5 inode, v6 will calculate it at runtime.
if let InodeWrapper::V5(_) = self.inode {
self.inode.set_blocks(div_round_up(
self.inode.size() + self.info.xattrs.aligned_size_v5() as u64,
512,
));
}
}
}
impl Bootstrap {
/// Calculate inode digest for directory.
fn v5_digest_node(&self, ctx: &mut BuildContext, tree: &Tree) {
let mut node = tree.borrow_mut_node();
// We have set the digest for non-directory inodes in the previous dump_blob workflow.
if node.is_dir() {
let mut inode_hasher = RafsDigest::hasher(ctx.digester);
for child in tree.children.iter() {
let child = child.borrow_mut_node();
inode_hasher.digest_update(child.inode.digest().as_ref());
}
node.inode.set_digest(inode_hasher.digest_finalize());
}
}
/// Dump the RAFS v5 bootstrap: superblock, inode table, prefetch table, blob tables, inodes and chunks.
pub(crate) fn v5_dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsV5BlobTable,
) -> Result<()> {
// Set inode digest, use reverse iteration order to reduce repeated digest calculations.
self.tree.walk_dfs_post(&mut |t| {
self.v5_digest_node(ctx, t);
Ok(())
})?;
// Set inode table
let super_block_size = size_of::<RafsV5SuperBlock>();
let inode_table_entries = bootstrap_ctx.get_next_ino() as u32 - 1;
let mut inode_table = RafsV5InodeTable::new(inode_table_entries as usize);
let inode_table_size = inode_table.size();
// Set prefetch table
let (prefetch_table_size, prefetch_table_entries) =
if let Some(prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
(prefetch_table.size(), prefetch_table.len() as u32)
} else {
(0, 0u32)
};
// Set blob table, use sha256 string (length 64) as blob id if not specified
let prefetch_table_offset = super_block_size + inode_table_size;
let blob_table_offset = prefetch_table_offset + prefetch_table_size;
let blob_table_size = blob_table.size();
let extended_blob_table_offset = blob_table_offset + blob_table_size;
let extended_blob_table_size = blob_table.extended.size();
let extended_blob_table_entries = blob_table.extended.entries();
// Set super block
let mut super_block = RafsV5SuperBlock::new();
let inodes_count = bootstrap_ctx.inode_map.len() as u64;
super_block.set_inodes_count(inodes_count);
super_block.set_inode_table_offset(super_block_size as u64);
super_block.set_inode_table_entries(inode_table_entries);
super_block.set_blob_table_offset(blob_table_offset as u64);
super_block.set_blob_table_size(blob_table_size as u32);
super_block.set_extended_blob_table_offset(extended_blob_table_offset as u64);
super_block.set_extended_blob_table_entries(u32::try_from(extended_blob_table_entries)?);
super_block.set_prefetch_table_offset(prefetch_table_offset as u64);
super_block.set_prefetch_table_entries(prefetch_table_entries);
super_block.set_compressor(ctx.compressor);
super_block.set_digester(ctx.digester);
super_block.set_chunk_size(ctx.chunk_size);
if ctx.explicit_uidgid {
super_block.set_explicit_uidgid();
}
// Set inodes and chunks
let mut inode_offset = (super_block_size
+ inode_table_size
+ prefetch_table_size
+ blob_table_size
+ extended_blob_table_size) as u32;
let mut has_xattr = false;
self.tree.walk_dfs_pre(&mut |t| {
let node = t.borrow_mut_node();
inode_table.set(node.index, inode_offset)?;
// Add inode size
inode_offset += node.inode.inode_size() as u32;
if node.inode.has_xattr() {
has_xattr = true;
if !node.info.xattrs.is_empty() {
inode_offset += (size_of::<RafsV5XAttrsTable>()
+ node.info.xattrs.aligned_size_v5())
as u32;
}
}
// Add chunks size
if node.is_reg() {
inode_offset += node.inode.child_count() * size_of::<RafsV5ChunkInfo>() as u32;
}
Ok(())
})?;
if has_xattr {
super_block.set_has_xattr();
}
// Dump super block
super_block
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store superblock")?;
// Dump inode table
inode_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store inode table")?;
// Dump prefetch table
if let Some(mut prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
prefetch_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store prefetch table")?;
}
// Dump blob table
blob_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store blob table")?;
// Dump extended blob table
blob_table
.store_extended(bootstrap_ctx.writer.as_mut())
.context("failed to store extended blob table")?;
// Dump inodes and chunks
timing_tracer!(
{
self.tree.walk_dfs_pre(&mut |t| {
t.borrow_mut_node()
.dump_bootstrap_v5(ctx, bootstrap_ctx.writer.as_mut())
.context("failed to dump bootstrap")
})
},
"dump_bootstrap"
)?;
Ok(())
}
}
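
The offset arithmetic in `v5_dump()` above lays the metadata regions out back to back; a rough sketch of that ordering follows, with purely illustrative sizes (none of these constants are real, the actual values come from the RAFS v5 structs):

```
fn v5_layout_sketch() {
    // Illustrative sizes only.
    let super_block_size = 8192usize;
    let inode_table_size = 4096usize;
    let prefetch_table_size = 64usize;
    let blob_table_size = 256usize;

    // Same ordering as v5_dump(): superblock, inode table, prefetch table,
    // blob table, extended blob table, then inodes and chunk info.
    let prefetch_table_offset = super_block_size + inode_table_size;
    let blob_table_offset = prefetch_table_offset + prefetch_table_size;
    let extended_blob_table_offset = blob_table_offset + blob_table_size;
    println!(
        "prefetch table @ {}, blob table @ {}, extended blob table @ {}",
        prefetch_table_offset, blob_table_offset, extended_blob_table_offset
    );
}
```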

File diff suppressed because it is too large


@@ -1,267 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fs;
use std::fs::DirEntry;
use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
use crate::core::context::{Artifact, NoopArtifactWriter};
use crate::core::prefetch;
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
struct FilesystemTreeBuilder {}
impl FilesystemTreeBuilder {
fn new() -> Self {
Self {}
}
#[allow(clippy::only_used_in_recursion)]
/// Walk directory to build node tree by DFS
fn load_children(
&self,
ctx: &mut BuildContext,
parent: &TreeNode,
layer_idx: u16,
) -> Result<(Vec<Tree>, Vec<Tree>)> {
let mut trees = Vec::new();
let mut external_trees = Vec::new();
let parent = parent.borrow();
if !parent.is_dir() {
return Ok((trees.clone(), external_trees));
}
let children = fs::read_dir(parent.path())
.with_context(|| format!("failed to read dir {:?}", parent.path()))?;
let children = children.collect::<Result<Vec<DirEntry>, std::io::Error>>()?;
event_tracer!("load_from_directory", +children.len());
for child in children {
let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
file_size,
parent.info.explicit_uidgid,
true,
)
.with_context(|| format!("failed to create node {:?}", path))?;
child.layer_idx = layer_idx;
// As per the OCI spec, whiteout files should not be present within the final image
// or filesystem; they only exist in layers.
if layer_idx == 0
&& child.whiteout_type(ctx.whiteout_spec).is_some()
&& !child.is_overlayfs_opaque(ctx.whiteout_spec)
{
continue;
}
let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
let (child_children, external_children) =
self.load_children(ctx, &child.node, layer_idx)?;
child.children = child_children;
external_child.children = external_children;
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
external_child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_child.children);
if ctx.attributes.is_external(&target) {
external_trees.push(external_child);
} else {
// TODO: need to implement type=ignore for nydus attributes;
// ignore the tree as a workaround for now.
trees.push(child.clone());
if ctx.attributes.is_prefix_external(target) {
external_trees.push(external_child);
}
};
}
trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
Ok((trees, external_trees))
}
}
#[derive(Default)]
pub struct DirectoryBuilder {}
impl DirectoryBuilder {
pub fn new() -> Self {
Self {}
}
/// Build node tree from a filesystem directory
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
let node = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
ctx.source_path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
0,
ctx.explicit_uidgid,
true,
)?;
let mut tree = Tree::new(node.clone());
let mut external_tree = Tree::new(node);
let tree_builder = FilesystemTreeBuilder::new();
let (tree_children, external_tree_children) = timing_tracer!(
{ tree_builder.load_children(ctx, &tree.node, layer_idx) },
"load_from_directory"
)?;
tree.children = tree_children;
external_tree.children = external_tree_children;
tree.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &tree.children);
external_tree
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
Ok((tree, external_tree))
}
fn one_build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
blob_writer: &mut Box<dyn Artifact>,
tree: Tree,
) -> Result<BuildOutput> {
// Build bootstrap
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build for tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build for external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
}
}
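
A hedged sketch of driving the directory builder above through the `Builder` trait; constructing the context and managers is elided because it depends on CLI configuration, and the `build_from_dir` helper name is hypothetical:

```
fn build_from_dir(
    ctx: &mut BuildContext,
    bootstrap_mgr: &mut BootstrapManager,
    blob_mgr: &mut BlobManager,
) -> anyhow::Result<BuildOutput> {
    let mut builder = DirectoryBuilder::new();
    // build() scans ctx.source_path, dumps the data blob(s) and bootstrap,
    // and returns the produced blob list and bootstrap path.
    builder.build(ctx, bootstrap_mgr, blob_mgr)
}
```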


@@ -1,411 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Builder to create RAFS filesystems from directories and tarballs.
#[macro_use]
extern crate log;
use crate::core::context::Artifact;
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::meta::toc;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{compress, digest, root_tracer, timing_tracer};
use sha2::Digest;
use self::core::node::{Node, NodeInfo};
pub use self::chunkdict_generator::ChunkdictBlobInfo;
pub use self::chunkdict_generator::ChunkdictChunkInfo;
pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor;
pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{
ArtifactStorage, ArtifactWriter, BlobCacheGenerator, BlobContext, BlobManager,
BootstrapContext, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
pub use self::core::feature::{Feature, Features};
pub use self::core::node::{ChunkSource, NodeChunk};
pub use self::core::overlay::{Overlay, WhiteoutSpec};
pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger;
pub use self::optimize_prefetch::update_ctx_from_bootstrap;
pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder;
pub mod attributes;
mod chunkdict_generator;
mod compact;
mod core;
mod directory;
mod merge;
mod optimize_prefetch;
mod stargz;
mod tarball;
/// Trait to generate a RAFS filesystem from the source.
pub trait Builder {
fn build(
&mut self,
build_ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput>;
}
fn build_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
blob_mgr: &mut BlobManager,
mut tree: Tree,
) -> Result<Bootstrap> {
// For multi-layer build, merge the upper layer and lower layer with overlay whiteout applied.
if bootstrap_ctx.layered {
let mut parent = Bootstrap::load_parent_bootstrap(ctx, bootstrap_mgr, blob_mgr)?;
timing_tracer!({ parent.merge_overaly(ctx, tree) }, "merge_bootstrap")?;
tree = parent;
}
let mut bootstrap = Bootstrap::new(tree)?;
timing_tracer!({ bootstrap.build(ctx, bootstrap_ctx) }, "build_bootstrap")?;
Ok(bootstrap)
}
fn dump_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
bootstrap: &mut Bootstrap,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Make sure blob id is updated according to blob hash if not specified by user.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.blob_id.is_empty() {
// `Blob::dump()` should have set `blob_ctx.blob_id` to referenced OCI tarball for
// ref-type conversion.
assert!(!ctx.conversion_type.is_to_ref());
if ctx.blob_inline_meta {
// Set special blob id for blob with inlined meta.
blob_ctx.blob_id = "x".repeat(64);
} else {
blob_ctx.blob_id = format!("{:x}", blob_ctx.blob_hash.clone().finalize());
}
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
}
// Dump bootstrap file
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, bootstrap_ctx, &blob_table)?;
// Dump RAFS meta to data blob if inline meta is enabled.
if ctx.blob_inline_meta {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
// Ensure the blob object is created even if no chunks were generated for the blob.
let blob_ctx = if blob_mgr.external {
&mut blob_mgr.new_blob_ctx(ctx)?
} else {
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
blob_ctx
};
let bootstrap_offset = blob_writer.pos()?;
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
let uncompressed_size = uncompressed_bootstrap.len();
let uncompressed_digest =
RafsDigest::from_buf(&uncompressed_bootstrap, digest::Algorithm::Sha256);
// Output uncompressed data for backward compatibility and compressed data for new format.
let (bootstrap_data, compressor) = if ctx.features.is_enabled(Feature::BlobToc) {
let mut compressor = compress::Algorithm::Zstd;
let (compressed_data, compressed) =
compress::compress(&uncompressed_bootstrap, compressor)
.with_context(|| "failed to compress bootstrap".to_string())?;
blob_ctx.write_data(blob_writer, &compressed_data)?;
if !compressed {
compressor = compress::Algorithm::None;
}
(compressed_data, compressor)
} else {
blob_ctx.write_data(blob_writer, &uncompressed_bootstrap)?;
(uncompressed_bootstrap, compress::Algorithm::None)
};
let compressed_size = bootstrap_data.len();
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BOOTSTRAP,
compressed_size as u64,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BOOTSTRAP,
compressor,
uncompressed_digest,
bootstrap_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
}
}
Ok(())
}
fn dump_toc(
ctx: &mut BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if ctx.features.is_enabled(Feature::BlobToc) {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let data = blob_ctx.entry_list.as_bytes().to_vec();
let toc_size = data.len() as u64;
blob_ctx.write_data(blob_writer, &data)?;
hasher.digest_update(&data);
let header = blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_TOC, toc_size)?;
hasher.digest_update(header.as_bytes());
blob_ctx.blob_toc_digest = hasher.digest_finalize().data;
blob_ctx.blob_toc_size = toc_size as u32 + header.as_bytes().len() as u32;
}
Ok(())
}
fn finalize_blob(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let is_tarfs = ctx.conversion_type == ConversionType::TarToTarfs;
if !is_tarfs {
dump_toc(ctx, blob_ctx, blob_writer)?;
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
if ctx.blob_inline_meta && blob_ctx.blob_id == "x".repeat(64) {
blob_ctx.blob_id = String::new();
}
let hash = blob_ctx.blob_hash.clone().finalize();
let blob_meta_id = if ctx.blob_id.is_empty() {
format!("{:x}", hash)
} else {
assert!(!ctx.conversion_type.is_to_ref() || is_tarfs);
ctx.blob_id.clone()
};
if ctx.conversion_type.is_to_ref() {
if blob_ctx.blob_id.is_empty() {
// Use `sha256(tarball)` as `blob_id`. A tarball without files will fall through
// this path because `Blob::dump()` hasn't generated `blob_ctx.blob_id`.
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
// Tarfs mode only has tar stream and meta blob, there's no data blob.
if !ctx.blob_inline_meta && !is_tarfs {
blob_ctx.blob_meta_digest = hash.into();
blob_ctx.blob_meta_size = blob_writer.pos()?;
}
} else if blob_ctx.blob_id.is_empty() {
// `blob_ctx.blob_id` should be RAFS blob id.
blob_ctx.blob_id = blob_meta_id.clone();
}
// Tarfs mode directly uses the tar file as the RAFS data blob, so there is no need to generate
// the data blob file.
if !is_tarfs {
blob_writer.finalize(Some(blob_meta_id))?;
}
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.finalize(&blob_ctx.blob_id)?;
}
}
Ok(())
}
/// Helper for TarballBuilder/StargzBuilder to build the filesystem tree.
pub struct TarBuilder {
pub explicit_uidgid: bool,
pub layer_idx: u16,
pub version: RafsVersion,
next_ino: Inode,
}
impl TarBuilder {
/// Create a new instance of [TarBuilder].
pub fn new(explicit_uidgid: bool, layer_idx: u16, version: RafsVersion) -> Self {
TarBuilder {
explicit_uidgid,
layer_idx,
next_ino: 0,
version,
}
}
/// Allocate an inode number.
pub fn next_ino(&mut self) -> Inode {
self.next_ino += 1;
self.next_ino
}
/// Insert a node into the tree, creating any missing intermediate directories.
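///
/// A rough, illustrative sketch (not from the original sources) of how this is meant to be
/// driven, assuming a RAFS v6 build; the setup below is hypothetical:
///
/// ```ignore
/// // Build a tree rooted at "/" and insert a node targeted at "/a/b/c".
/// // The missing parents "/a" and "/a/b" are created on the fly.
/// let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
/// let mut tree = Tree::new(builder.create_directory(&[OsString::from("/")])?);
/// let node = builder.create_directory(&[
///     OsString::from("/"),
///     OsString::from("a"),
///     OsString::from("b"),
///     OsString::from("c"),
/// ])?;
/// builder.insert_into_tree(&mut tree, node)?;
/// ```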
pub fn insert_into_tree(&mut self, tree: &mut Tree, node: Node) -> Result<()> {
let target_paths = node.target_vec();
let target_paths_len = target_paths.len();
if target_paths_len == 1 {
// Handle root node modification
assert_eq!(node.path(), Path::new("/"));
tree.set_node(node);
} else {
let mut tmp_tree = tree;
for idx in 1..target_paths.len() {
match tmp_tree.get_child_idx(target_paths[idx].as_bytes()) {
Some(i) => {
if idx == target_paths_len - 1 {
tmp_tree.children[i].set_node(node);
break;
} else {
tmp_tree = &mut tmp_tree.children[i];
}
}
None => {
if idx == target_paths_len - 1 {
tmp_tree.insert_child(Tree::new(node));
break;
} else {
let node = self.create_directory(&target_paths[..=idx])?;
tmp_tree.insert_child(Tree::new(node));
let last_idx = tmp_tree.children.len() - 1;
tmp_tree = &mut tmp_tree.children[last_idx];
}
}
}
}
}
Ok(())
}
/// Create a new node for a directory.
pub fn create_directory(&mut self, target_paths: &[OsString]) -> Result<Node> {
let ino = self.next_ino();
let name = &target_paths[target_paths.len() - 1];
let mut inode = InodeWrapper::new(self.version);
inode.set_ino(ino);
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_nlink(2);
inode.set_name_size(name.len());
inode.set_rdev(u32::MAX);
let source = PathBuf::from("/");
let target_vec = target_paths.to_vec();
let mut target = PathBuf::new();
for name in target_paths.iter() {
target = target.join(name);
}
let info = NodeInfo {
explicit_uidgid: self.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: u64::MAX,
path: target.clone(),
source,
target,
target_vec,
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: false,
};
Ok(Node::new(inode, info, self.layer_idx))
}
/// Check whether the path is an eStargz special file.
pub fn is_stargz_special_files(&self, path: &Path) -> bool {
path == Path::new("/stargz.index.json")
|| path == Path::new("/.prefetch.landmark")
|| path == Path::new("/.no.prefetch.landmark")
}
}
#[cfg(test)]
mod tests {
use vmm_sys_util::tempdir::TempDir;
use super::*;
#[test]
fn test_tar_builder_is_stargz_special_files() {
let builder = TarBuilder::new(true, 0, RafsVersion::V6);
let path = Path::new("/stargz.index.json");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.no.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/no.prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/tar.index.json");
assert!(!builder.is_stargz_special_files(&path));
}
#[test]
fn test_tar_builder_create_directory() {
let tmp_dir = TempDir::new().unwrap();
let target_paths = [OsString::from(tmp_dir.as_path())];
let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
let node = builder.create_directory(&target_paths);
assert!(node.is_ok());
let node = node.unwrap();
println!("Node: {}", node);
assert_eq!(node.file_type(), "dir");
assert_eq!(node.target(), tmp_dir.as_path());
assert_eq!(builder.next_ino, 1);
assert_eq!(builder.next_ino(), 2);
}
}

View File

@ -1,302 +0,0 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;
pub struct OptimizePrefetch {}
struct PrefetchBlobState {
blob_info: BlobInfo,
blob_ctx: BlobContext,
blob_writer: Box<dyn Artifact>,
}
impl PrefetchBlobState {
fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
let mut blob_info = BlobInfo::new(
blob_layer_num,
String::from("prefetch-blob"),
0,
0,
ctx.chunk_size,
u32::MAX,
ctx.blob_features,
);
blob_info.set_compressor(ctx.compressor);
blob_info.set_separated_with_prefetch_files_feature(true);
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
blob_ctx.blob_meta_info_enabled = true;
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
blobs_dir_path.to_path_buf(),
String::new(),
)))
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
Ok(Self {
blob_info,
blob_ctx,
blob_writer,
})
}
}
impl OptimizePrefetch {
/// Generate a new bootstrap for prefetch.
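///
/// Illustrative only (not part of the original sources): a caller that has loaded an existing
/// bootstrap, e.g. via `update_ctx_from_bootstrap()`, and collected the nodes to prefetch might
/// invoke this roughly as follows; the blobs directory path and variables are hypothetical:
///
/// ```ignore
/// let output = OptimizePrefetch::generate_prefetch(
///     &mut tree,
///     &mut ctx,
///     &mut bootstrap_mgr,
///     &mut blob_table,
///     PathBuf::from("/path/to/blobs"),
///     prefetch_nodes,
/// )?;
/// ```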
pub fn generate_prefetch(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
blobs_dir_path: PathBuf,
prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
// Create a new blob for the prefetch layer
let blob_layer_num = match blob_table {
RafsBlobTable::V5(table) => table.get_all().len(),
RafsBlobTable::V6(table) => table.get_all().len(),
};
let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
let mut batch = BatchContextGenerator::new(0)?;
for node in &prefetch_nodes {
Self::process_prefetch_node(
tree,
&node,
&mut blob_state,
&mut batch,
blob_table,
&blobs_dir_path,
)?;
}
let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
debug!("prefetch blob id: {}", ctx.blob_id);
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
fn build_dump_bootstrap(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
) -> Result<()> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree.clone())?;
// Build bootstrap
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table_withprefetch = match blob_table {
RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
};
bootstrap.dump(
ctx,
&mut bootstrap_mgr.bootstrap_storage,
&mut bootstrap_ctx,
&blob_table_withprefetch,
)?;
Ok(())
}
fn dump_blob(
ctx: &mut BuildContext,
blob_table: &mut RafsBlobTable,
blob_state: &mut PrefetchBlobState,
) -> Result<BlobManager> {
match blob_table {
RafsBlobTable::V5(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
RafsBlobTable::V6(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
}
let mut blob_mgr = BlobManager::new(ctx.digester, false);
blob_mgr.add_blob(blob_state.blob_ctx.clone());
blob_mgr.set_current_blob_index(0);
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
};
ctx.blob_id = String::from("");
blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
ctx.blob_id = blob_mgr
.get_current_blob()
.ok_or(anyhow!("failed to get current blob"))?
.1
.blob_id
.clone();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
// Verify and update prefetch blob
assert!(
entries
.iter()
.filter(|blob| blob.blob_id() == "prefetch-blob")
.count()
== 1,
"Expected exactly one prefetch-blob"
);
// Rewrite prefetch blob id
match blob_table {
RafsBlobTable::V5(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
RafsBlobTable::V6(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
}
Ok(blob_mgr)
}
fn process_prefetch_node(
tree: &mut Tree,
node: &TreeNode,
prefetch_state: &mut PrefetchBlobState,
batch: &mut BatchContextGenerator,
blob_table: &RafsBlobTable,
blobs_dir_path: &Path,
) -> Result<()> {
let tree_node = tree
.get_node_mut(&node.borrow().path())
.ok_or(anyhow!("failed to get node"))?
.node
.as_ref();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
let blob_id = tree_node
.borrow()
.chunks
.first()
.and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
.map(|entry| entry.blob_id())
.ok_or(anyhow!("failed to get blob id"))?;
let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);
tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;
let mut child = tree_node.borrow_mut();
let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
let blob_ctx = &mut prefetch_state.blob_ctx;
let blob_info = &mut prefetch_state.blob_info;
let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;
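// Re-home each chunk: copy its compressed data from the original blob file into the new
// prefetch blob, then rewrite the chunk's blob index, chunk index and offsets to point at it.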
for chunk in chunks {
let inner = Arc::make_mut(&mut chunk.inner);
let mut buf = vec![0u8; inner.compressed_size() as usize];
blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
blob_file.read_exact(&mut buf)?;
prefetch_state.blob_writer.write_all(&buf)?;
let info = batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
blob_ctx.current_uncompressed_offset,
inner.uncompressed_size(),
encrypted,
)?;
inner.set_blob_index(blob_info.blob_index());
if blob_ctx.chunk_count == u32::MAX {
blob_ctx.chunk_count = 0;
}
inner.set_index(blob_ctx.chunk_count);
blob_ctx.chunk_count += 1;
inner.set_compressed_offset(blob_ctx.current_compressed_offset);
inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);
let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
.ok_or_else(|| anyhow!("invalid size"))?;
blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
blob_ctx.uncompressed_blob_size += aligned_d_size;
blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
blob_ctx.current_uncompressed_offset += aligned_d_size;
blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
blob_ctx.blob_hash.update(&buf);
blob_info.set_meta_ci_compressed_size(
(blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
blob_info.set_meta_ci_uncompressed_size(
(blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
}
Ok(())
}
}
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
entries
.iter_mut()
.filter(|blob| blob.blob_id() == blob_id)
.for_each(|blob| {
let mut info = (**blob).clone();
info.set_blob_id(new_blob_id.clone());
*blob = Arc::new(info);
});
}
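/// Load an existing bootstrap file and populate the build context (blob features, conversion
/// type, RAFS version and compressor) from it, returning the loaded `RafsSuper`.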
pub fn update_ctx_from_bootstrap(
ctx: &mut BuildContext,
config: Arc<ConfigV2>,
bootstrap_path: &Path,
) -> Result<RafsSuper> {
let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;
ctx.blob_features = sb
.superblock
.get_blob_infos()
.first()
.ok_or_else(|| anyhow!("No blob info found in superblock"))?
.features();
let config = sb.meta.get_config();
if config.is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToRafs;
}
ctx.fs_version =
RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
ctx.compressor = config.compressor;
Ok(sb)
}

View File

@ -29,8 +29,6 @@ use nydus_utils::digest::{self, DigestData, RafsDigest};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer, try_round_up_4k, ByteSize};
use serde::{Deserialize, Serialize};
-use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
@ -58,10 +56,10 @@ struct TocEntry {
/// - block: block device
/// - fifo: fifo
/// - chunk: a chunk of regular file data As described in the above section,
/// a regular file can be divided into several chunks. TOCEntry MUST be created for each chunk.
/// TOCEntry of the first chunk of that file MUST be typed as reg. TOCEntry of each chunk after
/// 2nd MUST be typed as chunk. chunk TOCEntry MUST set offset, chunkOffset and chunkSize
/// properties.
#[serde(rename = "type")]
pub toc_type: String,
@ -456,7 +454,7 @@ impl StargzBuilder {
uncompressed_offset: self.uncompressed_offset,
file_offset: entry.chunk_offset as u64,
index: 0,
-crc32: 0,
+reserved: 0,
});
let chunk = NodeChunk {
source: ChunkSource::Build,
@ -601,7 +599,7 @@ impl StargzBuilder {
}
}
-let mut tmp_node = tmp_tree.borrow_mut_node();
+let mut tmp_node = tmp_tree.lock_node();
if !tmp_node.is_reg() {
bail!(
"stargz: target {} for hardlink {} is not a regular file",
@ -788,7 +786,7 @@ impl StargzBuilder {
bootstrap
.tree
.walk_bfs(true, &mut |n| {
-let mut node = n.borrow_mut_node();
+let mut node = n.lock_node();
let node_path = node.path();
if let Some((size, ref mut chunks)) = self.file_chunk_map.get_mut(node_path) {
node.inode.set_size(*size);
@ -802,9 +800,9 @@ impl StargzBuilder {
for (k, v) in self.hardlink_map.iter() {
match bootstrap.tree.get_node(k) {
-Some(t) => {
-let mut node = t.borrow_mut_node();
-let target = v.borrow();
+Some(n) => {
+let mut node = n.lock_node();
+let target = v.lock().unwrap();
node.inode.set_size(target.inode.size());
node.inode.set_child_count(target.inode.child_count());
node.chunks = target.chunks.clone();
@ -838,10 +836,10 @@ impl Builder for StargzBuilder {
} else if ctx.digester != digest::Algorithm::Sha256 {
bail!("stargz: invalid digest algorithm {:?}", ctx.digester);
}
-let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
-Box::new(ArtifactWriter::new(blob_stor)?)
+let mut blob_writer = if let Some(blob_stor) = ctx.blob_storage.clone() {
+ArtifactWriter::new(blob_stor)?
} else {
-Box::<NoopArtifactWriter>::default()
+return Err(anyhow!("missing configuration for target path"));
};
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
@ -860,13 +858,13 @@ impl Builder for StargzBuilder {
// Dump blob file
timing_tracer!(
-{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
+{ Blob::dump(ctx, &bootstrap.tree, blob_mgr, &mut blob_writer) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
-Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
+Blob::dump_meta_data(ctx, blob_ctx, &mut blob_writer)?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
@ -879,14 +877,14 @@ impl Builder for StargzBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
-blob_writer.as_mut(),
+&mut blob_writer,
)
},
"dump_bootstrap"
)?;
-finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
+finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
} else {
-finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
+finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
timing_tracer!(
{
dump_bootstrap(
@ -895,7 +893,7 @@ impl Builder for StargzBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
-blob_writer.as_mut(),
+&mut blob_writer,
)
},
"dump_bootstrap"
@ -904,21 +902,20 @@ impl Builder for StargzBuilder {
lazy_drop(bootstrap_ctx);
-BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
+BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
}
}
#[cfg(test)]
mod tests {
use super::*;
-use crate::{
-attributes::Attributes, ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec,
-};
+use crate::{ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec};
-#[ignore]
#[test]
fn test_build_stargz_toc() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
-let mut tmp_dir = tmp_dir.as_path().to_path_buf();
+let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path =
PathBuf::from(root_dir).join("../tests/texture/stargz/estargz_sample.json");
@ -934,126 +931,18 @@ mod tests {
ConversionType::EStargzIndexToRef,
source_path,
prefetch,
-Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
+Some(ArtifactStorage::FileDir(tmp_dir.clone())),
-None,
false,
Features::new(),
-false,
-Attributes::default(),
);
ctx.fs_version = RafsVersion::V6;
-ctx.conversion_type = ConversionType::EStargzToRafs;
-let mut bootstrap_mgr = BootstrapManager::new(
-Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
-None,
-);
-let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
+let mut bootstrap_mgr =
+BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
+let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
let mut builder = StargzBuilder::new(0x1000000, &ctx);
-let builder = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr);
-assert!(builder.is_ok());
-let builder = builder.unwrap();
+builder
+.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
+.unwrap();
-assert_eq!(
-builder.blobs,
-vec![String::from(
-"bd4eff3fe6f5a352457c076d2133583e43db895b4af08d717b3fbcaeca89834e"
-)]
-);
-assert_eq!(builder.blob_size, Some(4128));
-tmp_dir.push("e60676aef5cc0d5caca9f4c8031f5b0c8392a0611d44c8e1bbc46dbf7fe7bfef");
-assert_eq!(
-builder.bootstrap_path.unwrap(),
-tmp_dir.to_str().unwrap().to_string()
-)
-}
-#[test]
-fn test_toc_entry() {
-let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
-let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
-let mut entry = TocEntry {
-name: source_path,
-toc_type: "".to_string(),
-size: 0x10,
-link_name: PathBuf::from("link_name"),
-mode: 0,
-uid: 1,
-gid: 1,
-uname: "user_name".to_string(),
-gname: "group_name".to_string(),
-dev_major: 255,
-dev_minor: 33,
-xattrs: Default::default(),
-digest: Default::default(),
-offset: 0,
-chunk_offset: 0,
-chunk_size: 0,
-chunk_digest: "sha256:".to_owned(),
-inner_offset: 0,
-};
-entry.chunk_digest.extend(vec!['a'; 64].iter());
-entry.toc_type = "dir".to_owned();
-assert!(entry.is_dir());
-assert!(entry.is_supported());
-assert_eq!(entry.mode(), libc::S_IFDIR as u32);
-assert_eq!(entry.rdev(), u32::MAX);
-entry.toc_type = "req".to_owned();
-assert!(!entry.is_reg());
-entry.toc_type = "reg".to_owned();
-assert!(entry.is_reg());
-assert!(entry.is_supported());
-assert_eq!(entry.mode(), libc::S_IFREG as u32);
-assert_eq!(entry.size(), 0x10);
-entry.toc_type = "symlink".to_owned();
-assert!(entry.is_symlink());
-assert!(entry.is_supported());
-assert_eq!(entry.mode(), libc::S_IFLNK as u32);
-assert_eq!(entry.symlink_link_path(), Path::new("link_name"));
-assert!(entry.normalize().is_ok());
-entry.toc_type = "hardlink".to_owned();
-assert!(entry.is_supported());
-assert!(entry.is_hardlink());
-assert_eq!(entry.mode(), libc::S_IFREG as u32);
-assert_eq!(entry.hardlink_link_path(), Path::new("link_name"));
-assert!(entry.normalize().is_ok());
-entry.toc_type = "chunk".to_owned();
-assert!(entry.is_supported());
-assert!(entry.is_chunk());
-assert_eq!(entry.mode(), 0);
-assert_eq!(entry.size(), 0);
-assert!(entry.normalize().is_err());
-entry.toc_type = "block".to_owned();
-assert!(entry.is_special());
-assert!(entry.is_blockdev());
-assert_eq!(entry.mode(), libc::S_IFBLK as u32);
-entry.toc_type = "char".to_owned();
-assert!(entry.is_special());
-assert!(entry.is_chardev());
-assert_eq!(entry.mode(), libc::S_IFCHR as u32);
-assert_ne!(entry.size(), 0x10);
-entry.toc_type = "fifo".to_owned();
-assert!(entry.is_fifo());
-assert!(entry.is_special());
-assert_eq!(entry.mode(), libc::S_IFIFO as u32);
-assert_eq!(entry.rdev(), 65313);
-assert_eq!(entry.name().unwrap().to_str(), Some("all-entry-type.tar"));
-entry.name = PathBuf::from("/");
-assert_eq!(entry.name().unwrap().to_str(), Some("/"));
-assert_ne!(entry.path(), Path::new("all-entry-type.tar"));
-assert_eq!(entry.block_id().unwrap().data, [0xaa as u8; 32]);
-entry.name = PathBuf::from("");
-assert!(entry.normalize().is_err());
}
}

View File

@ -1,744 +0,0 @@
// Copyright 2022 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate RAFS filesystem from a tarball.
//!
//! It supports generating a RAFS filesystem from a tar/targz/stargz file, with or without a data blob.
//!
//! The tarball data is arranged as a sequence of tar headers with the associated file data interleaved:
//! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
//! To support reading tarball data from a FIFO, we can only go over the tarball stream once.
//! So the workflow is:
//! - for each tar header from the stream
//! -- generate RAFS filesystem node from the tar header
//! -- optionally dump file data associated with the tar header into RAFS data blob
//! - arrange all generated RAFS nodes into a RAFS filesystem tree
//! - dump the RAFS filesystem tree into RAFS metadata blob
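//!
//! Illustrative only (not from the original sources): with a prepared `BuildContext`,
//! `BootstrapManager` and `BlobManager` (see the tests at the bottom of this file), the whole
//! flow is driven through [TarballBuilder]:
//!
//! ```ignore
//! let mut builder = TarballBuilder::new(ConversionType::TarToRafs);
//! let output = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)?;
//! ```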
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufReader, Read, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::sync::Mutex;
use anyhow::{anyhow, bail, Context, Result};
use tar::{Archive, Entry, EntryType, Header};
use nydus_api::enosys;
use nydus_rafs::metadata::inode::{InodeWrapper, RafsInodeFlags, RafsV6Inode};
use nydus_rafs::metadata::layout::v5::RafsV5Inode;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::ZranContextGenerator;
use nydus_storage::RAFS_MAX_CHUNKS_PER_BLOB;
use nydus_utils::compact::makedev;
use nydus_utils::compress::zlib_random::{ZranReader, ZRAN_READER_BUF_SIZE};
use nydus_utils::compress::ZlibDecoder;
use nydus_utils::digest::RafsDigest;
use nydus_utils::{div_round_up, lazy_drop, root_tracer, timing_tracer, BufReaderInfo, ByteSize};
use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
use super::core::node::{Node, NodeInfo};
use super::core::tree::Tree;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, TarBuilder};
enum CompressionType {
None,
Gzip,
}
enum TarReader {
File(File),
BufReader(BufReader<File>),
BufReaderInfo(BufReaderInfo<File>),
BufReaderInfoSeekable(BufReaderInfo<File>),
TarGzFile(Box<ZlibDecoder<File>>),
TarGzBufReader(Box<ZlibDecoder<BufReader<File>>>),
ZranReader(ZranReader<File>),
}
impl Read for TarReader {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
match self {
TarReader::File(f) => f.read(buf),
TarReader::BufReader(f) => f.read(buf),
TarReader::BufReaderInfo(b) => b.read(buf),
TarReader::BufReaderInfoSeekable(b) => b.read(buf),
TarReader::TarGzFile(f) => f.read(buf),
TarReader::TarGzBufReader(b) => b.read(buf),
TarReader::ZranReader(f) => f.read(buf),
}
}
}
impl TarReader {
fn seekable(&self) -> bool {
matches!(
self,
TarReader::File(_) | TarReader::BufReaderInfoSeekable(_)
)
}
}
impl Seek for TarReader {
fn seek(&mut self, pos: SeekFrom) -> std::io::Result<u64> {
match self {
TarReader::File(f) => f.seek(pos),
TarReader::BufReaderInfoSeekable(b) => b.seek(pos),
_ => Err(enosys!("seek() not supported!")),
}
}
}
struct TarballTreeBuilder<'a> {
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
buf: Vec<u8>,
builder: TarBuilder,
}
impl<'a> TarballTreeBuilder<'a> {
/// Create a new instance of `TarballTreeBuilder`.
pub fn new(
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
layer_idx: u16,
) -> Self {
let builder = TarBuilder::new(ctx.explicit_uidgid, layer_idx, ctx.fs_version);
Self {
ty,
ctx,
blob_mgr,
buf: Vec::new(),
blob_writer,
builder,
}
}
fn build_tree(&mut self) -> Result<Tree> {
let file = OpenOptions::new()
.read(true)
.open(self.ctx.source_path.clone())
.context("tarball: can not open source file for conversion")?;
let mut is_file = match file.metadata() {
Ok(md) => md.file_type().is_file(),
Err(_) => false,
};
let reader = match self.ty {
ConversionType::EStargzToRef
| ConversionType::TargzToRef
| ConversionType::TarToRef => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
let generator = ZranContextGenerator::from_buf_reader(buf_reader)?;
let reader = generator.reader();
self.ctx.blob_zran_generator = Some(Mutex::new(generator));
self.ctx.blob_features.insert(BlobFeatures::ZRAN);
TarReader::ZranReader(reader)
}
(CompressionType::None, buf_reader) => {
self.ty = ConversionType::TarToRef;
let reader = BufReaderInfo::from_buf_reader(buf_reader);
self.ctx.blob_tar_reader = Some(reader.clone());
TarReader::BufReaderInfo(reader)
}
},
ConversionType::EStargzToRafs
| ConversionType::TargzToRafs
| ConversionType::TarToRafs => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::TarGzFile(Box::new(ZlibDecoder::new(file)))
} else {
TarReader::TarGzBufReader(Box::new(ZlibDecoder::new(buf_reader)))
}
}
(CompressionType::None, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::File(file)
} else {
TarReader::BufReader(buf_reader)
}
}
},
ConversionType::TarToTarfs => {
let mut reader = BufReaderInfo::from_buf_reader(BufReader::new(file));
self.ctx.blob_tar_reader = Some(reader.clone());
if !self.ctx.blob_id.is_empty() {
reader.enable_digest_calculation(false);
} else {
// Disable seek when the hash value needs to be calculated.
is_file = false;
}
// Only enable seek when hash computation is disabled.
if is_file {
TarReader::BufReaderInfoSeekable(reader)
} else {
TarReader::BufReaderInfo(reader)
}
}
_ => return Err(anyhow!("tarball: unsupported image conversion type")),
};
let is_seekable = reader.seekable();
let mut tar = Archive::new(reader);
tar.set_ignore_zeros(true);
tar.set_preserve_mtime(true);
tar.set_preserve_permissions(true);
tar.set_unpack_xattrs(true);
// Prepare scratch buffer for dumping file data.
if self.buf.len() < self.ctx.chunk_size as usize {
self.buf = vec![0u8; self.ctx.chunk_size as usize];
}
// Generate the root node in advance; it may be overwritten by entries from the tar stream.
let root = self.builder.create_directory(&[OsString::from("/")])?;
let mut tree = Tree::new(root);
// Generate a RAFS node for each tar entry, optionally adding missing parent directories.
let entries = if is_seekable {
tar.entries_with_seek()
.context("tarball: failed to read entries from tar")?
} else {
tar.entries()
.context("tarball: failed to read entries from tar")?
};
for entry in entries {
let mut entry = entry.context("tarball: failed to read entry from tar")?;
let path = entry
.path()
.context("tarball: failed to to get path from tar entry")?;
let path = PathBuf::from("/").join(path);
let path = path.components().as_path();
if !self.builder.is_stargz_special_files(path) {
self.parse_entry(&mut tree, &mut entry, path)?;
}
}
// Update directory size for RAFS V5 after generating the tree.
if self.ctx.fs_version.is_v5() {
Self::set_v5_dir_size(&mut tree);
}
Ok(tree)
}
fn parse_entry<R: Read>(
&mut self,
tree: &mut Tree,
entry: &mut Entry<R>,
path: &Path,
) -> Result<()> {
let header = entry.header();
let entry_type = header.entry_type();
if entry_type.is_gnu_longname() {
return Err(anyhow!("tarball: unsupported gnu_longname from tar header"));
} else if entry_type.is_gnu_longlink() {
return Err(anyhow!("tarball: unsupported gnu_longlink from tar header"));
} else if entry_type.is_pax_local_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_local_extensions from tar header"
));
} else if entry_type.is_pax_global_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_global_extensions from tar header"
));
} else if entry_type.is_contiguous() {
return Err(anyhow!(
"tarball: unsupported contiguous entry type from tar header"
));
} else if entry_type.is_gnu_sparse() {
return Err(anyhow!(
"tarball: unsupported gnu sparse file extension from tar header"
));
}
let mut file_size = entry.size();
let name = Self::get_file_name(path)?;
let mode = Self::get_mode(header)?;
let (uid, gid) = Self::get_uid_gid(self.ctx, header)?;
let mtime = header.mtime().unwrap_or_default();
let mut flags = match self.ctx.fs_version {
RafsVersion::V5 => RafsInodeFlags::default(),
RafsVersion::V6 => RafsInodeFlags::default(),
};
// Parse special files
let rdev = if entry_type.is_block_special()
|| entry_type.is_character_special()
|| entry_type.is_fifo()
{
let major = header
.device_major()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get major device from tar entry"))?;
let minor = header
.device_minor()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get minor device from tar entry"))?;
makedev(major as u64, minor as u64) as u32
} else {
u32::MAX
};
// Parse symlink
let (symlink, symlink_size) = if entry_type.is_symlink() {
let symlink_link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let symlink_size = symlink_link_path.as_os_str().byte_size();
if symlink_size > u16::MAX as usize {
bail!("tarball: symlink target from tar entry is too big");
}
file_size = symlink_size as u64;
flags |= RafsInodeFlags::SYMLINK;
(
Some(symlink_link_path.as_os_str().to_owned()),
symlink_size as u16,
)
} else {
(None, 0)
};
let mut child_count = 0;
if entry_type.is_file() {
child_count = div_round_up(file_size, self.ctx.chunk_size as u64);
if child_count > RAFS_MAX_CHUNKS_PER_BLOB as u64 {
bail!("tarball: file size 0x{:x} is too big", file_size);
}
}
// Handle hardlink ino
let mut hardlink_target = None;
let ino = if entry_type.is_hard_link() {
let link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let link_path = PathBuf::from("/").join(link_path);
let link_path = link_path.components().as_path();
let targets = Node::generate_target_vec(link_path);
assert!(!targets.is_empty());
let mut tmp_tree: &Tree = tree;
for name in &targets[1..] {
match tmp_tree.get_child_idx(name.as_bytes()) {
Some(idx) => tmp_tree = &tmp_tree.children[idx],
None => {
bail!(
"tarball: unknown target {} for hardlink {}",
link_path.display(),
path.display()
);
}
}
}
let mut tmp_node = tmp_tree.borrow_mut_node();
if !tmp_node.is_reg() {
bail!(
"tarball: target {} for hardlink {} is not a regular file",
link_path.display(),
path.display()
);
}
hardlink_target = Some(tmp_tree);
flags |= RafsInodeFlags::HARDLINK;
tmp_node.inode.set_has_hardlink(true);
tmp_node.inode.ino()
} else {
self.builder.next_ino()
};
// Parse xattrs
let mut xattrs = RafsXAttrs::new();
if let Some(exts) = entry.pax_extensions()? {
for p in exts {
match p {
Ok(pax) => {
let prefix = b"SCHILY.xattr.";
let key = pax.key_bytes();
if key.starts_with(prefix) {
let x_key = OsStr::from_bytes(&key[prefix.len()..]);
xattrs.add(x_key.to_os_string(), pax.value_bytes().to_vec())?;
}
}
Err(e) => {
return Err(anyhow!(
"tarball: failed to parse PaxExtension from tar header, {}",
e
))
}
}
}
}
let mut inode = match self.ctx.fs_version {
RafsVersion::V5 => InodeWrapper::V5(RafsV5Inode {
i_digest: RafsDigest::default(),
i_parent: 0,
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_index: 0,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
i_reserved: [0; 8],
}),
RafsVersion::V6 => InodeWrapper::V6(RafsV6Inode {
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
}),
};
inode.set_has_xattr(!xattrs.is_empty());
let source = PathBuf::from("/");
let target = Node::generate_target(path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: self.ctx.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: rdev as u64,
path: path.to_path_buf(),
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
let mut node = Node::new(inode, info, self.builder.layer_idx);
// Special handling of hardlink.
// Tar hardlink header has zero file size and no file data associated, so copy value from
// the associated regular file.
if let Some(t) = hardlink_target {
let n = t.borrow_mut_node();
if n.inode.is_v5() {
node.inode.set_digest(n.inode.digest().to_owned());
}
node.inode.set_size(n.inode.size());
node.inode.set_child_count(n.inode.child_count());
node.chunks = n.chunks.clone();
node.set_xattr(n.info.xattrs.clone());
} else {
node.dump_node_data_with_reader(
self.ctx,
self.blob_mgr,
self.blob_writer,
Some(entry),
&mut self.buf,
)?;
}
// Update inode.i_blocks for RAFS v5.
if self.ctx.fs_version == RafsVersion::V5 && !entry_type.is_dir() {
node.v5_set_inode_blocks();
}
self.builder.insert_into_tree(tree, node)
}
fn get_uid_gid(ctx: &BuildContext, header: &Header) -> Result<(u32, u32)> {
let uid = if ctx.explicit_uidgid {
header.uid().unwrap_or_default()
} else {
0
};
let gid = if ctx.explicit_uidgid {
header.gid().unwrap_or_default()
} else {
0
};
if uid > u32::MAX as u64 || gid > u32::MAX as u64 {
bail!(
"tarball: uid {:x} or gid {:x} from tar entry is out of range",
uid,
gid
);
}
Ok((uid as u32, gid as u32))
}
fn get_mode(header: &Header) -> Result<u32> {
let mode = header
.mode()
.context("tarball: failed to get permission/mode from tar entry")?;
let ty = match header.entry_type() {
EntryType::Regular | EntryType::Link => libc::S_IFREG,
EntryType::Directory => libc::S_IFDIR,
EntryType::Symlink => libc::S_IFLNK,
EntryType::Block => libc::S_IFBLK,
EntryType::Char => libc::S_IFCHR,
EntryType::Fifo => libc::S_IFIFO,
_ => bail!("tarball: unsupported tar entry type"),
};
Ok((mode & !libc::S_IFMT as u32) | ty as u32)
}
fn get_file_name(path: &Path) -> Result<&OsStr> {
let name = if path == Path::new("/") {
path.as_os_str()
} else {
path.file_name().ok_or_else(|| {
anyhow!(
"tarball: failed to get file name from tar entry with path {}",
path.display()
)
})?
};
if name.len() > u16::MAX as usize {
bail!(
"tarball: file name {} from tar entry is too long",
name.to_str().unwrap_or_default()
);
}
Ok(name)
}
fn set_v5_dir_size(tree: &mut Tree) {
for c in &mut tree.children {
Self::set_v5_dir_size(c);
}
let mut node = tree.borrow_mut_node();
node.v5_set_dir_size(RafsVersion::V5, &tree.children);
}
fn detect_compression_algo(file: File) -> Result<(CompressionType, BufReader<File>)> {
// Use a 64K buffer to keep consistency with zlib-random.
let mut buf_reader = BufReader::with_capacity(ZRAN_READER_BUF_SIZE, file);
let mut buf = [0u8; 3];
buf_reader.read_exact(&mut buf)?;
if buf[0] == 0x1f && buf[1] == 0x8b && buf[2] == 0x08 {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::Gzip, buf_reader))
} else {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::None, buf_reader))
}
}
}
/// Builder to create RAFS filesystems from tarballs.
pub struct TarballBuilder {
ty: ConversionType,
}
impl TarballBuilder {
/// Create a new instance of [TarballBuilder] to build a RAFS filesystem from a tarball.
pub fn new(conversion_type: ConversionType) -> Self {
Self {
ty: conversion_type,
}
}
}
impl Builder for TarballBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer: Box<dyn Artifact> = match self.ty {
ConversionType::EStargzToRafs
| ConversionType::EStargzToRef
| ConversionType::TargzToRafs
| ConversionType::TargzToRef
| ConversionType::TarToRafs
| ConversionType::TarToTarfs => {
if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
}
}
_ => {
return Err(anyhow!(
"tarball: unsupported image conversion type '{}'",
self.ty
))
}
};
let mut tree_builder =
TarballTreeBuilder::new(self.ty, ctx, blob_mgr, blob_writer.as_mut(), layer_idx);
let tree = timing_tracer!({ tree_builder.build_tree() }, "build_tree")?;
// Build bootstrap
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::attributes::Attributes;
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
use nydus_utils::{compress, digest};
#[test]
fn test_build_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
#[test]
fn test_build_encrypted_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
true,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
}

View File

@ -5,7 +5,7 @@ description = "C wrapper library for Nydus SDK"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
-repository = "https://github.com/dragonflyoss/nydus"
+repository = "https://github.com/dragonflyoss/image-service"
edition = "2021"
[lib]
@ -15,10 +15,10 @@ crate-type = ["cdylib", "staticlib"]
[dependencies]
libc = "0.2.137"
log = "0.4.17"
-fuse-backend-rs = "^0.12.0"
-nydus-api = { version = "0.4.0", path = "../api" }
-nydus-rafs = { version = "0.4.0", path = "../rafs" }
-nydus-storage = { version = "0.7.0", path = "../storage" }
+fuse-backend-rs = "0.10.0"
+nydus-api = { version = "0.2", path = "../api" }
+nydus-rafs = { version = "0.2.2", path = "../rafs" }
+nydus-storage = { version = "0.6.2", path = "../storage" }
[features]
baekend-s3 = ["nydus-storage/backend-s3"]

contrib/ctr-remote/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
bin/

View File

@ -0,0 +1,21 @@
# https://golangci-lint.run/usage/configuration#config-file
linters:
enable:
- staticcheck
- unconvert
- gofmt
- goimports
- revive
- ineffassign
- vet
- unused
- misspell
disable:
- errcheck
run:
deadline: 4m
skip-dirs:
- misc

View File

@ -0,0 +1,27 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif
.PHONY: all build release test clean
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/ctr-remote ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/ctr-remote ./cmd/main.go
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*

View File

@ -0,0 +1,65 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"os"
"github.com/containerd/containerd/cmd/ctr/app"
"github.com/containerd/containerd/pkg/seed"
"github.com/dragonflyoss/image-service/contrib/ctr-remote/commands"
"github.com/urfave/cli"
)
func init() {
seed.WithTimeAndRand()
}
func main() {
customCommands := []cli.Command{commands.RpullCommand}
app := app.New()
app.Description = "NOTE: Enhanced for nydus-snapshotter\n" + app.Description
for i := range app.Commands {
if app.Commands[i].Name == "images" {
sc := map[string]cli.Command{}
for _, subcmd := range customCommands {
sc[subcmd.Name] = subcmd
}
// First, replace duplicated subcommands
for j := range app.Commands[i].Subcommands {
for name, subcmd := range sc {
if name == app.Commands[i].Subcommands[j].Name {
app.Commands[i].Subcommands[j] = subcmd
delete(sc, name)
}
}
}
// Next, append all new subcommands
for _, subcmd := range sc {
app.Commands[i].Subcommands = append(app.Commands[i].Subcommands, subcmd)
}
break
}
}
if err := app.Run(os.Args); err != nil {
fmt.Fprintf(os.Stderr, "ctr-remote: %v\n", err)
os.Exit(1)
}
}

View File

@ -0,0 +1,103 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package commands
import (
"context"
"fmt"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cmd/ctr/commands"
"github.com/containerd/containerd/cmd/ctr/commands/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/log"
"github.com/containerd/nydus-snapshotter/pkg/label"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
const (
remoteSnapshotterName = "nydus"
)
var RpullCommand = cli.Command{
Name: "rpull",
Usage: "pull an image from a registry leveraging nydus-snapshotter",
ArgsUsage: "[flags] <ref>",
Description: `Fetch and prepare an image for use in containerd leveraging nydus-snapshotter.
After pulling an image, it should be ready to use the same reference in a run command.`,
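// Illustrative usage (not from the original sources), assuming main.go has registered this
// subcommand under "images": ctr-remote images rpull docker.io/library/nginx:latest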
Flags: append(commands.RegistryFlags, commands.LabelFlag),
Action: func(context *cli.Context) error {
var (
ref = context.Args().First()
config = &rPullConfig{}
)
if ref == "" {
return fmt.Errorf("please provide an image reference to pull")
}
client, ctx, cancel, err := commands.NewClient(context)
if err != nil {
return err
}
defer cancel()
ctx, done, err := client.WithLease(ctx)
if err != nil {
return err
}
defer done(ctx)
fc, err := content.NewFetchConfig(ctx, context)
if err != nil {
return err
}
config.FetchConfig = fc
return pull(ctx, client, ref, config)
},
}
type rPullConfig struct {
*content.FetchConfig
}
func pull(ctx context.Context, client *containerd.Client, ref string, config *rPullConfig) error {
pCtx := ctx
h := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if desc.MediaType != images.MediaTypeDockerSchema1Manifest {
fmt.Printf("fetching %v... %v\n", desc.Digest.String()[:15], desc.MediaType)
}
return nil, nil
})
log.G(pCtx).WithField("image", ref).Debug("fetching")
configLabels := commands.LabelArgs(config.Labels)
if _, err := client.Pull(pCtx, ref, []containerd.RemoteOpt{
containerd.WithPullLabels(configLabels),
containerd.WithResolver(config.Resolver),
containerd.WithImageHandler(h),
containerd.WithSchema1Conversion,
containerd.WithPullUnpack,
containerd.WithPullSnapshotter(remoteSnapshotterName),
containerd.WithImageHandlerWrapper(label.AppendLabelsHandlerWrapper(ref)),
}...); err != nil {
return err
}
return nil
}

contrib/ctr-remote/go.mod Normal file
View File

@ -0,0 +1,62 @@
module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.18
require (
github.com/containerd/containerd v1.6.20
github.com/containerd/nydus-snapshotter v0.6.1
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b
github.com/urfave/cli v1.22.12
)
require (
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.7 // indirect
github.com/cilium/ebpf v0.10.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/containerd/fifo v1.1.0 // indirect
github.com/containerd/go-cni v1.1.9 // indirect
github.com/containerd/go-runc v1.0.0 // indirect
github.com/containerd/ttrpc v1.2.1 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/containernetworking/cni v1.1.2 // indirect
github.com/containernetworking/plugins v1.2.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/klauspost/compress v1.16.3 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/onsi/ginkgo/v2 v2.4.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sirupsen/logrus v1.9.0 // indirect
github.com/stretchr/testify v1.8.2 // indirect
go.opencensus.io v0.24.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.8.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.6.0 // indirect
golang.org/x/text v0.8.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/genproto v0.0.0-20230330200707-38013875ee22 // indirect
google.golang.org/grpc v1.54.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
)

contrib/ctr-remote/go.sum Normal file
View File

@ -0,0 +1,330 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/Microsoft/go-winio v0.6.0 h1:slsWYD/zyx7lCXoZVlvQrj0hPTM1HI4+v1sIda2yDvg=
github.com/Microsoft/go-winio v0.6.0/go.mod h1:cTAf44im0RAYeL23bpB+fzCyDH2MJiz2BO69KH/soAE=
github.com/Microsoft/hcsshim v0.10.0-rc.7 h1:HBytQPxcv8Oy4244zbQbe6hnOnx544eL5QPUqhJldz8=
github.com/Microsoft/hcsshim v0.10.0-rc.7/go.mod h1:ILuwjA+kNW+MrN/w5un7n3mTqkwsFu4Bp05/okFUZlE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/cilium/ebpf v0.10.0 h1:nk5HPMeoBXtOzbkZBWym+ZWq1GIiHUsBFXxwewXAHLQ=
github.com/cilium/ebpf v0.10.0/go.mod h1:DPiVdY/kT534dgc9ERmvP8mWA+9gvwgKfRvk4nNWnoE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
github.com/containerd/console v1.0.3 h1:lIr7SlA5PxZyMV30bDW0MGbiOPXwc63yRuCP0ARubLw=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/containerd v1.6.20 h1:+itjwpdqXpzHB/QAiWc/BZCjjVfcNgw69w/oIeF4Oy0=
github.com/containerd/containerd v1.6.20/go.mod h1:apei1/i5Ux2FzrK6+DM/suEsGuK/MeVOfy8tR2q7Wnw=
github.com/containerd/continuity v0.3.0 h1:nisirsYROK15TAMVukJOUyGJjz4BNQJBVsNvAXZJ/eg=
github.com/containerd/continuity v0.3.0/go.mod h1:wJEAIwKOm/pBZuBd0JmeTvnLquTB1Ag8espWhkykbPM=
github.com/containerd/fifo v1.1.0 h1:4I2mbh5stb1u6ycIABlBw9zgtlK8viPI9QkQNRQEEmY=
github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
github.com/containerd/go-cni v1.1.9 h1:ORi7P1dYzCwVM6XPN4n3CbkuOx/NZ2DOqy+SHRdo9rU=
github.com/containerd/go-cni v1.1.9/go.mod h1:XYrZJ1d5W6E2VOvjffL3IZq0Dz6bsVlERHbekNK90PM=
github.com/containerd/go-runc v1.0.0 h1:oU+lLv1ULm5taqgV/CJivypVODI4SUz1znWjv3nNYS0=
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
github.com/containerd/nydus-snapshotter v0.6.1 h1:G7k7EwnjFa1fUC3ywkldDt2BC3mtBPrt+omFke/Vdhk=
github.com/containerd/nydus-snapshotter v0.6.1/go.mod h1:U9m10GYZKisnSKOdgIjfkU8Ad0UTSYJ6CpP3I0SJBD0=
github.com/containerd/ttrpc v1.2.1 h1:VWv/Rzx023TBLv4WQ+9WPXlBG/s3rsRjY3i9AJ2BJdE=
github.com/containerd/ttrpc v1.2.1/go.mod h1:sIT6l32Ph/H9cvnJsfXM5drIVzTr5A2flTf1G5tYZak=
github.com/containerd/typeurl v1.0.2 h1:Chlt8zIieDbzQFzXzAeBEF92KhExuE4p9p92/QmY7aY=
github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
github.com/containernetworking/cni v1.1.2 h1:wtRGZVv7olUHMOqouPpn3cXJWpJgM6+EUl31EQbXALQ=
github.com/containernetworking/cni v1.1.2/go.mod h1:sDpYKmGVENF3s6uvMvGgldDWeG8dMxakj/u+i9ht9vw=
github.com/containernetworking/plugins v1.2.0 h1:SWgg3dQG1yzUo4d9iD8cwSVh1VqI+bP7mkPDoSfP9VU=
github.com/containernetworking/plugins v1.2.0/go.mod h1:/VjX4uHecW5vVimFa1wkG4s+r/s9qIfPdqlLF4TW8c4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/frankban/quicktest v1.14.4 h1:g2rn0vABPOOXmZUj+vbmUp0lPoXEMuhTpIluN0XL9UY=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/googleapis v1.4.0 h1:zgVt4UpGxcqVOw97aRGxT4svlcmdK35fynLNctY32zI=
github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.16.3 h1:XuJt9zzcnaz6a16/OU53ZjWp/v7/42WcR5t2a0PcNQY=
github.com/klauspost/compress v1.16.3/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78=
github.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
github.com/moby/sys/signal v0.7.0 h1:25RW3d5TnQEoKvRbEKUGay6DCQ46IxAVTT9CUMgmsSI=
github.com/moby/sys/signal v0.7.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg=
github.com/moby/sys/symlink v0.2.0 h1:tk1rOM+Ljp0nFmfOIBtlV3rTDlWOwFRhjEeAhZB0nZc=
github.com/moby/sys/symlink v0.2.0/go.mod h1:7uZVF2dqJjG/NsClqul95CqKOBRQyYSNnJ6BMgR/gFs=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.4.0 h1:+Ig9nvqgS5OBSACXNk15PLdp0U9XPYROt9CFzVdFGIs=
github.com/onsi/ginkgo/v2 v2.4.0/go.mod h1:iHkDK1fKGcBoEHT5W7YBq4RFWaQulw+caOMkAt4OrFo=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.24.2 h1:J/tulyYK6JwBldPViHJReihxxZ+22FHs0piGjQAvoUE=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b h1:YWuSjZCQAPM8UUBLkYUk1e+rZcvWHJmFb6i6rM44Xs8=
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opencontainers/runc v1.1.5 h1:L44KXEpKmfWDcS02aeGm8QNTFXTo2D+8MYGDIJ/GDEs=
github.com/opencontainers/runc v1.1.5/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.1.0-rc.1 h1:wHa9jroFfKGQqFHj0I1fMRKLl0pfj+ynAqBxo3v6u9w=
github.com/opencontainers/runtime-spec v1.1.0-rc.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.12 h1:igJgVw1JdKH+trcLWLeLwZjU9fEfPesQ+9/e4MQ44S8=
github.com/urfave/cli v1.22.12/go.mod h1:sSBEIC79qR6OvcmsD4U3KABeOTxDqQtdDnaFuUN30b8=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.9.0 h1:KENHtAZL2y3NLMYZeHY9DW8HW8V+kQyJsY/V9JlKvCs=
golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.7.0 h1:W4OVu8VVOaIO0yzWMNdepAulS7YfoS3Zabrm8DOXXU4=
golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20230330200707-38013875ee22 h1:n3ThVoQnHbCbnkhZZ1fx3+3fBAisViSwrpbtLV7vydY=
google.golang.org/genproto v0.0.0-20230330200707-38013875ee22/go.mod h1:UUQDJDOlWu4KYeJZffbWgBkS1YFobzKbLVfK69pe0Ak=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.54.0 h1:EhTqbhiYeixwWQtAEZAxmV9MGqcjEU2mFx52xCzNyag=
google.golang.org/grpc v1.54.0/go.mod h1:PUSEXI6iWghWaB6lXM4knEgpJNu2qUcKfDtNci3EC2g=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=


@ -1,8 +0,0 @@
package main
import "fmt"
// This is a dummy program, to workaround the goreleaser can't pre build the binary.
func main() {
fmt.Println("Hello, World!")
}

File diff suppressed because it is too large.


@ -1,19 +1,19 @@
[package] [package]
name = "nydus-backend-proxy" name = "nydus-backend-proxy"
version = "0.2.0" version = "0.1.0"
authors = ["The Nydus Developers"] authors = ["The Nydus Developers"]
description = "A simple HTTP server to provide a fake container registry for nydusd" description = "A simple HTTP server to provide a fake container registry for nydusd"
homepage = "https://nydus.dev/" homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus" repository = "https://github.com/dragonflyoss/image-service"
edition = "2021" edition = "2018"
license = "Apache-2.0" license = "Apache-2.0"
[dependencies] [dependencies]
rocket = "0.5.0" rocket = "0.5.0-rc"
http-range = "0.1.5" http-range = "0.1.3"
nix = { version = "0.28", features = ["uio"] } nix = ">=0.23.0"
clap = "4.4" clap = "2.33"
once_cell = "1.19.0" once_cell = "1.10.0"
lazy_static = "1.4" lazy_static = "1.4"
[workspace] [workspace]


@ -2,22 +2,29 @@
// //
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause) // SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate lazy_static;
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use std::collections::HashMap; use std::collections::HashMap;
use std::env; use std::env;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::Arc; use std::sync::Arc;
use std::{fs, io}; use std::{fs, io};
use clap::*; use clap::{App, Arg};
use http_range::HttpRange; use http_range::HttpRange;
use lazy_static::lazy_static;
use nix::sys::uio; use nix::sys::uio;
use rocket::fs::{FileServer, NamedFile}; use rocket::fs::{FileServer, NamedFile};
use rocket::futures::lock::{Mutex, MutexGuard}; use rocket::futures::lock::{Mutex, MutexGuard};
use rocket::http::Status; use rocket::http::Status;
use rocket::request::{self, FromRequest, Outcome}; use rocket::request::{self, FromRequest, Outcome};
use rocket::response::{self, stream::ReaderStream, Responder}; use rocket::response::{self, stream::ReaderStream, Responder};
use rocket::*; use rocket::{Request, Response};
lazy_static! { lazy_static! {
static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend { static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend {
@ -158,12 +165,12 @@ impl<'r> Responder<'r, 'static> for RangeStream {
let mut read = 0u64; let mut read = 0u64;
let startpos = self.start as i64; let startpos = self.start as i64;
let size = self.len; let size = self.len;
let file = self.file.clone(); let raw_fd = self.file.as_raw_fd();
Response::build() Response::build()
.streamed_body(ReaderStream! { .streamed_body(ReaderStream! {
while read < size { while read < size {
match uio::pread(file.as_ref(), &mut buf, startpos + read as i64) { match uio::pread(raw_fd, &mut buf, startpos + read as i64) {
Ok(mut n) => { Ok(mut n) => {
n = std::cmp::min(n, (size - read) as usize); n = std::cmp::min(n, (size - read) as usize);
read += n as u64; read += n as u64;
@ -261,31 +268,20 @@ async fn fetch(
#[rocket::main] #[rocket::main]
async fn main() { async fn main() {
let cmd = Command::new("nydus-backend-proxy") let cmd = App::new("nydus-backend-proxy")
.author(env!("CARGO_PKG_AUTHORS")) .author(crate_authors!())
.version(env!("CARGO_PKG_VERSION")) .version(crate_version!())
.about("A simple HTTP server to provide a fake container registry for nydusd.") .about("A simple HTTP server to provide a fake container registry for nydusd.")
.arg( .arg(
Arg::new("blobsdir") Arg::with_name("blobsdir")
.short('b') .short("b")
.long("blobsdir") .long("blobsdir")
.required(true) .takes_value(true)
.help("path to directory hosting nydus blob files"), .help("path to directory hosting nydus blob files"),
) )
.help_template(
"\
{before-help}{name} {version}
{author-with-newline}{about-with-newline}
{usage-heading} {usage}
{all-args}{after-help}
",
)
.get_matches(); .get_matches();
// Safe to unwrap() because `blobsdir` takes a value. // Safe to unwrap() because `blobsdir` takes a value.
let path = cmd let path = cmd.value_of("blobsdir").unwrap();
.get_one::<String>("blobsdir")
.expect("required argument");
init_blob_backend(Path::new(path)).await; init_blob_backend(Path::new(path)).await;


@ -8,14 +8,14 @@ linters:
- goimports - goimports
- revive - revive
- ineffassign - ineffassign
- govet - vet
- unused - unused
- misspell - misspell
disable: disable:
- errcheck - errcheck
run: run:
timeout: 5m deadline: 4m
issues: skip-dirs:
exclude-dirs: - misc
- misc


@ -2,7 +2,7 @@ GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M) BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/) PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH) GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= GOPROXY ?= https://goproxy.io
ifdef GOPROXY ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY} PROXY := GOPROXY=${GOPROXY}
@ -13,17 +13,15 @@ endif
all: build all: build
build: build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go @CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go
release: release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus-overlayfs ./cmd/main.go @CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus-overlayfs ./cmd/main.go
test: build test: build
go vet $(PACKAGES) go vet $(PACKAGES)
go test -v -cover ${PACKAGES}
lint:
golangci-lint run golangci-lint run
go test -v -cover ${PACKAGES}
clean: clean:
rm -f bin/* rm -f bin/*


@ -8,16 +8,12 @@ import (
"syscall" "syscall"
"github.com/pkg/errors" "github.com/pkg/errors"
cli "github.com/urfave/cli/v2" "github.com/urfave/cli/v2"
"golang.org/x/sys/unix" "golang.org/x/sys/unix"
) )
const ( const (
// Extra mount option to pass Nydus specific information from snapshotter to runtime through containerd.
extraOptionKey = "extraoption=" extraOptionKey = "extraoption="
// Kata virtual volume infmation passed from snapshotter to runtime through containerd, superset of `extraOptionKey`.
// Please refer to `KataVirtualVolume` in https://github.com/kata-containers/kata-containers/blob/main/src/libs/kata-types/src/mount.rs
kataVolumeOptionKey = "io.katacontainers.volume="
) )
var ( var (
@ -48,7 +44,7 @@ func parseArgs(args []string) (*mountArgs, error) {
} }
if args[2] == "-o" && len(args[3]) != 0 { if args[2] == "-o" && len(args[3]) != 0 {
for _, opt := range strings.Split(args[3], ",") { for _, opt := range strings.Split(args[3], ",") {
if strings.HasPrefix(opt, extraOptionKey) || strings.HasPrefix(opt, kataVolumeOptionKey) { if strings.HasPrefix(opt, extraOptionKey) {
// filter extraoption // filter extraoption
continue continue
} }


@ -1,15 +1,15 @@
module github.com/dragonflyoss/nydus/contrib/nydus-overlayfs module github.com/dragonflyoss/image-service/contrib/nydus-overlayfs
go 1.21 go 1.18
require ( require (
github.com/pkg/errors v0.9.1 github.com/pkg/errors v0.9.1
github.com/urfave/cli/v2 v2.27.1 github.com/urfave/cli/v2 v2.3.0
golang.org/x/sys v0.15.0 golang.org/x/sys v0.1.0
) )
require ( require (
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect github.com/russross/blackfriday/v2 v2.0.1 // indirect
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e // indirect github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect
) )


@ -1,10 +1,17 @@
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho= github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=
github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e h1:+SOyEddqYF09QP7vr7CgJ1eti3pY9Fn3LHO1M1r/0sI= github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc= github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=


@ -0,0 +1,150 @@
# Nydus Functional Test
## Introduction
Nydus functional test, a.k.a. nydus-test, is built on top of [pytest](https://docs.pytest.org/en/stable/).
It consists of two parts:
* Specific test cases, located in the `functional-test` sub-directory
* The test framework, located in the `framework` sub-directory
## Prerequisites
Debian/Ubuntu
```bash
sudo apt update && sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3-pip libpython3.7-dev libffi-dev
python3 -m pip install --upgrade pip
# Ensure you install the modules below as the root user
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
```
## Getting Started
### Configure framework
Nydus-test is controlled and configured by `anchor_conf.json`, which it looks for in its root directory before executing any tests.
```json
{
"workspace": "/path/to/where/nydus-test/stores/intermediates",
"nydus_project": "/path/to/image-service/repo",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "127.0.0.1:5000",
"registry_namespace": "nydus",
"registry_auth": "YourRegistryAuth",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/path/to/where/backend/simulator/stores/blobs"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd"
},
"logging_file": "stderr",
"target": "gnu"
}
```
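
A minimal sketch of pointing the framework at a configuration kept somewhere else, assuming the `ANCHOR_PATH` handling shown in the small anchor helper further down in this diff (the variable names the directory containing `anchor_conf.json` and defaults to the current working directory):

```bash
# Run a single case with anchor_conf.json stored outside the nydus-test root.
# The /path/to/conf-dir value is only an illustration.
sudo ANCHOR_PATH=/path/to/conf-dir pytest -sv functional-test/test_nydus.py::test_basic
```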
### Compile Nydus components
Before running nydus-test, please compile the nydus components.
`nydusd` and `nydus-image`
```bash
cd /path/to/image-service/repo
make release
```
`nydus-backend-proxy`
```bash
cd /path/to/image-service/repo
make -C contrib/nydus-backend-proxy
```
### Define target fs structure
```yaml
depth: 4
width: 6
layers:
- layer1:
- size: 10KB
type: regular
count: 5
- size: 4MB
type: regular
count: 30
- size: 128KB
type: regular
count: 100
- size: 90MB
type: regular
count: 1
- type: symlink
count: 100
```
### Generate your own original rootfs
The framework provides a tool to generate rootfs which will be the test target.
```text
$ sudo python3 nydus_test_config.py --dist fs_structure.yaml
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test layer, Takes time 0.857 seconds
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test parent layer, Takes time 0.760 seconds
```
## Run test
Please run tests as root user.
### Run All Test Cases
The whole nydus functional test suite works on top of pytest.
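For instance, a full run over the `functional-test` directory might look like the following; the exact invocation is an assumption based on the per-case commands below rather than a prescribed command:
```bash
sudo pytest -sv functional-test
```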
### Run a Specific Test Case
```bash
pytest -sv functional-test/test_nydus.py::test_basic
```
### Run a Set of Test Cases
```bash
pytest -sv functional-test/test_nydus.py
```
### Stop Once a Case Fails
```bash
pytest -sv functional-test/test_nydus.py::test_basic --pdb
```
### Run a Case Step by Step
```bash
pytest -sv functional-test/test_nydus.py::test_basic --trace
```


@ -0,0 +1,220 @@
import sys
import os
import re
import shutil
import logging
import pytest
import docker
sys.path.append(os.path.realpath("framework"))
from nydus_anchor import NydusAnchor
from rafs import RafsImage, RafsConf
from backend_proxy import BackendProxy
import utils
ANCHOR = NydusAnchor()
utils.logging_setup(ANCHOR.logging_file)
os.environ["RUST_BACKTRACE"] = "1"
from tools import artifact
@pytest.fixture()
def nydus_anchor(request):
# TODO: check if nydusd executable exists and have a proper version
# TODO: check if bootstrap exists
# TODO: check if blob cache file exists and try to clear it if it does
# TODO: check if blob file was put to oss
nyta = NydusAnchor()
nyta.check_prerequisites()
logging.info("*** Testing case %s ***", os.environ.get("PYTEST_CURRENT_TEST"))
yield nyta
nyta.clear_blobcache()
if hasattr(nyta, "scratch_dir"):
logging.info("Clean up scratch dir")
shutil.rmtree(nyta.scratch_dir)
if hasattr(nyta, "nydusd") and nyta.nydusd is not None:
nyta.nydusd.shutdown()
if hasattr(nyta, "overlayfs") and os.path.ismount(nyta.overlayfs):
nyta.umount_overlayfs()
# Check if nydusd crashed.
# TODO: Where the core file is placed is controlled by the kernel.
# Check `/proc/sys/kernel/core_pattern`
files = os.listdir()
for one in files:
assert re.match(r"^core\..*", one) is None
try:
shutil.rmtree(nyta.localfs_workdir)
except FileNotFoundError:
pass
try:
nyta.cleanup_dustbin()
except FileNotFoundError:
pass
# All nydusd should stop.
assert not NydusAnchor.capture_running_nydusd()
@pytest.fixture()
def nydus_image(nydus_anchor: NydusAnchor, request):
"""
Create images using previous version nydus image tool.
This fixture provides rafs image file, case is not responsible for performing
creating image.
"""
image = RafsImage(
nydus_anchor, nydus_anchor.source_dir, "bootstrap", "blob", clear_from_oss=True
)
yield image
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_image(nydus_anchor: NydusAnchor):
"""No longer use source_dir but use scratch_dir,
Scratch image's creation is delayed until runtime of each case.
"""
nydus_anchor.prepare_scratch_dir()
# The scratch image is not built here since each specific case decides
# how to populate this dir
image = RafsImage(
nydus_anchor,
nydus_anchor.scratch_dir,
"bootstrap_scratched",
"blob_scratched",
clear_from_oss=True,
)
yield image
if not image.created:
return
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_parent_image(nydus_anchor: NydusAnchor):
parent_image = RafsImage(
nydus_anchor, nydus_anchor.parent_rootfs, "bootstrap_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_parent_image(nydus_anchor: NydusAnchor):
nydus_anchor.prepare_scratch_parent_dir()
parent_image = RafsImage(
nydus_anchor, nydus_anchor.scratch_parent_dir, "bs_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture(scope="session", autouse=False)
def collect_report(request):
"""
To enable the code coverage report, set `autouse=True` in the fixture decorator above.
"""
build_dir = ANCHOR.build_dir
from coverage_collect import collect_coverage
def CC():
collect_coverage(build_dir)
request.addfinalizer(CC)
@pytest.fixture
def rafs_conf(nydus_anchor):
"""Generate conf file via libconf(https://pypi.org/project/libconf/)"""
rc = RafsConf(nydus_anchor)
rc.dump_rafs_conf()
yield rc
@pytest.fixture(scope="session")
def nydusify_converter():
# Can't access a `function` scope fixture.
os.environ["GOTRACEBACK"] = "crash"
nydusify_source_dir = os.path.join(ANCHOR.nydus_project, "contrib/nydusify")
with utils.pushd(nydusify_source_dir):
ret, _ = utils.execute(["make", "release"])
assert ret
@pytest.fixture(scope="session")
def nydus_snapshotter():
# Can't access a `function` scope fixture.
snapshotter_source = os.path.join(ANCHOR.nydus_project, "contrib/nydus-snapshotter")
with utils.pushd(snapshotter_source):
ret, _ = utils.execute(["make"])
assert ret
@pytest.fixture()
def local_registry():
docker_client = docker.from_env()
registry_container = docker_client.containers.run(
"registry:latest", detach=True, network_mode="host", remove=True
)
yield registry_container
try:
registry_container.stop()
except docker.errors.APIError:
assert False, "fail in stopping container"
try:
ANCHOR.backend_proxy_blobs_dir
@pytest.fixture(scope="module", autouse=True)
def nydus_backend_proxy():
backend_proxy = BackendProxy(
ANCHOR,
ANCHOR.backend_proxy_blobs_dir,
bin=os.path.join(
ANCHOR.nydus_project,
"contrib",
"nydus-backend-proxy",
"target",
"release",
"nydus-backend-proxy",
),
)
backend_proxy.start()
yield
backend_proxy.stop()
except AttributeError:
pass


@ -0,0 +1,24 @@
from os import PathLike
import utils
class BackendProxy:
def __init__(self, anchor, blobs_dir: PathLike, bin: PathLike):
self.__blobs_dir = blobs_dir
self.bin = bin
self.anchor = anchor
def start(self):
_, self.p = utils.run(
[self.bin, "-b", self.blobs_dir()],
wait=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
def stop(self):
self.p.terminate()
self.p.wait()
def blobs_dir(self):
return self.__blobs_dir

Binary file not shown.

Binary file not shown.

Binary file not shown.


@ -0,0 +1,54 @@
import time
import hmac
import hashlib
import base64
import urllib.parse
import requests
import json
import sys
import os
from string import Template
sys.path.append(os.path.realpath("framework"))
BOT_SECRET = os.getenv("BOT_SECRET")
BOT_ACCESS_TOKEN = os.getenv("BOT_ACCESS_TOKEN")
SEND_CONTENT_TEMPLATE = """**nydus-bot**
${content}"""
class Bot:
def __init__(self):
if BOT_SECRET is None or BOT_ACCESS_TOKEN is None:
raise ValueError
timestamp = str(round(time.time() * 1000))
secret_enc = BOT_SECRET.encode("utf-8")
string_to_sign = "{}\n{}".format(timestamp, BOT_SECRET)
string_to_sign_enc = string_to_sign.encode("utf-8")
hmac_code = hmac.new(
secret_enc, string_to_sign_enc, digestmod=hashlib.sha256
).digest()
sign = urllib.parse.quote_plus(base64.b64encode(hmac_code))
self.url = f"https://oapi.dingtalk.com/robot/send?access_token={BOT_ACCESS_TOKEN}&sign={sign}&timestamp={timestamp}"
def send(self, content: str):
c = Template(SEND_CONTENT_TEMPLATE).substitute(content=content)
d = {
"msgtype": "markdown",
"markdown": {"title": "Nydus-bot", "text": c},
}
ret = requests.post(
self.url, headers={"Content-Type": "application/json"}, data=json.dumps(d)
)
print(ret.__dict__)
if __name__ == "__main__":
bot = Bot()
bot.send(sys.argv[1])


@ -0,0 +1,5 @@
import os
ANCHOR_PATH = os.path.join(
os.getenv("ANCHOR_PATH", default=os.getcwd()), "anchor_conf.json"
)


@ -0,0 +1,88 @@
import tempfile
import subprocess
import toml
import os
from snapshotter import Snapshotter
import utils
class Containerd(utils.ArtifactProcess):
state_dir = "/run/nydus-test_containerd"
def __init__(self, anchor, snapshotter: Snapshotter) -> None:
self.anchor = anchor
self.containerd_bin = anchor.containerd_bin
self.snapshotter = snapshotter
def gen_config(self):
_, p = utils.run(
[self.containerd_bin, "config", "default"], stdout=subprocess.PIPE
)
out, _ = p.communicate()
config = toml.loads(out.decode())
config["state"] = self.state_dir
self.__address = config["grpc"]["address"] = os.path.join(
self.state_dir, "containerd.sock"
)
config["plugins"]["io.containerd.grpc.v1.cri"]["containerd"][
"snapshotter"
] = "nydus"
config["plugins"]["io.containerd.grpc.v1.cri"]["sandbox_image"] = "google/pause"
config["plugins"]["io.containerd.grpc.v1.cri"]["containerd"][
"disable_snapshot_annotations"
] = False
config["plugins"]["io.containerd.runtime.v1.linux"]["no_shim"] = True
self.__root = tempfile.TemporaryDirectory(
dir=self.anchor.workspace, suffix="root"
)
config["root"] = self.__root.name
config["proxy_plugins"] = {
"nydus": {
"type": "snapshot",
"address": self.snapshotter.sock(),
}
}
self.config = tempfile.NamedTemporaryFile(mode="w", suffix="config.toml")
self.config.write(toml.dumps(config))
self.config.flush()
return self
@property
def root(self):
return self.__root.name
def run(self):
_, self.p = utils.run(
[self.containerd_bin, "--config", self.config.name],
wait=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
@property
def address(self):
return self.__address
def remove_image_sync(self, repo):
cmd = [
"ctr",
"-n",
"k8s.io",
"-a",
self.__address,
"images",
"rm",
repo,
"--sync",
]
ret, out = utils.execute(cmd)
assert ret
def shutdown(self):
self.p.terminate()
self.p.wait()


@ -0,0 +1,32 @@
import utils
import os
import sys
from argparse import ArgumentParser
def collect_coverage(source_dir, target_dir, report):
"""
Example (grcov invocation):
grcov ./target/debug/ -s . -t lcov --llvm --branch --ignore-not-existing -o ./target/debug/coverage/
"""
cmd = f"framework/bin/grcov {target_dir} -s {source_dir} -t html --llvm --branch \
--ignore-not-existing -o {report}/coverage_report"
utils.execute(cmd, shell=True)
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--source", help="path to source code", type=str)
parser.add_argument("--target", help="path to build target directory", type=str)
args = parser.parse_args()
source = args.source
target = args.target
report = "."
os.environ["RUSTFLAGS"] = "-Zinstrument-coverage"
collect_coverage(source, target, report)


@ -0,0 +1,241 @@
import yaml
import tempfile
from string import Template
import json
import time
import uuid
import utils
POD_CONF = """
metadata:
attempt: 1
name: nydus-sandbox
namespace: default
uid: ${uid}
log_directory: /tmp
linux:
security_context:
namespace_options:
network: 2
"""
# annotations:
# "io.containerd.osfeature": "nydus.remoteimage.v1"
CONTAINER_CONF = """
metadata:
name: ${container_name}
image:
image: ${image}
log_path: container.1.log
command: ["sh"]
"""
class Cri:
def __init__(self, runtime_endpoint, image_endpoint) -> None:
config = dict()
config["runtime-endpoint"] = f"unix://{runtime_endpoint}"
config["image-endpoint"] = f"unix://{image_endpoint}"
config["timeout"] = 10
config["debug"] = False
self._config = tempfile.NamedTemporaryFile(
mode="w+", suffix="crictl.config", delete=False
)
yaml.dump(config, self._config)
def run_container(
self,
image,
container_name,
):
container_config = tempfile.NamedTemporaryFile(
mode="w+", suffix="container.config.yaml", delete=True
)
pod_config = tempfile.NamedTemporaryFile(
mode="w+", suffix="pod.config.yaml", delete=True
)
print(pod_config.read())
_s = Template(CONTAINER_CONF).substitute(
image=image, container_name=container_name
)
container_config.write(_s)
container_config.flush()
pod_config.write(
Template(POD_CONF).substitute(
uid=uuid.uuid4(),
)
)
pod_config.flush()
ret, _ = utils.execute(
[
"crictl",
"--config",
self._config.name,
"run",
container_config.name,
pod_config.name,
],
print_err=True,
)
assert ret
def stop_rm_container(self, id):
cmd = [
"crictl",
"--config",
self._config.name,
"stop",
id,
]
ret, _ = utils.execute(cmd)
assert ret
cmd = [
"crictl",
"--config",
self._config.name,
"rm",
id,
]
ret, _ = utils.execute(cmd)
assert ret
def list_images(self):
cmd = [
"crictl",
"--config",
self._config.name,
"images",
"--output",
"json",
]
ret, out = utils.execute(cmd)
assert ret
images = json.loads(out)
return images["images"]
def remove_image(self, repo):
images = self.list_images()
for i in images:
# Example:
# {'id': 'sha256:cc6e5af55020252510374deecb0168fc7170b5621e03317cb7c4192949becb9a',
# 'repoTags': ['reg.docker.alibaba-inc.com/chge-nydus-test/busybox:latest_converted'], 'repoDigests': ['reg.docker.alibaba-inc.com/chge-nydus-test/busybox@sha256:07592f0848a6752de1b58f06b8194dbeaff1cb3314ab3225b6ab698abac1185d'], 'size': '998569', 'uid': None, 'username': ''}
if i["repoTags"][0] == repo:
id = i["id"]
cmd = [
"crictl",
"--config",
self._config.name,
"rmi",
id,
]
ret, _ = utils.execute(cmd)
assert ret
return True
assert False
return False
def check_container_status(self, name, timeout):
"""
{
"containers": [
{
"id": "4098985ed96655dbd43eef2d6502197598b72fe40cfec4cb77466aedf755807f",
"podSandboxId": "2ae536d3481130d8a47a05fb6ffeb303cb3d57b29e8744d3ffcbbc27377ece3d",
"metadata": {
"name": "nydus-container",
"attempt": 0
},
"image": {
"image": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql:latest_converted"
},
"imageRef": "sha256:68e06967547192d5eaf406a21ea39b3131f86e9dc8fb8b75e2437a1bde8d0aad",
"state": "CONTAINER_EXITED",
"createdAt": "1610018967168325132",
"labels": {
},
"annotations": {
}
}
]
}
---
{
"status": {
"id": "4098985ed96655dbd43eef2d6502197598b72fe40cfec4cb77466aedf755807f",
"metadata": {
"attempt": 0,
"name": "nydus-container"
},
"state": "CONTAINER_EXITED",
"createdAt": "2021-01-07T19:29:27.168325132+08:00",
"startedAt": "2021-01-07T19:29:28.172706527+08:00",
"finishedAt": "2021-01-07T19:29:32.882263863+08:00",
"exitCode": 0,
"image": {
"image": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql:latest_converted"
},
"imageRef": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql@sha256:ebadc23a8b2cbd468cb86ab5002dc85848e252de71cdc4002481f63a1d3c90be",
"reason": "Completed",
"message": "",
"labels": {},
"annotations": {},
"mounts": [],
"logPath": "/tmp/container.1.log"
},
"""
elapsed = 0
while elapsed <= timeout:
ps_cmd = [
"crictl",
"--config",
self._config.name,
"ps",
"-a",
"--output",
"json",
]
ret, out = utils.execute(
ps_cmd,
print_err=True,
)
assert ret
containers = json.loads(out)
for c in containers["containers"]:
# The container is found, no need to wait any longer
if c["metadata"]["name"] == name:
id = c["id"]
inspect_cmd = [
"crictl",
"--config",
self._config.name,
"inspect",
id,
]
ret, out = utils.execute(inspect_cmd)
assert ret
status = json.loads(out)
if status["status"]["exitCode"] == 0:
return id, True
else:
return None, False
time.sleep(1)
elapsed += 1
return None, False
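
A hypothetical sketch of driving the `Cri` wrapper above; the socket path is taken from the `Containerd` helper's `state_dir` earlier in this diff, and the image reference is only an illustration:

```python
# Both CRI endpoints point at the containerd instance started for the tests.
sock = "/run/nydus-test_containerd/containerd.sock"
cri = Cri(runtime_endpoint=sock, image_endpoint=sock)

# Launch a sandbox plus container from a converted nydus image, wait for it to
# exit successfully, then remove it.
cri.run_container("registry.example.com/nydus/busybox:latest_converted", "nydus-container")
container_id, ok = cri.check_container_status("nydus-container", timeout=60)
assert ok
cri.stop_rm_container(container_id)
```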


@ -0,0 +1,56 @@
from linux_command import LinuxCommand
import utils
import subprocess
class DdParam(LinuxCommand):
def __init__(self, command_name):
LinuxCommand.__init__(self, command_name)
self.param_name_prefix = ""
def bs(self, block_size):
return self.set_param("bs", block_size)
def input(self, input_file):
return self.set_param("if", input_file)
def output(self, output_file):
return self.set_param("of", output_file)
def count(self, count):
return self.set_param("count", count)
def iflag(self, iflag):
return self.set_param("iflag", iflag)
def skip(self, len):
return self.set_param("skip", len)
class DD:
"""
dd always tries to copy the entire file.
"""
def __init__(self):
self.dd_params = DdParam("dd")
def create_command(self):
return self.dd_params
def extend_command(self):
return self.dd_params
def __str__(self):
return str(self.dd_params)
def run(self):
ret, _ = utils.run(
str(self),
verbose=False,
wait=True,
shell=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT,
)
return ret
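
A hypothetical usage sketch for the `DD` wrapper above, assuming `LinuxCommand.set_param()` returns the parameter object so the calls chain (which is how the `DdParam` methods are written):

```python
dd = DD()
# Builds: dd if=/dev/urandom of=/tmp/test_blob bs=1M count=16
dd.create_command().input("/dev/urandom").output("/tmp/test_blob").bs("1M").count(16)
assert dd.run()
```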


@ -0,0 +1,313 @@
from utils import pushd
import os
from random import randint
import shutil
import logging
import random
import string
from fallocate import fallocate, FALLOC_FL_PUNCH_HOLE, FALLOC_FL_KEEP_SIZE
from utils import Size, Unit
import xattr
"""
Generate and distribute target files(regular, symlink, directory), link files.
File with holes(sparse file)
Hardlink
1. Generate directory tree structure firstly.
"""
CHINESE_TABLE = "搀掺蝉馋谗缠铲产阐颤昌猖场尝常长偿肠厂敞畅唱倡超抄钞朝嘲潮巢吵炒车扯撤掣彻澈郴臣辰尘晨忱沉\
愤粪丰封枫蜂峰锋风疯烽逢冯缝讽奉凤佛否夫敷肤孵扶拂辐幅氟符伏俘服浮涪福袱弗甫抚辅俯釜斧脯腑\
楔些歇蝎鞋协挟携邪斜胁谐写械卸蟹懈泄泻谢屑薪芯锌欣辛新忻心信衅星腥猩惺兴刑型形邢行醒幸杏性\
寅饮尹引隐印英樱婴鹰应缨莹萤营荧蝇迎赢盈影颖硬映哟拥佣臃痈庸雍踊蛹咏泳涌永恿勇用幽优悠忧尤\
庥庠庹庵庾庳赓廒廑廛廨廪膺忄忉忖忏怃忮怄忡忤忾怅怆忪忭忸怙怵怦怛怏怍怩怫怊怿怡恸恹恻恺恂恪"
def gb2312(length):
for i in range(0, length):
c = random.choice(CHINESE_TABLE)
yield c.encode("gb2312")
class Distributor:
def __init__(self, top_dir: str, levels: int, max_sub_directories: int):
self.top_dir = top_dir
self.levels = levels
self.max_sub_directories = max_sub_directories
        # All files generated by this distributor, no matter via `_put_single_file()`
        # or `put_multiple_files()`, will be recorded in this list.
self.files = []
self.symlinks = []
self.dirs = []
self.hardlinks = {}
def _relative_path_to_top(self, path: str) -> str:
return os.path.relpath(path, start=self.top_dir)
def _generate_one_level(self, level, cur_dir):
dirs = []
with pushd(cur_dir):
# At least, each level has a child directory
for index in range(0, randint(1, self.max_sub_directories)):
d_name = f"DIR.{level}.{index}"
try:
d = os.mkdir(d_name)
except FileExistsError:
pass
dirs.append(d_name)
if level >= self.levels:
return
for d in dirs:
self._generate_one_level(level + 1, d)
        # Only the top-level call's directories are returned; they are the roots of the planted tree.
return dirs
def generate_tree(self):
"""DIR.LEVEL.INDEX"""
dirs = self._generate_one_level(0, self.top_dir)
self.planted_tree_root = dirs[:]
def _random_pos_dir(self):
level = randint(0, self.levels)
with pushd(os.path.join(self.top_dir, random.choice(self.planted_tree_root))):
while level:
files = os.listdir()
level -= 1
files = [f for f in files if os.path.isdir(f)]
if len(files) != 0:
next_level = files[randint(0, len(files) - 1)]
else:
break
os.chdir(next_level)
return os.getcwd()
def put_hardlinks(self, count):
def _create_new_source():
source_file = os.path.join(
self._random_pos_dir(), Distributor.generate_random_name(60)
)
fd = os.open(source_file, os.O_CREAT | os.O_RDWR)
os.write(fd, os.urandom(randint(0, 1024 * 1024 + 7)))
os.close(fd)
return source_file
source_file = _create_new_source()
self.hardlinks[source_file] = []
self.hardlink_aliases = []
for i in range(0, count):
if randint(0, 16) % 4 == 0:
source_file = _create_new_source()
self.hardlinks[source_file] = []
link = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_name(50, suffix="hardlink"),
)
logging.debug(link)
# TODO: `link` may be too long to link, so better to change directory first!
os.link(source_file, link)
self.hardlinks[source_file].append(self._relative_path_to_top(link))
self.hardlink_aliases.append(self._relative_path_to_top(link))
return self.hardlink_aliases[-count:]
def put_symlinks(self, count, chinese=False):
"""
Generate symlinks pointing to regular files or directories.
"""
def _create_new_source():
this_path = ""
if randint(0, 123) % 4 == 0:
self.put_directories(1)
this_path = self.dirs[-1]
del self.dirs[-1]
else:
_, this_path = self._put_single_file(
self._random_pos_dir(),
Size(randint(0, 100), Unit.KB),
chinese=chinese,
)
del self.files[-1]
return this_path
source_file = _create_new_source()
for i in range(0, count):
if randint(0, 12) % 3 == 0:
source_file = _create_new_source()
symlink = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_length_name(20, suffix="symlink"),
)
            # XFS limits the symlink target path (stored inside the symlink itself) to 1024 bytes.
if len(source_file) >= 1024:
continue
if randint(0, 12) % 5 == 0:
source_file = os.path.relpath(source_file, start=self.top_dir)
try:
os.symlink(source_file, symlink)
except FileExistsError as e:
                # Sometimes creating the symlink fails because a file with the same name
                # already exists. This should rarely happen if `generate_random_length_name`
                # is truly random.
logging.exception(e)
continue
if randint(0, 12) % 4 == 0:
try:
if os.path.isdir(source_file):
try:
os.rmdir(source_file)
except Exception:
pass
else:
os.unlink(source_file)
except FileNotFoundError:
pass
# Save symlink relative path so that we can tell which symlinks were put.
self.symlinks.append(self._relative_path_to_top(symlink))
return self.symlinks[-count:]
def put_directories(self, count):
for i in range(0, count):
dst_path = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_name(30, suffix="dir"),
)
            # `dst_path` may be very long, so create the directories one component at a time.
dst_relpath = os.path.relpath(dst_path, start=self.top_dir)
with pushd(self.top_dir):
for d in dst_relpath.split("/")[0:]:
try:
os.chdir(d)
except FileNotFoundError:
os.mkdir(d)
os.chdir(d)
self.dirs.append(os.path.relpath(dst_path, start=self.top_dir))
return self.dirs[-count:]
@staticmethod
def generate_random_name(length, suffix=None, chinese=False):
if chinese:
result_str = "".join([s.decode("gb2312") for s in gb2312(length)])
else:
letters = string.ascii_letters
result_str = "".join(random.choice(letters) for i in range(length))
if suffix is not None:
result_str += f".{suffix}"
return result_str
@staticmethod
def generate_random_length_name(max_length, suffix=None, chinese=False):
        # Shrink max_length to leave room for the suffix, and keep a reasonably large
        # minimum length to reduce the chance of name conflicts.
        length = randint((max_length - 9) // 2, max_length - 9)
        return Distributor.generate_random_name(length, suffix, chinese)
def _put_single_file(
self,
parent_dir,
file_size: Size,
specified_name=None,
letters=False,
chinese=False,
name_len=32,
):
if specified_name is None:
name = Distributor.generate_random_length_name(
name_len, suffix="regular", chinese=chinese
)
else:
name = specified_name
this_path = os.path.join(parent_dir, name)
with pushd(parent_dir):
if chinese:
fd = os.open(name.encode("gb2312"), os.O_CREAT | os.O_RDWR)
else:
fd = os.open(name.encode("ascii"), os.O_CREAT | os.O_RDWR)
if file_size.B != 0:
left = file_size.B
logging.debug("Putting file %s", this_path)
while left:
length = Size(1, Unit.MB).B if Size(1, Unit.MB).B < left else left
if not letters:
left -= os.write(fd, os.urandom(length))
else:
picked_list = "".join(
random.choices(string.ascii_lowercase[1:4], k=length)
)
left -= os.write(fd, picked_list.encode())
os.close(fd)
self.files.append(self._relative_path_to_top(this_path))
return name, this_path
def put_single_file(self, file_size: Size, pos=None, name=None):
self._put_single_file(
self._random_pos_dir() if pos is None else pos,
file_size,
letters=True,
specified_name=name,
)
return self.files[-1]
def put_single_file_with_xattr(self, file_size: Size, kv, pos=None, name=None):
self._put_single_file(
self._random_pos_dir() if pos is None else pos,
file_size,
letters=True,
specified_name=name,
)
p = os.path.join(self.top_dir, self.files[-1])
xattr.setxattr(p, kv[0].encode(), kv[1].encode())
def put_multiple_files(self, count: int, max_size: Size):
for i in range(0, count):
cur_size = Size.from_B(randint(0, max_size.B))
self._put_single_file(self._random_pos_dir(), cur_size)
return self.files[-count:]
def put_multiple_chinese_files(self, count: int, max_size: Size):
for i in range(0, count):
cur_size = Size.from_B(randint(0, max_size.B))
self._put_single_file(self._random_pos_dir(), cur_size, chinese=True)
return self.files[-count:]
def put_multiple_empty_files(self, count):
for i in range(0, count):
self._put_single_file(self._random_pos_dir(), Size(0, Unit.Byte))
return self.files[-count:]
if __name__ == "__main__":
top_dir = "/mnt/gen_tree"
if os.path.exists(top_dir):
shutil.rmtree(top_dir)
try:
os.makedirs(top_dir, exist_ok=True)
except FileExistsError:
pass
dist = Distributor(top_dir, 2, 5)
dist.generate_tree()
print(dist._random_pos_dir())
dist.put_hardlinks(10)
Distributor.generate_random_name(2000, suffix="sym")
dist._put_single_file(top_dir, Size(100, Unit.MB))
dist.put_multiple_files(1000, Size(4, Unit.KB))

View File

@@ -0,0 +1,17 @@
from utils import execute, logging_setup
class Erofs:
def __init__(self) -> None:
pass
def mount(self, fsid, mountpoint):
cmd = f"mount -t erofs -o fsid={fsid} none {mountpoint}"
self.mountpoint = mountpoint
r, _ = execute(cmd, shell=True)
assert r
def umount(self):
cmd = f"umount {self.mountpoint}"
r, _ = execute(cmd, shell=True)
assert r
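
# Usage sketch (illustrative only; assumes root privileges and a nydusd running in
# fscache mode that has registered the given fsid -- the fsid and mountpoint below
# are hypothetical):
#
#   erofs = Erofs()
#   erofs.mount(fsid="<fscache-domain-id>", mountpoint="/mnt/erofs_example")
#   ...  # exercise the mounted file system
#   erofs.umount()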

View File

@@ -0,0 +1,111 @@
import datetime
import utils
import json
import os
from types import SimpleNamespace as Namespace
from linux_command import LinuxCommand
class FioParam(LinuxCommand):
def __init__(self, fio, command_name):
LinuxCommand.__init__(self, command_name)
self.fio = fio
self.command_name = command_name
def block_size(self, size):
return self.set_param("blocksize", size)
def direct(self, value: bool = True):
return self.set_param("direct", value)
def size(self, size):
return self.set_param("size", size)
def io_mode(self, io_mode):
return self.set_param("io_mode", io_mode)
def ioengine(self, engine):
return self.set_param("ioengine", engine)
def filename(self, filename):
return self.set_param("filename", filename)
def read_write(self, readwrite):
return self.set_param("readwrite", readwrite)
def iodepth(self, iodepth):
return self.set_param("iodepth", iodepth)
def numjobs(self, jobs):
self.set_flags("group_reporting")
return self.set_param("numjobs", jobs)
class Fio:
def __init__(self):
self.jobs = []
self.base_cmd_params = FioParam(self, "fio")
self.global_cmd_params = FioParam(self, "fio")
def create_command(self, *pattern):
self.global_cmd_params.set_flags("group_reporting")
p = "_".join(pattern)
try:
os.mkdir("benchmark_reports")
except FileExistsError:
pass
self.fio_report_file = os.path.join(
"benchmark_reports",
            f'fio_run_{p}_{datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")}',
)
self.base_cmd_params.set_param("output-format", "json").set_param(
"output", self.fio_report_file
)
return self.global_cmd_params
def expand_command(self):
return self.global_cmd_params
def __str__(self):
        fio_params = FioParam(self, "fio")
        fio_params.command_param_dict.update(self.base_cmd_params.command_param_dict)
        fio_params.command_param_dict.update(self.global_cmd_params.command_param_dict)
        fio_params.command_flags.extend(self.global_cmd_params.command_flags)
        fio_params.set_param("name", "fio")
        command = str(fio_params)
        return command
def run(self):
ret, _ = utils.run(
str(self),
wait=True,
shell=True,
)
assert ret == 0
def get_result(self, title_line, *keys):
with open(self.fio_report_file) as f:
data = json.load(f, object_hook=lambda d: Namespace(**d))
if hasattr(data, "jobs"):
jobs = data.jobs
assert len(jobs) == 1
job = jobs[0]
print("")
result = f"""
{title_line}
block size: {getattr(data, 'global options').bs}
direct: {getattr(data, 'global options').direct}
ioengine: {getattr(data, 'global options').ioengine}
runtime: {job.read.runtime}
iops: {job.read.iops}
bw(KB/S): {job.read.bw}
latency/ms: min: {job.read.lat_ns.min/1e6}, max: {job.read.lat_ns.max/1e6}, mean: {job.read.lat_ns.mean/1e6}
"""
print(result)
return result
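
if __name__ == "__main__":
    # Usage sketch (illustrative only; fio must be installed and the file path below
    # is hypothetical -- fio lays the file out itself for a read workload):
    fio = Fio()
    fio.create_command("example", "read").filename("/tmp/fio_example.dat").size(
        "4M"
    ).block_size("64k").ioengine("psync").read_write("read")
    fio.run()
    print("fio JSON report written to", fio.fio_report_file)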

View File

@@ -0,0 +1,45 @@
class LinuxCommand:
def __init__(self, command_name):
self.command_name = command_name
self.command_param_dict = {}
self.command_flags = []
self.command_name = command_name
self.param_name_prefix = "--"
self.param_separator = " "
self.param_value_prefix = " "
self.param_value_list_separator = ","
self.subcommand = None
def set_subcommand(self, subcommand):
self.subcommand = subcommand
return self
def set_param(self, key, val):
self.command_param_dict[key] = val
return self
def set_flags(self, *new_flag):
for f in new_flag:
self.command_flags.append(f)
return self
def remove_param(self, key):
try:
del self.command_param_dict[key]
except KeyError:
pass
def __str__(self):
if self.subcommand is not None:
command = self.command_name + " " + self.subcommand
else:
command = self.command_name
for key, value in self.command_param_dict.items():
command += (
f"{self.param_separator}{self.param_name_prefix}"
f"{key}{self.param_value_prefix}{value}"
)
for flag in self.command_flags:
command += f"{self.param_separator}{self.param_name_prefix}{flag}"
return command
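
if __name__ == "__main__":
    # Usage sketch (illustrative only; the command below is hypothetical): show how a
    # subcommand, parameters and flags are rendered into a single command line string.
    cmd = LinuxCommand("nydusify")
    cmd.set_subcommand("convert")
    cmd.set_param("source", "localhost:5000/busybox:latest").set_param(
        "target", "localhost:5000/busybox:latest_nydus"
    ).set_flags("backend-force-push")
    # -> nydusify convert --source localhost:5000/busybox:latest
    #    --target localhost:5000/busybox:latest_nydus --backend-force-push
    print(str(cmd))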

View File

@@ -0,0 +1,338 @@
import os
import shutil
from inspect import stack, getframeinfo
from containerd import Containerd
from snapshotter import Snapshotter
import utils
from stat import *
import time
import logging
import sys
import signal
import tempfile
import json
import platform
NYDUSD_BIN = "nydusd"
NYDUS_IMG_BIN = "nydus-image"
from conf import ANCHOR_PATH
class NydusAnchor:
"""
Test environment setup, like,
- location of test target executable
    - path to the directory used for data verification by comparing digests.
    - wrapper for the test IO engine.
"""
def __init__(self, path=None):
"""
:rootfs: An alias for bootstrap file.
:verify_dir: Source directory from which to create this test image.
"""
self.machine = platform.machine()
if path is None:
path = ANCHOR_PATH
try:
with open(path, "r") as f:
kwargs = json.load(f)
except FileNotFoundError:
logging.error("Please define your own anchor file! [anchor_conf.json]")
sys.exit(1)
self.workspace = kwargs.pop("workspace", ".")
# Path to be searched for nydus binaries
self.nydus_project = kwargs.pop("nydus_project")
        # In case we want to build an image on top of an existing image,
        # create an image from this parent rootfs first.
        # TODO: Better to specify a different file system so as to have the same inode numbers.
registry_conf = kwargs.pop("registry")
self.registry_url = registry_conf["registry_url"]
self.registry_auth = registry_conf["registry_auth"]
self.registry_namespace = registry_conf["registry_namespace"]
try:
self.backend_proxy_url = registry_conf["backend_proxy_url"]
self.backend_proxy_blobs_dir = registry_conf["backend_proxy_blobs_dir"]
os.makedirs(self.backend_proxy_blobs_dir, exist_ok=True)
except KeyError:
pass
artifacts = kwargs.pop("artifacts")
self.containerd_bin = artifacts["containerd"]
try:
self.ossutil_bin = artifacts["ossutil_bin"]
except KeyError:
self.ossutil_bin = (
"framework/bin/ossutil64.x86"
if self.machine != "aarch64"
else "framework/bin/ossutil64.aarch64"
)
nydus_runtime_conf = kwargs.pop("nydus_runtime_conf")
self.log_level = nydus_runtime_conf["log_level"]
profile = nydus_runtime_conf["profile"]
self.fs_version = kwargs.pop("fs_version", 6)
try:
oss_conf = kwargs.pop("oss")
self.oss_ak_id = oss_conf["ak_id"]
self.oss_ak_secret = oss_conf["ak_secret"]
self.oss_bucket = oss_conf["bucket"]
self.oss_endpoint = oss_conf["endpoint"]
except KeyError:
pass
self.logging_file_path = kwargs.pop("logging_file")
self.logging_file = self.decide_logging_file()
self.dustbin = []
self.tmp_dirs = []
self.localfs_workdir = os.path.join(self.workspace, "localfs_workdir")
self.nydusify_work_dir = os.path.join(self.workspace, "nydusify_work_dir")
# Where to mount this rafs
self.mountpoint = os.path.join(self.workspace, "rafs_mnt")
# From which directory to build rafs image
self.blobcache_dir = os.path.join(self.workspace, "blobcache_dir")
self.overlayfs = os.path.join(self.workspace, "overlayfs_mnt")
self.source_dir = os.path.join(self.workspace, "gen_rootfs")
self.parent_rootfs = os.path.join(self.workspace, "parent_rootfs")
self.fscache_dir = os.path.join(self.workspace, "fscache")
os.makedirs(self.fscache_dir, exist_ok=True)
link_target = kwargs.pop("target")
if link_target == "gnu":
self.binary_release_dir = os.path.join(
self.nydus_project, "target/release"
)
elif link_target == "musl":
arch = platform.machine()
self.binary_release_dir = os.path.join(
self.nydus_project,
f"target/{arch}-unknown-linux-musl",
"release",
)
self.build_dir = os.path.join(self.nydus_project, "target/debug")
self.binary_debug_dir = os.path.join(self.nydus_project, "target/debug")
if profile == "release":
self.binary_dir = self.binary_release_dir
elif profile == "debug":
self.binary_dir = self.binary_debug_dir
else:
sys.exit()
self.nydusd_bin = os.path.join(self.binary_dir, NYDUSD_BIN)
self.image_bin = os.path.join(self.binary_dir, NYDUS_IMG_BIN)
self.nydusify_bin = os.path.join(
self.nydus_project, "contrib", "nydusify", "cmd", "nydusify"
)
self.snapshotter_bin = kwargs.pop(
"snapshotter",
os.path.join(
self.nydus_project,
"contrib",
"nydus-snapshotter",
"bin",
"containerd-nydus-grpc",
),
)
self.images_array = kwargs.pop("images")["images_array"]
try:
shutil.rmtree(self.blobcache_dir)
except FileNotFoundError:
pass
os.makedirs(self.blobcache_dir)
os.makedirs(self.mountpoint, exist_ok=True)
os.makedirs(self.overlayfs, exist_ok=True)
def put_dustbin(self, path):
self.dustbin.append(path)
def cleanup_dustbin(self):
for p in self.dustbin:
if isinstance(p, utils.ArtifactProcess):
p.shutdown()
else:
os.unlink(p)
def check_prerequisites(self):
        assert os.path.exists(self.source_dir), "Verification directory does not exist!"
        assert os.path.exists(self.blobcache_dir), "Blobcache directory does not exist!"
        assert (
            len(os.listdir(self.blobcache_dir)) == 0
        ), "Blobcache directory is not empty!"
assert not os.path.ismount(self.mountpoint), "Mount point was already mounted"
def clear_blobcache(self):
try:
            if len(os.listdir(self.blobcache_dir)) == 0:
return
# Under some cases, blob cache dir is temporarily mounted.
if os.path.ismount(self.blobcache_dir):
utils.execute(["umount", self.blobcache_dir])
shutil.rmtree(self.blobcache_dir)
logging.info("Cleared cache %s", self.blobcache_dir)
os.mkdir(self.blobcache_dir)
except Exception as exc:
print(exc)
def prepare_scratch_dir(self):
self.scratch_dir = os.path.join(
self.workspace,
os.path.basename(os.path.normpath(self.source_dir)) + "_scratch",
)
        # We don't delete the scratch dir because it helps to analyze problems.
        # But if another round of tests begins, there is no need to keep it anymore.
if os.path.exists(self.scratch_dir):
shutil.rmtree(self.scratch_dir)
shutil.copytree(self.source_dir, self.scratch_dir, symlinks=True)
def prepare_scratch_parent_dir(self):
self.scratch_parent_dir = os.path.join(
self.workspace,
os.path.basename(os.path.normpath(self.parent_rootfs)) + "_scratch",
)
# We don't delete the scratch dir because it helps to analyze problems.
# But if another round of test trip begins, no need to keep it anymore.
if os.path.exists(self.scratch_parent_dir):
shutil.rmtree(self.scratch_parent_dir)
shutil.copytree(self.parent_rootfs, self.scratch_parent_dir, symlinks=True)
@staticmethod
def check_nydusd_health():
pid_list = utils.get_pid(NYDUSD_BIN)
if len(pid_list) == 1:
return True
else:
logging.error("Captured nydusd process %s", pid_list)
return False
@staticmethod
def capture_running_nydusd():
pid_list = utils.get_pid(NYDUSD_BIN)
if len(pid_list) != 0:
logging.info("Captured nydusd process %s", pid_list)
            # Kill remaining nydusd so as not to affect the following cases.
# utils.kill_all_processes(NYDUSD_BIN, signal.SIGINT)
time.sleep(2)
return True
else:
return False
def mount_overlayfs(self, layers, base=os.getcwd()):
"""
We usually use overlayfs to act as a verifying dir. Some cases may scratch
the original source dir.
:source_dir: A directory acts on a layer of overlayfs, from which to build the image
:layers: tail item from layers is the bottom layer.
Cited:
```
Multiple lower layers
---------------------
Multiple lower layers can now be given using the the colon (":") as a
separator character between the directory names. For example:
mount -t overlay overlay -o lowerdir=/lower1:/lower2:/lower3 /merged
As the example shows, "upperdir=" and "workdir=" may be omitted. In
that case the overlay will be read-only.
The specified lower directories will be stacked beginning from the
rightmost one and going left. In the above example lower1 will be the
top, lower2 the middle and lower3 the bottom layer.
```
"""
handled_layers = [l.replace(":", "\\:") for l in layers]
if len(handled_layers) == 1:
self.sticky_lower_dir = tempfile.TemporaryDirectory(dir=self.workspace)
handled_layers.append(self.sticky_lower_dir.name)
layers_set = ":".join(handled_layers)
with utils.pushd(base):
cmd = [
"mount",
"-t",
"overlay",
"-o",
f"lowerdir={layers_set}",
"rafs_ci_overlay",
self.overlayfs,
]
ret, _ = utils.execute(cmd)
assert ret
def umount_overlayfs(self):
cmd = ["umount", self.overlayfs]
ret, _ = utils.execute(cmd)
assert ret
def decide_logging_file(self):
try:
p = os.environ["LOG_FILE"]
return open(p, "w+")
except KeyError:
if self.logging_file_path == "stdin":
return sys.stdin
elif self.logging_file_path == "stderr":
return sys.stderr
else:
return open(self.logging_file_path, "w+")
def check_fuse_conn(func):
    last_conn_id = 0
    print("last conn id %d" % last_conn_id)
    def wrapped():
        nonlocal last_conn_id
        conn_id = func()
        if last_conn_id != 0:
            assert last_conn_id == conn_id
        else:
            last_conn_id = conn_id
        return conn_id
    return wrapped
# @check_fuse_conn
def inspect_sys_fuse():
sys_fuse_path = "/sys/fs/fuse/connections"
try:
conns = os.listdir(sys_fuse_path)
frameinfo = getframeinfo(stack()[1][0])
logging.info(
"%d | %d fuse connections: %s" % (frameinfo.lineno, len(conns), conns)
)
conn_id = int(conns[0])
return conn_id
except Exception as exc:
logging.exception(exc)

View File

@@ -0,0 +1,351 @@
import logging
import subprocess
import tempfile
import utils
from nydus_anchor import NydusAnchor
import os
import json
import posixpath
from linux_command import LinuxCommand
import shutil
import tarfile
import re
class NydusifyParam(LinuxCommand):
def __init__(self, command_name):
super().__init__(command_name)
self.param_name_prefix = "--"
def source(self, source):
return self.set_param("source", source)
def target(self, target):
return self.set_param("target", target)
def nydus_image(self, nydus_image):
return self.set_param("nydus-image", nydus_image)
def work_dir(self, work_dir):
return self.set_param("work-dir", work_dir)
def fs_version(self, fs_version):
return self.set_param("fs-version", str(fs_version))
class Nydusify(LinuxCommand):
def __init__(self, anchor: NydusAnchor):
self.image_builder = anchor.image_bin
self.nydusify_bin = anchor.nydusify_bin
self.registry_url = anchor.registry_url
self.work_dir = anchor.nydusify_work_dir
self.anchor = anchor
# self.generate_auth_config(self.registry_url, anchor.registry_auth)
# os.environ["DOCKER_CONFIG"] = self.__temp_auths_config_dir.name
super().__init__(self.image_builder)
self.cmd = NydusifyParam(self.nydusify_bin)
self.cmd.nydus_image(self.image_builder).work_dir(self.work_dir)
def convert(self, source, suffix="_converted", target_ref=None, fs_version=5):
"""
A reference to image looks like registry/namespace/repo:tag
Before conversion begins, split the reference into those parts.
"""
# Notice: localhost:5000/busybox:latest
self.__repo = posixpath.basename(source).split(":")[0]
self.__converted_image = (
posixpath.basename(source) + suffix if suffix is not None else ""
)
self.__source = source
self.cmd.set_subcommand("convert")
if target_ref is None:
target_ref = posixpath.join(
self.anchor.registry_url,
self.anchor.registry_namespace,
self.__converted_image,
)
self.cmd.source(source).target(target_ref).fs_version(fs_version)
self.target_ref = target_ref
cmd = str(self.cmd)
with utils.timer(
f"### Rafs V{fs_version} Image conversion time including Pull and Push ###"
):
_, p = utils.run(
cmd,
False,
shell=True,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
p.wait()
assert p.returncode == 0
def check(self, source, suffix="_converted", target_ref=None, fs_version=5):
"""
A reference to image looks like registry/namespace/repo:tag
Before conversion begins, split the reference into those parts.
"""
# Notice: localhost:5000/busybox:latest
self.__repo = posixpath.basename(source).split(":")[0]
self.__converted_image = (
posixpath.basename(source) + suffix if suffix is not None else ""
)
self.__source = source
self.cmd.set_subcommand("check")
self.cmd.set_param("nydusd", self.anchor.nydusd_bin)
self.cmd.set_param("nydus-image", self.anchor.image_bin)
if target_ref is None:
target_ref = posixpath.join(
self.anchor.registry_url,
self.anchor.registry_namespace,
self.__converted_image,
)
self.cmd.source(source).target(target_ref).fs_version(fs_version)
self.target_ref = target_ref
cmd = str(self.cmd)
with utils.timer("### Image Check Duration ###"):
_, p = utils.run(
cmd,
False,
shell=True,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
p.wait()
assert p.returncode == 0
def docker_v2(self):
self.cmd.set_flags("docker-v2-format")
return self
def force_push(self):
self.cmd.set_flags("backend-force-push")
return self
def platform(self, p):
self.cmd.set_param("platform", p)
return self
def chunk_dict(self, chunk_dict_arg):
self.cmd.set_param("chunk-dict", chunk_dict_arg)
return self
def with_new_work_dir(self, work_dir):
self.work_dir = work_dir
self.cmd.set_param("work-dir", work_dir)
return self
def enable_multiplatfrom(self, enable: bool):
if enable:
self.cmd.set_flags("multi-platform")
return self
def build_cache_ref(self, ref):
self.cmd.set_param("build-cache", ref)
return self
def backend_type(self, type, oss_object_prefix=None, filed=False):
config = {
"endpoint": self.anchor.oss_endpoint,
"access_key_id": self.anchor.oss_ak_id,
"access_key_secret": self.anchor.oss_ak_secret,
"bucket_name": self.anchor.oss_bucket,
}
if oss_object_prefix is not None:
config["object_prefix"] = oss_object_prefix
self.cmd.set_param("backend-type", type)
if filed:
with open("oss_conf.json", "w") as f:
json.dump(config, f)
self.cmd.set_param("backend-config-file", "oss_conf.json")
else:
self.cmd.set_param("backend-config", json.dumps(json.dumps(config)))
return self
def nydus_image_output(self):
with utils.pushd(os.path.join(self.work_dir, "bootstraps")):
outputs = [o for o in os.listdir() if re.match(r".*json$", o) is not None]
outputs.sort(key=lambda x: int(x.split("-")[0]))
with open(outputs[0], "r") as f:
return json.load(f)
@property
def original_repo(self):
return self.__repo
@property
def converted_repo(self):
return posixpath.join(self.anchor.registry_namespace, self.__repo)
@property
def converted_image(self):
return posixpath.join(
self.registry_url, self.anchor.registry_namespace, self.__converted_image
)
def locate_bootstrap(self):
bootstraps_dir = os.path.join(self.work_dir, "bootstraps")
with utils.pushd(bootstraps_dir):
each_layers = os.listdir()
if len(each_layers) == 0:
return None
each_layers = [l.split("-") for l in each_layers]
each_layers.sort(key=lambda x: int(x[0]))
return os.path.join(bootstraps_dir, "-".join(each_layers[-1]))
def generate_auth_config(self, registry_url, auth):
auths = {"auths": {registry_url: {"auth": auth}}}
self.__temp_auths_config_dir = tempfile.TemporaryDirectory()
self.auths_config = os.path.join(
self.__temp_auths_config_dir.name, "config.json"
)
with open(self.auths_config, "w+") as f:
json.dump(auths, f)
f.flush()
def extract_source_layers_names_and_download(self, arch="amd64"):
skopeo = utils.Skopeo()
manifest, digest = skopeo.inspect(self.__source, image_arch=arch)
layers = [l["digest"] for l in manifest["layers"]]
# trimmed_layers = [os.path.join(self.work_dir, self.__source, l) for l in layers]
# trimmed_layers.reverse()
layers.reverse()
skopeo.copy_to_local(
self.__source,
layers,
os.path.join(self.work_dir, self.__source),
resource_digest=digest,
)
return layers, os.path.join(self.work_dir, self.__source)
def extract_converted_layers_names(self, arch="amd64"):
skopeo = utils.Skopeo()
manifest, _ = skopeo.inspect(
self.target_ref,
tls_verify=False,
features="nydus.remoteimage.v1",
image_arch=arch,
)
layers = [l["digest"] for l in manifest["layers"]]
layers.reverse()
return layers
def pull_bootstrap(self, downloaded_dir, bootstrap_name, arch="amd64"):
"""
        Nydusify converts an OCI image to nydus format and pushes the nydus image manifest
        to the registry; the manifest belongs to a manifest index.
"""
skopeo = utils.Skopeo()
nydus_manifest, _ = skopeo.inspect(
self.target_ref,
tls_verify=False,
features="nydus.remoteimage.v1",
image_arch=arch,
)
layers = nydus_manifest["layers"]
for l in layers:
if l["mediaType"] == "application/vnd.docker.image.rootfs.diff.tar.gzip":
bootstrap_digest = l["digest"]
import requests
# Currently, we can not handle auth
# OCI distribution spec: /v2/<name>/blobs/<digest>
os.makedirs(downloaded_dir, exist_ok=True)
reader = requests.get(
f"http://{self.registry_url}/v2/{self.anchor.registry_namespace}/{self.original_repo}/blobs/{bootstrap_digest}",
stream=True,
)
with utils.pushd(downloaded_dir):
with open("image.gzip", "wb") as w:
shutil.copyfileobj(reader.raw, w)
with tarfile.open("image.gzip", "r:gz") as tar_gz:
def is_within_directory(directory, target):
abs_directory = os.path.abspath(directory)
abs_target = os.path.abspath(target)
prefix = os.path.commonprefix([abs_directory, abs_target])
return prefix == abs_directory
def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
for member in tar.getmembers():
member_path = os.path.join(path, member.name)
if not is_within_directory(path, member_path):
raise Exception("Attempted Path Traversal in Tar File")
tar.extractall(path, members, numeric_owner=numeric_owner)
safe_extract(tar_gz)
os.rename("image/image.boot", bootstrap_name)
os.remove("image.gzip")
return os.path.join(downloaded_dir, bootstrap_name)
def pull_config(self, image, arch="amd64"):
"""
        Nydusify converts an OCI image to nydus format and pushes the nydus image manifest
        to the registry; the manifest belongs to a manifest index.
"""
skopeo = utils.Skopeo()
nydus_manifest, digest = skopeo.inspect(
image, tls_verify=False, image_arch=arch
)
import requests
        # Currently, we can not handle auth
# OCI distribution spec: /v2/<name>/manifests/<digest>
reader = requests.get(
f"http://{self.registry_url}/v2/{self.original_repo}/manifests/{digest}",
stream=True,
)
manifest = json.load(reader.raw)
config_digest = manifest["config"]["digest"]
reader = requests.get(
f"http://{self.registry_url}/v2/{self.original_repo}/blobs/{config_digest}",
stream=True,
)
config = json.load(reader.raw)
return config
def find_nydus_image(self, image, arch):
skopeo = utils.Skopeo()
nydus_manifest, digest = skopeo.inspect(
image, tls_verify=False, image_arch=arch, features="nydus.remoteimage.v1"
)
assert nydus_manifest is not None
def get_build_cache_records(self, ref):
skopeo = utils.Skopeo()
build_cache_records, _ = skopeo.inspect(ref, tls_verify=False)
c = json.dumps(build_cache_records, indent=4, sort_keys=False)
logging.info("build cache: %s", c)
records = build_cache_records["layers"]
return records
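
# Usage sketch (illustrative only; assumes a valid anchor_conf.json and a reachable
# registry -- the source reference below is hypothetical):
#
#   anchor = NydusAnchor()
#   nydusify = Nydusify(anchor)
#   nydusify.docker_v2().convert("localhost:5000/busybox:latest", fs_version=6)
#   bootstrap = nydusify.locate_bootstrap()
#   nydusify.check("localhost:5000/busybox:latest", fs_version=6)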

View File

@@ -0,0 +1,107 @@
import tempfile
from string import Template
import logging
import utils
OSS_CONFIG_TEMPLATE = """
[Credentials]
language=EN
endpoint=${endpoint}
accessKeyID=${ak}
accessKeySecret=${ak_secret}
"""
class OssHelper:
def __init__(self, util, endpoint, bucket, ak_id, ak_secret, prefix=None):
oss_conf = tempfile.NamedTemporaryFile(mode="w+", suffix="oss.conf")
items = {
"endpoint": endpoint,
"ak": ak_id,
"ak_secret": ak_secret,
}
template = Template(OSS_CONFIG_TEMPLATE)
_s = template.substitute(**items)
oss_conf.write(_s)
oss_conf.flush()
self.util = util
self.bucket = bucket
self.conf_wrapper = oss_conf
self.conf_file = oss_conf.name
self.prefix = prefix
self.path = (
f"oss://{self.bucket}/{self.prefix}"
if self.prefix is not None
else f"oss://{self.bucket}/"
)
def upload(self, src, dst, force=False):
if not self.stat(dst) or force:
cmd = [
self.util,
"--config-file",
self.conf_file,
"-f",
"cp",
src,
f"{self.path}{dst}",
]
ret, _ = utils.execute(cmd, print_output=True)
assert ret
if ret:
logging.info("Object %s is uploaded", dst)
def download(self, src, dst):
cmd = [
self.util,
"--config-file",
self.conf_file,
"cp",
"-f",
f"{self.path}{src}",
dst,
]
ret, _ = utils.execute(cmd, print_cmd=True)
if ret:
logging.info("Download %s ", src)
def rm(self, object):
cmd = [
self.util,
"rm",
"--config-file",
self.conf_file,
f"{self.path}{object}",
]
ret, _ = utils.execute(cmd, print_cmd=True, print_output=False)
assert ret
if ret:
logging.info("Object %s is removed from oss", object)
def stat(self, object):
cmd = [
self.util,
"--config-file",
self.conf_file,
"stat",
f"{self.path}{object}",
]
ret, _ = utils.execute(
cmd, print_cmd=False, print_output=False, print_err=False
)
if ret:
logging.info("Object %s already uploaded", object)
else:
logging.warning(
"Object %s was not uploaded yet",
object,
)
return ret
def list(self):
cmd = [self.util, "--config-file", self.conf_file, "ls", self.path]
ret, out = utils.execute(cmd, print_cmd=True, print_output=True, print_err=True)
print(out)
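
# Usage sketch (illustrative only; the endpoint, bucket and credentials below are
# placeholders and a working ossutil binary is assumed):
#
#   oss = OssHelper("framework/bin/ossutil64.x86", endpoint="<oss-endpoint>",
#                   bucket="nydus-test", ak_id="<ak-id>", ak_secret="<ak-secret>")
#   oss.upload("/path/to/blob", "<blob-sha256>")
#   assert oss.stat("<blob-sha256>")
#   oss.rm("<blob-sha256>")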

View File

@@ -0,0 +1,816 @@
import shutil
import utils
import os
import time
import enum
import posixpath
from linux_command import LinuxCommand
import logging
from types import SimpleNamespace as Namespace
import json
import copy
import hashlib
import contextlib
import subprocess
import tempfile
import pytest
from nydus_anchor import NydusAnchor
from linux_command import LinuxCommand
from utils import Size, Unit
from whiteout import WhiteoutSpec
from oss import OssHelper
from backend_proxy import BackendProxy
class Backend(enum.Enum):
OSS = "oss"
REGISTRY = "registry"
LOCALFS = "localfs"
BACKEND_PROXY = "backend_proxy"
def __str__(self):
return self.value
class Compressor(enum.Enum):
NONE = "none"
LZ4_BLOCK = "lz4_block"
GZIP = "gzip"
ZSTD = "zstd"
def __str__(self):
return self.value
class RafsConf:
"""Generate nydusd working configuration file.
A `registry` backend example:
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "localhost:5000",
"repo": "busybox"
}
            },
"mode": "direct",
"digest_validate": false
}
}
"""
def __init__(self, anchor: NydusAnchor, image: "RafsImage" = None):
self.__conf_file_wrapper = tempfile.NamedTemporaryFile(
mode="w+", suffix="rafs.config"
)
self.anchor = anchor
self.rafs_image = image
self._rafs_conf_default = {
"device": {
"backend": {
"type": "oss",
"config": {},
}
},
"mode": os.getenv("PREFERRED_MODE", "direct"),
"iostats_files": False,
"fs_prefetch": {"enable": False},
}
self._device_conf = json.loads(
json.dumps(self._rafs_conf_default), object_hook=lambda d: Namespace(**d)
)
self.device_conf = utils.object_to_dict(copy.deepcopy(self._device_conf))
def path(self):
return self.__conf_file_wrapper.name
def set_rafs_backend(self, backend_type, **kwargs):
b = str(backend_type)
self._configure_rafs("device.backend.type", b)
if backend_type == Backend.REGISTRY:
# Manager like nydus-snapshotter can fill the repo field, so we do nothing here.
if "repo" in kwargs:
self._configure_rafs(
"device.backend.config.repo",
posixpath.join(self.anchor.registry_namespace, kwargs.pop("repo")),
)
self._configure_rafs(
"device.backend.config.scheme",
kwargs["scheme"] if "scheme" in kwargs else "http",
)
self._configure_rafs("device.backend.config.host", self.anchor.registry_url)
self._configure_rafs(
"device.backend.config.auth", self.anchor.registry_auth
)
if backend_type == Backend.OSS:
if "prefix" in kwargs:
self._configure_rafs(
"device.backend.config.object_prefix", kwargs.pop("prefix")
)
self._configure_rafs(
"device.backend.config.endpoint", self.anchor.oss_endpoint
)
self._configure_rafs(
"device.backend.config.access_key_id", self.anchor.oss_ak_id
)
self._configure_rafs(
"device.backend.config.access_key_secret", self.anchor.oss_ak_secret
)
self._configure_rafs(
"device.backend.config.bucket_name", self.anchor.oss_bucket
)
if backend_type == Backend.BACKEND_PROXY:
self._configure_rafs("device.backend.type", "registry")
self._configure_rafs(
"device.backend.config.scheme",
"http",
)
self._configure_rafs("device.backend.config.repo", "nydus")
self._configure_rafs(
"device.backend.config.host", self.anchor.backend_proxy_url
)
if backend_type == Backend.LOCALFS:
if "image" in kwargs:
self._configure_rafs(
"device.backend.config.blob_file", kwargs.pop("image").localfs_backing_blob
)
else:
self._configure_rafs(
"device.backend.config.dir", self.anchor.localfs_workdir
)
return self
def get_rafs_backend(self):
return self._device_conf.device.backend.type
def set_registry_repo(self, repo):
self._configure_rafs("device.backend.config.repo", repo)
def _configure_rafs(self, k: str, v):
exec("self._device_conf." + k + "=v")
def enable_files_iostats(self):
self._device_conf.iostats_files = True
return self
def enable_latest_read_files(self):
self._device_conf.latest_read_files = True
return self
def enable_access_pattern(self):
self._device_conf.access_pattern = True
return self
def enable_rafs_blobcache(self, is_compressed=False, work_dir=None):
self._device_conf.device.cache = Namespace(
type="blobcache",
config=Namespace(
work_dir=self.anchor.blobcache_dir if work_dir is None else work_dir
),
compressed=is_compressed,
)
return self
def enable_fs_prefetch(
self,
threads_count=8,
merging_size=128 * 1024,
bandwidth_rate=0,
prefetch_all=False,
):
self._configure_rafs("fs_prefetch.enable", True)
self._configure_rafs("fs_prefetch.threads_count", threads_count)
self._configure_rafs("fs_prefetch.merging_size", merging_size)
self._configure_rafs("fs_prefetch.bandwidth_rate", bandwidth_rate)
self._configure_rafs("fs_prefetch.prefetch_all", prefetch_all)
return self
def enable_validation(self):
if int(self.anchor.fs_version) == 6:
return self
self._configure_rafs("digest_validate", True)
return self
def amplify_io(self, size):
self._configure_rafs("amplify_io", size)
return self
def rafs_mem_mode(self, v):
self._configure_rafs("mode", v)
def enable_xattr(self):
self._configure_rafs("enable_xattr", True)
return self
def dump_rafs_conf(self):
# In case the conf is dumped more than once
if int(self.anchor.fs_version) == 6:
logging.warning("Rafs v6 must enable blobcache")
self.enable_rafs_blobcache()
self.__conf_file_wrapper.truncate(0)
self.__conf_file_wrapper.seek(0)
logging.info("Current rafs metadata mode *%s*", self._rafs_conf_default["mode"])
self.device_conf = utils.object_to_dict(copy.deepcopy(self._device_conf))
json.dump(self.device_conf, self.__conf_file_wrapper)
self.__conf_file_wrapper.flush()
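
# Usage sketch (illustrative only; assumes a configured NydusAnchor):
#
#   rafs_conf = RafsConf(anchor)
#   rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
#   rafs_conf.enable_rafs_blobcache().enable_fs_prefetch(threads_count=4)
#   rafs_conf.dump_rafs_conf()
#   print(rafs_conf.path())  # path of the generated nydusd configuration file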
class RafsImage(LinuxCommand):
def __init__(
self,
anchor: NydusAnchor,
source,
bootstrap_name=None,
blob_name=None,
compressor=None,
clear_from_oss=True,
):
"""
        :rootfs: A plain directory from which to build rafs images (bootstrap and blob).
        :bootstrap_name: Name of the generated test-purpose bootstrap file.
        :blob_prefix: Generally, a sha256 string follows this prefix.
        :opts: Specify extra build options.
        :parent_image: Associate a parent image which will be created ahead of time if necessary.
            A rebuilt image tries to reuse block mapping info from the parent image (bootstrap) if
            the same block already resides in the parent image, which means the new blob file will
            not contain that block again.
"""
self.__rootfs = source
self.bootstrap_name = (
bootstrap_name
if bootstrap_name is not None
else tempfile.NamedTemporaryFile(suffix="bootstrap").name
)
# The file name of blob file locally.
self.blob_name = (
blob_name
if blob_name is not None
else tempfile.NamedTemporaryFile(suffix="blob").name
)
        # blob_id is used to identify blobs residing in OSS and determines how an IO request accesses the backend.
self.blob_id = None
self.opts = ""
self.test_dir = os.getcwd()
self.anchor = anchor
LinuxCommand.__init__(self, anchor.image_bin)
self.param_value_prefix = " "
self.clear_from_oss = False
self.created = False
self.compressor = compressor
self.clear_from_oss = clear_from_oss
self.backend_type = None
# self.blob_abs_path = tempfile.TemporaryDirectory(
# "blob", dir=self.anchor.workspace
# ).name
self.blob_abs_path = tempfile.NamedTemporaryFile(
prefix="blob", dir=self.anchor.workspace
).name
def rootfs(self):
return self.__rootfs
def _tweak_build_command(self):
"""
Add more options into command line per as different test case configuration.
"""
for key, value in self.command_param_dict.items():
self.opts += (
f"{self.param_separator}{self.param_name_prefix}"
f"{key}{self.param_value_prefix}{value}"
)
for flag in self.command_flags:
self.opts += f"{self.param_separator}{self.param_name_prefix}{flag}"
def set_backend(self, type: Backend, **kwargs):
self.backend_type = type
if type == Backend.LOCALFS:
if not os.path.exists(self.anchor.localfs_workdir):
os.mkdir(self.anchor.localfs_workdir)
self.set_param("blob-dir", self.anchor.localfs_workdir)
return self
elif type == Backend.OSS:
self.set_param("blob", self.blob_abs_path)
prefix = kwargs.pop("prefix", None)
self.oss_helper = OssHelper(
self.anchor.ossutil_bin,
self.anchor.oss_endpoint,
self.anchor.oss_bucket,
self.anchor.oss_ak_id,
self.anchor.oss_ak_secret,
prefix,
)
elif self.backend_type == Backend.BACKEND_PROXY:
self.set_param("blob", self.blob_abs_path)
elif type == Backend.REGISTRY:
# Let nydusify upload blob from the path, which is an intermediate file
self.set_param("blob", self.blob_abs_path)
pass
return self
def create_image(
self,
image_bin=None,
parent_image=None,
clear_from_oss=True,
oss_uploader="util",
compressor=None,
prefetch_policy=None,
prefetch_files="",
from_stargz=False,
fs_version=None,
disable_check=False,
chunk_size=None,
) -> "RafsImage":
"""
        :layers: Create an image on top of an existing one.
        :oss_uploader: One of ['util', 'builder', 'none']. Either let the image builder upload the
            blob to OSS itself, or use a third-party OSS util.
"""
self.clear_from_oss = clear_from_oss
self.oss_uploader = oss_uploader
self.compressor = compressor
self.parent_image = parent_image
assert oss_uploader in ("util", "builder", "none")
if prefetch_policy is not None:
self.set_param("prefetch-policy", prefetch_policy)
self.set_param("log-level", self.anchor.log_level)
if disable_check:
self.set_flags("disable-check")
if fs_version is not None:
self.set_param("fs-version", fs_version)
else:
self.set_param("fs-version", str(self.anchor.fs_version))
if self.compressor is not None:
self.set_param("compressor", str(self.compressor))
if chunk_size is not None:
self.set_param("chunk-size", str(hex(chunk_size)))
builder_output_json = tempfile.NamedTemporaryFile("w+", suffix="output.json")
self.set_param("output-json", builder_output_json.name)
builder_output_json.flush()
# In order to support specify different versions of nydus image tool
if image_bin is None:
image_bin = self.anchor.image_bin
# Once it's a layered image test, create test parent layer first.
# TODO: Perhaps, should not create parent together so we can have
# images with different flags and opts
if self.parent_image is not None:
self.set_param("parent-bootstrap", self.parent_image.bootstrap_name)
if from_stargz:
self.set_param("source-type", "stargz_index")
# Just before beginning building image, tweak building parameters
self._tweak_build_command()
cmd = f"{image_bin} create --bootstrap {self.bootstrap_name} {self.opts} {self.__rootfs}"
with utils.timer("Basic rafs image creation time"):
_, p = utils.run(
cmd,
False,
shell=True,
stdin=subprocess.PIPE,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
if prefetch_policy is not None:
p.communicate(input=prefetch_files)
p.wait()
assert p.returncode == 0
assert os.path.exists(os.path.join(self.test_dir, self.bootstrap_name))
self.created = True
self.blob_id = json.load(builder_output_json)["blobs"][-1]
logging.info("Generated blob id %s", self.blob_id)
self.bootstrap_path = os.path.abspath(self.bootstrap_name)
if self.backend_type == Backend.OSS:
# self.blob_id = self.calc_blob_sha256(self.blob_abs_path)
# nydus-rs image builder can also upload image itself.
if self.oss_uploader == "util":
self.oss_helper.upload(self.blob_abs_path, self.blob_id)
elif self.backend_type == Backend.BACKEND_PROXY:
shutil.copy(
self.blob_abs_path,
os.path.join(self.anchor.backend_proxy_blobs_dir, self.blob_id),
)
elif self.backend_type == Backend.LOCALFS:
self.localfs_backing_blob = os.path.join(self.anchor.localfs_workdir, self.blob_id)
self.anchor.put_dustbin(self.bootstrap_name)
# Only oss has a temporary place to hold blob
try:
self.anchor.put_dustbin(self.blob_abs_path)
except AttributeError:
pass
try:
self.anchor.put_dustbin(self.localfs_backing_blob)
except AttributeError:
pass
if self.oss_uploader == "util":
self.dump_image_summary()
return self
def whiteout_spec(self, spec: WhiteoutSpec):
self.set_param("whiteout-spec", str(spec))
return self
def clean_up(self):
# In case image was not successfully created.
if hasattr(self, "bootstrap_path"):
os.unlink(self.bootstrap_path)
if hasattr(self, "oss_blob_abs_path"):
os.unlink(self.blob_abs_path)
if hasattr(self, "localfs_backing_blob"):
# Backing blob may already be put into dustbin.
try:
os.unlink(self.localfs_backing_blob)
except FileNotFoundError:
pass
try:
os.unlink(self.blob_abs_path)
except FileNotFoundError:
pass
except AttributeError:
# In case that test rootfs is not successfully scratched.
pass
try:
os.unlink(self.parent_blob)
os.unlink(self.parent_bootstrap)
except FileNotFoundError:
pass
except AttributeError:
pass
try:
if self.clear_from_oss and self.backend_type == Backend.OSS:
self.oss_helper.rm(self.blob_id)
except AttributeError:
pass
@staticmethod
def calc_blob_sha256(blob):
"""Example: blob id: sha256:a810724c8b2cc9bd2a6fa66d92ced9b429120017c7cf2ef61dfacdab45fa45ca"""
# We calculate the blob sha256 ourselves.
sha256 = hashlib.sha256()
with open(blob, "rb") as f:
for block in iter(lambda: f.read(4096), b""):
sha256.update(block)
return sha256.hexdigest()
    def dump_image_summary(self):
        # Dumping the image summary is disabled for now; the logging below is kept for reference.
        return
logging.info(
f"""Image summary:\t
blob: {self.blob_name}\t
bootstrap: {self.bootstrap_name}\t
blob_sha256: {self.blob_id}\t
rootfs: {self.rootfs}\t
parent_rootfs: {self.parent_image.rootfs if self.__layers else 'Not layered image'}
compressor: {self.compressor}\t
blob_size: {os.stat(self.blob_abs_path).st_size//1024}KB, {os.stat(self.blob_abs_path).st_size}Bytes
"""
)
class RafsMountParam(LinuxCommand):
"""
Example:
nydusd --config config.json --bootstrap bs.test --sock \
vhost-user-fs.sock --apisock test_api --log-level trace
"""
def __init__(self, command_name):
LinuxCommand.__init__(self, command_name)
self.param_name_prefix = "--"
def bootstrap(self, bootstrap_file):
return self.set_param("bootstrap", bootstrap_file)
def config(self, config_file):
return self.set_param("config", config_file)
def sock(self, vhost_user_sock):
return self.set_param("sock", vhost_user_sock)
def log_level(self, log_level):
return self.set_param("log-level", log_level)
def mountpoint(self, path):
return self.set_param("mountpoint", path)
class NydusDaemon(utils.ArtifactProcess):
def __init__(
self,
anchor: NydusAnchor,
image: RafsImage,
conf: RafsConf,
with_defaults=True,
bin=None,
mode="fuse",
):
"""Start up nydusd and mount rafs.
        :image: If `image` is None, no `--bootstrap` will be passed to nydusd.
            In this case, we have to use the API to mount rafs.
"""
anchor.nydusd = self # So pytest has a chance to clean up dirties.
self.anchor = anchor
self.rafs_image = image # Associate with a rafs image to boot up.
self.conf: RafsConf = conf
self.mountpoint = anchor.mountpoint # To which point nydus will mount
self.param_value_prefix = " "
self.params = RafsMountParam(anchor.nydusd_bin if bin is None else bin)
self.params.set_subcommand(mode)
if with_defaults:
self._set_default_mount_param()
def __str__(self):
return str(self.params)
def __call__(self):
return self.params
def _set_default_mount_param(self):
# Set default part
self.apisock("api_sock").log_level(self.anchor.log_level)
if self.conf is not None:
self.params.mountpoint(self.mountpoint).config(self.conf.path())
if self.rafs_image is not None:
self.params.bootstrap(self.rafs_image.bootstrap_path)
def _wait_for_mount(self, test_fn=os.path.ismount):
elapsed = 0
while elapsed < 300:
if test_fn(self.mountpoint):
return True
if self.p.poll() is not None:
pytest.fail("file system process terminated prematurely")
            elapsed += 1
time.sleep(0.01)
pytest.fail("mountpoint failed to come up")
def thread_num(self, num):
self.params.set_param("thread-num", str(num))
return self
def fscache_thread_num(self, num):
self.params.set_param("fscache-threads", str(num))
return self
def set_fscache(self):
self.params.set_param("fscache", self.anchor.fscache_dir)
return self
def log_level(self, level):
self.params.log_level(level)
return self
def prefetch_files(self, file_path: str):
self.params.set_param("prefetch-files", file_path)
return self
def shared_dir(self, shared_dir):
self.params.set_param("shared-dir", shared_dir)
return self
def set_mountpoint(self, mp):
self.params.set_param("mountpoint", mp)
self.mountpoint = mp
return self
def supervisor(self, path):
self.params.set_param("supervisor", path)
return self
def id(self, daemon_id):
self.params.set_param("id", daemon_id)
return self
def upgrade(self):
self.params.set_flags("upgrade")
return self
def failover_policy(self, p):
self.params.set_param("failover-policy", p)
return self
def apisock(self, apisock):
self.params.set_param("apisock", apisock)
self.__apisock = apisock
self.anchor.put_dustbin(apisock)
return self
def get_apisock(self):
return self.__apisock
def bootstrap(self, b):
self.params.set_param("bootstrap", b)
return self
def mount(self, limited_mem=False, wait_mount=True, dump_config=True):
"""
        :limited_mem: A `Size` limit on the nydusd process's virtual memory (applied via
            `ulimit -v`, in KB) so as to inject some faults.
"""
cmd = str(self).split()
self.anchor.checker_sock = self.get_apisock()
if dump_config and self.conf is not None:
self.conf.dump_rafs_conf()
        if isinstance(limited_mem, Size):
            # `ulimit` is a shell builtin, so run the whole command line through a shell
            # when a memory limit is requested.
            limit_kb = limited_mem.B // Size(1, Unit.KB).B
            cmd = f"ulimit -v {limit_kb}; {str(self)}"
            _, p = utils.run(
                cmd,
                False,
                shell=True,
                stdout=self.anchor.logging_file,
                stderr=self.anchor.logging_file,
            )
        else:
            _, p = utils.run(
                cmd,
                False,
                shell=False,
                stdout=self.anchor.logging_file,
                stderr=self.anchor.logging_file,
            )
self.p = p
if wait_mount:
self._wait_for_mount()
return self
def start(self):
cmd = str(self).split()
_, p = utils.run(
cmd,
False,
shell=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
self.p = p
return self
def wait_mount(self):
self._wait_for_mount()
@contextlib.contextmanager
def automatic_mount_umount(self):
self.mount()
yield
self.umount()
def umount(self):
"""
Umount is sometimes invoked during teardown. So it can't assert.
"""
self._catcher_dead = True
ret, _ = utils.execute(["umount", "-l", self.mountpoint], print_output=True)
assert ret
# self.p.wait()
# assert self.p.returncode == 0
def is_mounted(self):
def _costum(self):
_, output = utils.execute(
["cat", "/proc/mounts"], print_output=False, print_cmd=False
)
mounts = output.split("\n")
for m in mounts:
if self.mountpoint in m:
return True
return False
check_fn = os.path.ismount
return check_fn(self.mountpoint)
def shutdown(self):
if self.is_mounted():
self.umount()
        logging.info("Shutting down nydusd")
self.p.terminate()
self.p.wait()
assert self.p.returncode == 0
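
# Usage sketch (illustrative only; assumes a built RafsImage and a dumped RafsConf):
#
#   rafs = NydusDaemon(anchor, image, rafs_conf)
#   rafs.thread_num(4).mount()   # starts nydusd and waits for the FUSE mountpoint
#   assert rafs.is_mounted()
#   ...                          # exercise the mounted rafs
#   rafs.umount()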
BLOB_CONF_TEMPLATE = """
{
"type": "bootstrap",
"id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"domain_id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"config": {
"id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"backend_type": "registry",
"backend_config": {
"readahead": false,
"host": "hub.byted.org",
"repo": "gechangwei/java",
"auth": "",
"scheme": "http",
"proxy": {
"fallback": false
}
},
"cache_type": "fscache",
"cache_config": {
"work_dir": "/var/lib/containerd-nydus-grpc/snapshots/3754/fs"
},
"metadata_path": "/var/lib/containerd-nydus-grpc/snapshots/3754/fs/image/image.boot"
},
"fs_prefetch": {
"enable": false,
"prefetch_all": false,
"threads_count": 0,
"merging_size": 0,
"bandwidth_rate": 0
}
}
"""
class BlobEntryConf:
def __init__(self, anchor) -> None:
self.conf_base = json.loads(
BLOB_CONF_TEMPLATE, object_hook=lambda x: Namespace(**x)
)
self.anchor = anchor
self.conf_base.config.cache_config.work_dir = self.anchor.blobcache_dir
def set_type(self, t):
self.conf_base.type = t
return self
def set_repo(self, repo):
self.conf_base.config.repo = repo
return self
def set_metadata_path(self, path):
self.conf_base.config.metadata_path = path
return self
def set_fsid(self, fsid):
self.conf_base.id = fsid
self.conf_base.domain_id = fsid
self.conf_base.config.id = fsid
return self
def set_backend(self):
self.conf_base.config.backend_config.host = self.anchor.backend_proxy_url
self.conf_base.config.backend_config.repo = "nydus"
return self
def set_prefetch(self, threads_cnt=4):
self.conf_base.fs_prefetch.enable = True
self.conf_base.fs_prefetch.prefetch_all = True
self.conf_base.fs_prefetch.threads_count = threads_cnt
return self
def dumps(self):
return json.dumps(self.conf_base, default=vars)
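
# Usage sketch (illustrative only; the fsid and metadata path below are hypothetical):
#
#   entry = BlobEntryConf(anchor).set_fsid("<fscache-fsid>").set_backend() \
#       .set_metadata_path("/path/to/image/image.boot").set_prefetch(threads_cnt=2)
#   print(entry.dumps())  # JSON blob entry consumed by nydusd in fscache mode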

View File

@@ -0,0 +1,59 @@
import os
import tempfile
import utils
class Snapshotter(utils.ArtifactProcess):
def __init__(self, anchor: "NydusAnchor") -> None:
self.anchor = anchor
self.snapshotter_bin = anchor.snapshotter_bin
self.__sock = tempfile.NamedTemporaryFile(suffix="snapshotter.sock")
self.flags = []
def sock(self):
return self.__sock.name
def set_root(self, dir):
self.root = os.path.join(dir, "io.containerd.snapshotter.v1.nydus")
def cache_dir(self):
return os.path.join(self.root, "cache")
def run(self, rafs_conf: os.PathLike):
cmd = [
self.snapshotter_bin,
"--nydusd-path",
self.anchor.nydusd_bin,
"--config-path",
rafs_conf,
"--root",
self.root,
"--address",
self.__sock.name,
"--log-level",
"info",
"--log-to-stdout",
]
cmd = cmd + self.flags
ret, self.p = utils.run(
cmd,
wait=False,
shell=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
def shared_mount(self):
self.flags.append("--shared-daemon")
return self
def enable_nydus_overlayfs(self):
self.flags.append("--enable-nydus-overlayfs")
return self
def shutdown(self):
self.p.terminate()
self.p.wait()
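
# Usage sketch (illustrative only; assumes a configured NydusAnchor and a nydusd
# configuration file generated by RafsConf):
#
#   snapshotter = Snapshotter(anchor)
#   snapshotter.set_root("/var/lib/containerd-nydus-test")
#   snapshotter.shared_mount().run(rafs_conf.path())
#   ...   # point containerd at snapshotter.sock() as a remote snapshotter
#   snapshotter.shutdown()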

View File

@@ -0,0 +1,82 @@
import socket
import array
import os
import struct
from multiprocessing import Process
import threading
import time
class RafsSupervisor:
def __init__(self, watcher_socket_name, conn_id):
self.watcher_socket_name = watcher_socket_name
self.conn_id = conn_id
@classmethod
def recv_fds(cls, sock, msglen, maxfds):
"""Function from https://docs.python.org/3/library/socket.html#socket.socket.recvmsg"""
fds = array.array("i") # Array of ints
msg, ancdata, flags, addr = sock.recvmsg(
msglen, socket.CMSG_LEN(maxfds * fds.itemsize)
)
for cmsg_level, cmsg_type, cmsg_data in ancdata:
if cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS:
# Append data, ignoring any truncated integers at the end.
fds.frombytes(
cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]
)
return msg, list(fds)
@classmethod
def send_fds(cls, sock, msg, fds):
"""Function from https://docs.python.org/3/library/socket.html#socket.socket.sendmsg"""
return sock.sendmsg(
[msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", fds))]
)
def wait_recv_fd(self, event):
try:
os.unlink(self.watcher_socket_name)
except FileNotFoundError:
pass
sock = socket.socket(family=socket.AF_UNIX)
sock.bind(self.watcher_socket_name)
event.set()
sock.listen()
client, _ = sock.accept()
msg, fds = self.recv_fds(client, 100000, 1)
self.fds = fds
self.opaque = msg
client.close()
def wait_send_fd(self):
try:
os.unlink(self.watcher_socket_name)
except FileNotFoundError:
pass
sock = socket.socket(family=socket.AF_UNIX)
sock.bind(self.watcher_socket_name)
sock.listen()
client, _ = sock.accept()
msg = self.opaque
RafsSupervisor.send_fds(client, msg, self.fds)
client.close()
def send_fd(self):
t = threading.Thread(target=self.wait_send_fd)
t.start()
def recv_fd(self):
event = threading.Event()
t = threading.Thread(target=self.wait_recv_fd, args=(event,))
t.start()
return event
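
if __name__ == "__main__":
    # Usage sketch (illustrative only; the payload and file below are arbitrary):
    # demonstrate SCM_RIGHTS fd passing with the helpers above over a local socketpair.
    left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    with open("/dev/null", "rb") as f:
        RafsSupervisor.send_fds(left, b"saved-state", [f.fileno()])
        msg, fds = RafsSupervisor.recv_fds(right, 100, 1)
    print(msg, fds)  # b'saved-state' plus a duplicated fd referring to /dev/null
    os.close(fds[0])
    left.close()
    right.close()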

File diff suppressed because it is too large

View File

@@ -0,0 +1,659 @@
import posixpath
import subprocess
import logging
import sys
import os
import signal
from typing import Tuple
import io
import string
import random
try:
import psutil
except ModuleNotFoundError:
pass
import contextlib
import math
import enum
import datetime
import re
import random
import json
import tarfile
import pprint
import stat
import platform
def logging_setup(logging_stream=sys.stderr):
"""Inspired from Kadalu project"""
root = logging.getLogger()
if root.hasHandlers():
return
verbose = False
try:
if os.environ["NYDUS_TEST_VERBOSE"] == "YES":
verbose = True
except KeyError as _:
pass
# Errors should also be printed to screen.
handler = logging.StreamHandler(logging_stream)
if verbose:
root.setLevel(logging.DEBUG)
handler.setLevel(logging.DEBUG)
else:
root.setLevel(logging.INFO)
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
"[%(asctime)s] %(levelname)s "
"[%(module)s - %(lineno)s:%(funcName)s] "
"- %(message)s"
)
handler.setFormatter(formatter)
root.addHandler(handler)
def execute(cmd, **kwargs):
exc = None
shell = kwargs.pop("shell", False)
print_output = kwargs.pop("print_output", False)
print_cmd = kwargs.pop("print_cmd", True)
print_err = kwargs.pop("print_err", True)
if print_cmd:
logging.info("Executing command: %s" % cmd)
try:
output = subprocess.check_output(
cmd, shell=shell, stderr=subprocess.STDOUT, **kwargs
)
output = output.decode("utf-8")
if print_output:
logging.info("%s" % output)
except subprocess.CalledProcessError as exc:
o = exc.output.decode() if exc.output is not None else ""
if print_err:
logging.error(
"Command: %s\nReturn code: %d\nError output:\n%s"
% (cmd, exc.returncode, o)
)
return False, o
return True, output
def run(cmd, wait: bool = True, verbose=True, **kwargs):
if verbose:
logging.info(cmd)
else:
logging.debug(cmd)
popen_obj = subprocess.Popen(cmd, **kwargs)
if wait:
popen_obj.wait()
return popen_obj.returncode, popen_obj
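# A brief usage sketch (illustrative, not from the original helpers): `execute`
# captures output and returns an (ok, output) tuple, while `run` hands back the
# Popen object so the caller can stream or wait on it.
def _example_exec_helpers():
    ok, out = execute(["uname", "-r"], print_output=True)
    assert ok
    ret, proc = run(["sleep", "1"], wait=False)
    # With wait=False the returncode is still None here; wait for it explicitly.
    proc.wait()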
def kill_all_processes(program_name, sig=signal.SIGKILL):
ret, out = execute(["pidof", program_name])
if not ret:
logging.warning("No %s running" % program_name)
return
processes = out.replace("\n", "").split(" ")
for pid in processes:
try:
logging.info("Killing process %d" % int(pid))
os.kill(int(pid), sig)
except Exception as exc:
logging.exception(exc)
def get_pid(proc_name: str) -> list:
proc_list = []
for proc in psutil.process_iter():
try:
if proc_name.lower() in proc.name().lower():
proc_list.append((proc.pid, proc.name()))
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
pass
return proc_list
def read_images_array(p) -> list:
with open(p) as f:
images = [i.rstrip("\n") for i in f.readlines() if not i.startswith("#")]
return images
@contextlib.contextmanager
def pushd(new_path: str):
previous_dir = os.getcwd()
os.chdir(new_path)
try:
yield
finally:
os.chdir(previous_dir)
def round_up(n, decimals=0):
return int(math.ceil(n / float(decimals))) * decimals
def get_current_time():
return datetime.datetime.now()
def delta_time(t_end, t_start):
delta = t_end - t_start
return delta.total_seconds(), delta.microseconds
@contextlib.contextmanager
def timer(slogan):
start = get_current_time()
try:
yield
finally:
end = get_current_time()
sec, usec = delta_time(end, start)
logging.info("%s, Takes time %u.%u seconds", slogan, sec, usec // 1000)
class Unit(enum.Enum):
Byte = 1
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
Blocks512 = 512
Blocks4096 = 4096
def get_value(self):
return self.value
class Size:
_KiB = 1024
_MiB = _KiB * 1024
    _GiB = _MiB * 1024
_TiB = _GiB * 1024
_SECTOR_SIZE = 512
def __init__(self, value: int, unit: Unit = Unit.Byte):
self.bytes = value * unit.get_value()
def __index__(self):
return self.bytes
@classmethod
def from_B(cls, value):
return cls(value)
@classmethod
def from_KiB(cls, value):
return cls(value * cls._KiB)
@classmethod
def from_MiB(cls, value):
return cls(value * cls._MiB)
@classmethod
def from_GiB(cls, value):
return cls(value * cls._GiB)
@classmethod
def from_TiB(cls, value):
return cls(value * cls._TiB)
@classmethod
def from_sector(cls, value):
return cls(value * cls._SECTOR_SIZE)
@property
def B(self):
return self.bytes
@property
def KiB(self):
return self.bytes // self._KiB
@property
def MiB(self):
return self.bytes // self._MiB
@property
def GiB(self):
return self.bytes // self._GiB
@property
def TiB(self):
return self.bytes / self._TiB
@property
def sectors(self):
return self.bytes // self._SECTOR_SIZE
def __str__(self):
if self.bytes < self._KiB:
return "{}B".format(self.B)
elif self.bytes < self._MiB:
return "{}K".format(self.KiB)
elif self.bytes < self._GiB:
return "{}M".format(self.MiB)
elif self.bytes < self._TiB:
return "{}G".format(self.GiB)
else:
return "{}T".format(self.TiB)
def dump_process_mem_cpu_load(pid):
"""
https://psutil.readthedocs.io/en/latest/
"""
p = psutil.Process(pid)
mem_i = p.memory_info()
logging.info(
"[SYS LOAD]: RSS: %u(%u MB) VMS: %u(%u MB) DIRTY: %u | CPU num: %u, Usage: %f"
% (
mem_i.rss,
mem_i.rss / 1024 // 1024,
mem_i.vms,
mem_i.vms / 1024 // 1024,
mem_i.dirty,
p.cpu_num(),
p.cpu_percent(0.5),
)
)
def file_disk_usage(path):
s = os.stat(path).st_blocks * 512
return s
def list_object_to_dict(lst):
return_list = []
for l in lst:
return_list.append(object_to_dict(l))
return return_list
def object_to_dict(object):
if hasattr(object, "__dict__"):
dict = vars(object)
else:
return object
for k, v in dict.items():
if type(v).__name__ not in ["list", "dict", "str", "int", "float", "bool"]:
dict[k] = object_to_dict(v)
if type(v) is list:
dict[k] = list_object_to_dict(v)
return dict
def get_fs_type(path):
partitions = psutil.disk_partitions()
partitions.sort(reverse=True)
for part in partitions:
if path.startswith(part.mountpoint):
return part.fstype
def mess_file(path):
file_size = os.path.getsize(path)
offset = random.randint(0, file_size)
fd = os.open(path, os.O_WRONLY)
os.pwrite(fd, os.urandom(1000), offset)
os.close(fd)
# based on https://stackoverflow.com/a/42865957/2002471
units = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}
def parse_size(size):
size = size.upper()
if not re.match(r" ", size):
size = re.sub(r"([KMGT]?B)", r" \1", size)
number, unit = [string.strip() for string in size.split()]
return int(float(number) * units[unit])
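# A few quick checks of parse_size behavior (illustrative, not part of the module):
def _example_parse_size():
    assert parse_size("1KB") == 1024
    assert parse_size("1.5 MB") == 1572864
    assert parse_size("300B") == 300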
def clean_pagecache():
    # Shell redirection does not work in an argv list, so write the sysctl
    # file directly instead (requires root).
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3")
def pretty_print(*args, **kwargs):
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(*args, **kwargs)
def is_regular(path):
mode = os.stat(path)[stat.ST_MODE]
return stat.S_ISREG(mode)
class ArtifactProcess:
def __init__(self) -> None:
super().__init__()
def shutdown(self):
pass
import gzip
def is_gzip(path):
"""
gzip.BadGzipFile: means it is not a gzip
"""
with gzip.open(path, "r") as fh:
try:
fh.read(1)
except Exception:
return False
return True
class Skopeo:
def __init__(self) -> None:
super().__init__()
self.bin = os.path.join(
"framework",
"bin",
"skopeo" if platform.machine() == "x86_64" else "skopeo.aarch64",
)
@staticmethod
def repo_from_image_ref(image):
repo = posixpath.basename(image).split(":")[0]
registry = posixpath.dirname(image)
return posixpath.join(registry, repo)
def inspect(
self, image, tls_verify=False, image_arch="amd64", features=None, verifier=None
):
"""
{
"manifests": [
{
"digest": "sha256:0415f56ccc05526f2af5a7ae8654baec97d4a614f24736e8eef41a4591f08019",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "amd64",
"os": "linux"
},
"size": 527
},
<snipped>
---
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1457,
"digest": "sha256:b97242f89c8a29d13aea12843a08441a4bbfc33528f55b60366c1d8f6923d0d4"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 764663,
"digest": "sha256:e5d9363303ddee1686b203170d78283404e46a742d4c62ac251aae5acbda8df8"
}
]
}
<snipped>
---
Example to fetch manifest by its hash
skopeo inspect --raw docker://docker.io/busybox@sha256:0415f56ccc05526f2af5a7ae8654baec97d4a614f24736e8eef41a4591f08019
"""
cmd = [self.bin, "inspect", "--raw", f"docker://{image}"]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
m = json.loads(out)
        manifest = None
digest = None
if m["mediaType"] == "application/vnd.docker.distribution.manifest.v2+json":
manifest = m
elif (
m["mediaType"]
== "application/vnd.docker.distribution.manifest.list.v2+json"
):
for mf in m["manifests"]:
                # Choose the corresponding platform
if (
mf["platform"]["architecture"] == image_arch
and mf["platform"]["os"] == "linux"
):
if features is not None:
if "os.features" not in mf["platform"]:
continue
elif mf["platform"]["os.features"][0] != features:
logging.error("cccc %s", mf["platform"]["os.features"][0])
continue
digest = mf["digest"]
repo = Skopeo.repo_from_image_ref(image)
cmd = [
self.bin,
"inspect",
"--raw",
f"docker://{repo}@{digest}",
]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
assert p.returncode == 0
manifest = json.loads(out)
break
else:
assert False
assert isinstance(manifest, dict)
return manifest, digest
def copy_to_local(
self, image, layers, extraced_dir, tls_verify=False, resource_digest=None
):
"""
:layers: From which to decompress each layer
"""
os.makedirs(extraced_dir, exist_ok=True)
if resource_digest is not None:
repo = Skopeo.repo_from_image_ref(image)
cmd = [
self.bin,
"--insecure-policy",
"copy",
f"docker://{repo}@{resource_digest}",
f"dir:{extraced_dir}",
]
else:
cmd = [
self.bin,
"copy",
"--insecure-policy",
f"docker://{image}",
f"dir:{extraced_dir}",
]
if not tls_verify:
cmd.insert(1, "--tls-verify=false")
ret, p = run(
cmd,
wait=True,
shell=False,
stdout=subprocess.PIPE,
)
assert ret == 0
if layers is not None:
with pushd(extraced_dir):
for i in layers:
                    # A downloaded blob layer file has no "sha256:" prefix in its name
try:
layer = i.replace("sha256:", "")
os.makedirs(i, exist_ok=True)
with tarfile.open(
layer, "r:gz" if is_gzip(layer) else "r:"
) as tar_gz:
tar_gz.extractall(path=i)
except FileNotFoundError:
logging.warning("Should already downloaded")
def copy_all_to_registry(self, source_image_tagged, dest_image_tagged):
cmd = [
self.bin,
"--insecure-policy",
"copy",
"--all",
"--tls-verify=false",
f"docker://{source_image_tagged}",
f"docker://{dest_image_tagged}",
]
ret, p = run(
cmd,
wait=True,
shell=False,
stdout=subprocess.PIPE,
)
assert ret == 0
def manifest_list(self, image, tls_verify=False):
cmd = [self.bin, "inspect", "--raw", f"docker://{image}"]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
m = json.loads(out)
if m["mediaType"] == "application/vnd.docker.distribution.manifest.v2+json":
return None
elif (
m["mediaType"]
== "application/vnd.docker.distribution.manifest.list.v2+json"
):
return m
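# A minimal usage sketch of the Skopeo wrapper above; the image reference and
# destination directory are illustrative assumptions, not from the test suite.
def _example_skopeo_usage():
    skopeo = Skopeo()
    manifest, digest = skopeo.inspect(
        "localhost:5000/busybox:latest", image_arch="amd64"
    )
    layers = [layer["digest"] for layer in manifest["layers"]]
    skopeo.copy_to_local(
        "localhost:5000/busybox:latest",
        layers,
        "/tmp/busybox_layers",
        resource_digest=digest,
    )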
def pretty_print(artifact: dict):
a = json.dumps(artifact, indent=4)
print(a)
def write_tar_gz(source, tar_gz):
def f(ti):
ti.name = os.path.relpath(ti.name, start=source)
return ti
with tarfile.open(tar_gz, "w:gz") as t:
t.add(source, arcname="")
def parse_stargz(stargz):
"""
The footer MUST be the following 51 bytes (1 byte = 8 bits in gzip).
Footer format:
- 10 bytes gzip header
- 2 bytes XLEN (length of Extra field) = 26 (4 bytes header + 16 hex digits + len("STARGZ"))
- 2 bytes Extra: SI1 = 'S', SI2 = 'G'
- 2 bytes Extra: LEN = 22 (16 hex digits + len("STARGZ"))
- 22 bytes Extra: subfield = fmt.Sprintf("%016xSTARGZ", offsetOfTOC)
- 5 bytes flate header: BFINAL = 1(last block), BTYPE = 0(non-compressed block), LEN = 0
- 8 bytes gzip footer
(End of eStargz)
"""
f = open(stargz, "rb")
f.seek(-51, 2)
footer = f.read(51)
assert len(footer) == 51
header_extra = footer[16:]
toc_offset = header_extra[0:16]
toc_offset = int(toc_offset.decode("utf-8"), base=16)
    f.seek(toc_offset)
    # The TOC gzip stream runs from toc_offset up to the 51-byte footer at the end.
    toc_gzip = f.read(os.path.getsize(stargz) - 51 - toc_offset)
toc_tar = gzip.decompress(toc_gzip)
t = io.BytesIO(toc_tar)
with tarfile.open(fileobj=t, mode="r") as tf:
def is_within_directory(directory, target):
abs_directory = os.path.abspath(directory)
abs_target = os.path.abspath(target)
prefix = os.path.commonprefix([abs_directory, abs_target])
return prefix == abs_directory
def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
for member in tar.getmembers():
member_path = os.path.join(path, member.name)
if not is_within_directory(path, member_path):
raise Exception("Attempted Path Traversal in Tar File")
tar.extractall(path, members, numeric_owner)
safe_extract(tf)
f.close()
return "stargz.index.json"
def docker_image_repo(reference):
return posixpath.basename(reference).split(":")[0]
def random_string(l=64):
res = "".join(random.choices(string.ascii_uppercase + string.digits, k=l))
return res


@@ -0,0 +1,208 @@
from abc import ABCMeta, abstractmethod
from distributor import Distributor
from utils import Size, Unit, pushd
import xattr
import os
import utils
from workload_gen import WorkloadGen
"""
Scratch a target directory
Verify image according to per schema
"""
class Verifier:
__metaclass__ = ABCMeta
def __init__(self, target, dist: Distributor):
self.target = target
self.dist = dist
@abstractmethod
def scratch(self):
pass
@abstractmethod
def verify(self):
pass
class XattrVerifier(Verifier):
def __init__(self, target, dist: Distributor):
super().__init__(target, dist)
def scratch(self, scratch_dir):
"""Put various kinds of xattr value into.
1. Very long value
2. a common short value
3. Nothing resides in value field
4. Single file, multiple pairs.
5. /n
6. whitespace
7. 中文
8. Binary
9. Only key?
"""
self.dist.put_symlinks(100)
files_cnt = 20
self.dist.put_multiple_files(files_cnt, Size(9, Unit.KB))
self.scratch_dir = os.path.abspath(scratch_dir)
self.source_files = {}
self.source_xattrs = {}
self.source_dirs = {}
self.source_dirs_xattrs = {}
self.encoding = "gb2312"
self.xattr_pairs = 50 if utils.get_fs_type(os.getcwd()) == "xfs" else 20
# TODO: Only key without values?
with pushd(self.scratch_dir):
for f in self.dist.files[-files_cnt:]:
relative_path = os.path.relpath(f, start=self.scratch_dir)
self.source_xattrs[relative_path] = {}
for idx in range(0, self.xattr_pairs):
                    # TODO: Randomize this key
k = f"trusted.nydus.{Distributor.generate_random_name(20, chinese=True)}"
v = f"_{Distributor.generate_random_length_name(20, chinese=True)}"
xattr.setxattr(f, k.encode(self.encoding), v.encode(self.encoding))
# Use relative or canonicalized names as key to locate
# path in source rootfs directory. So we verify if image is
# packed correctly.
self.source_files[relative_path] = os.path.abspath(f)
self.source_xattrs[relative_path][k] = v
dir_cnt = 20
self.dist.put_directories(dir_cnt)
        # Add xattr key-value pairs to directories.
with pushd(self.scratch_dir):
for d in self.dist.dirs[-dir_cnt:]:
relative_path = os.path.relpath(d, start=self.scratch_dir)
self.source_dirs_xattrs[relative_path] = {}
for idx in range(0, self.xattr_pairs):
                    # TODO: Randomize this key
k = f"trusted.{Distributor.generate_random_name(20)}"
v = f"{Distributor.generate_random_length_name(50)}"
xattr.setxattr(d, k, v.encode())
# Use relative or canonicalized names as key to locate
# path in source rootfs directory. So we verify if image is
# packed correctly.
self.source_dirs[relative_path] = os.path.abspath(d)
self.source_dirs_xattrs[relative_path][k] = v
def verify(self, target_dir):
""""""
with pushd(target_dir):
for f in self.source_files.keys():
fp = os.path.join(target_dir, f)
attrs = os.listxattr(path=fp, follow_symlinks=False)
assert len(attrs) == self.xattr_pairs
for k in self.source_xattrs[f].keys():
v = os.getxattr(fp, k.encode(self.encoding)).decode(self.encoding)
assert v == self.source_xattrs[f][k]
attrs = os.listxattr(fp, follow_symlinks=False)
if self.encoding != "gb2312":
for attr in attrs:
v = xattr.getxattr(f, attr)
assert attr in self.source_xattrs[f].keys()
assert v.decode(self.encoding) == self.source_xattrs[f][attr]
with pushd(target_dir):
for d in self.source_dirs.keys():
dp = os.path.join(target_dir, d)
attrs = xattr.listxattr(dp)
assert len(attrs) == self.xattr_pairs
for attr in attrs:
v = xattr.getxattr(d, attr)
assert attr in self.source_dirs_xattrs[d].keys()
assert v.decode(self.encoding) == self.source_dirs_xattrs[d][attr]
class SymlinkVerifier(Verifier):
def __init__(self, target, dist: Distributor):
super().__init__(target, dist)
def scratch(self):
# TODO: directory symlinks?
self.dist.put_symlinks(140)
self.dist.put_symlinks(24, chinese=True)
def verify(self, target_dir, source_dir):
for sl in self.dist.symlinks:
vt = os.path.join(target_dir, sl)
st = os.path.join(source_dir, sl)
assert os.readlink(st) == os.readlink(vt)
class HardlinkVerifier(Verifier):
    def __init__(self, target, dist):
super().__init__(target, dist)
def scratch(self):
self.dist.put_hardlinks(30)
self.outer_source_name = "outer_source"
self.inner_hardlink_name = "inner_hardlink"
with pushd(os.path.dirname(os.path.realpath(self.dist.top_dir))):
fd = os.open(self.outer_source_name, os.O_CREAT | os.O_RDWR)
os.close(fd)
os.link(
self.outer_source_name,
os.path.join(self.target, self.inner_hardlink_name),
)
assert (
os.stat(os.path.join(self.target, self.inner_hardlink_name)).st_nlink == 2
)
def verify(self, target_dir, source_dir):
for links in self.dist.hardlinks.values():
try:
links_iter = iter(links)
l = next(links_iter)
except StopIteration:
continue
t_hl_path = os.path.join(target_dir, l)
last_md5 = WorkloadGen.calc_file_md5(t_hl_path)
last_stat = os.stat(t_hl_path)
last_path = t_hl_path
for l in links_iter:
t_hl_path = os.path.join(target_dir, l)
t_hl_md5 = WorkloadGen.calc_file_md5(t_hl_path)
t_hl_stat = os.stat(t_hl_path)
assert last_md5 == t_hl_md5
assert (
last_stat == t_hl_stat
), f"last hardlink path {last_path}, cur hardlink path {t_hl_path}"
last_md5 = t_hl_md5
last_stat = t_hl_stat
last_path = t_hl_path
with pushd(target_dir):
assert (
os.stat(os.path.join(target_dir, self.inner_hardlink_name)).st_nlink
== 1
)
class DirectoryVerifier(Verifier):
pass
class FileModeVerifier(Verifier):
pass
class UGIDVerifier(Verifier):
pass
class SparseVerifier(Verifier):
pass
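# A sketch of how these verifiers are meant to be driven (assumes a Distributor
# instance built elsewhere; the directory arguments are illustrative):
def _example_verify_flow(source_rootfs, mounted_rafs, dist: Distributor):
    xattr_verifier = XattrVerifier(source_rootfs, dist)
    xattr_verifier.scratch(source_rootfs)
    # ... build a nydus image from source_rootfs and mount it at mounted_rafs ...
    xattr_verifier.verify(mounted_rafs)
    symlink_verifier = SymlinkVerifier(source_rootfs, dist)
    symlink_verifier.scratch()
    # ... rebuild and remount ...
    symlink_verifier.verify(mounted_rafs, source_rootfs)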


@@ -0,0 +1,113 @@
from utils import pushd
import os
import shutil
import xattr
import stat
import enum
class WhiteoutSpec(enum.Enum):
OCI = "oci"
OVERLAY = "overlayfs"
def get_value(self):
return self.value
def __str__(self) -> str:
return self.get_value()
class Whiteout:
opaque_dir_key = "trusted.overlay.opaque".encode()
opaque_dir_value = "y".encode()
def __init__(self, spec=WhiteoutSpec.OCI) -> None:
super().__init__()
self.spec = spec
@staticmethod
def mirror_fs_structure(top, path):
"""
:top: Target dir into which to construct mirrored tree.
"path: Should be a relative path like `a/b/c`
So this function creates directories recursively until reaching to the last component.
Moreover, call should be responsible for creating the target file or directory.
"""
path = os.path.normpath(path)
dir_path = ""
with pushd(top):
for d in path.split("/")[:-1]:
try:
os.chdir(d)
except FileNotFoundError:
if len(d) == 0:
continue
os.mkdir(d)
os.chdir(d)
finally:
dir_path += d + "/"
return dir_path, path.split("/")[-1]
@staticmethod
def mirror_files(files, original_rootfs, target_rootfs):
"""
files paths relative to rootfs, e.g.
foo/bar/f
"""
for f in files:
mirrored_path, name = Whiteout.mirror_fs_structure(target_rootfs, f)
src_path = os.path.join(original_rootfs, f)
dst_path = os.path.join(target_rootfs, mirrored_path, name)
shutil.copyfile(src_path, dst_path, follow_symlinks=False)
def whiteout_one_file(self, top, lower_relpath):
"""
:top: The top root directory from which to mirror from lower relative path.
:lower_relpath: Should look like `a/b/c` and this function puts `{top}/a/b/.wh.c` into upper layer
"""
whiteout_file_parent, whiteout_file = Whiteout.mirror_fs_structure(
top, lower_relpath
)
if self.spec == WhiteoutSpec.OCI:
f = os.open(
os.path.join(top, whiteout_file_parent, f".wh.{whiteout_file}"),
os.O_CREAT,
)
os.close(f)
elif self.spec == WhiteoutSpec.OVERLAY:
d = os.path.join(top, whiteout_file_parent, whiteout_file)
os.mknod(
d,
0o644 | stat.S_IFCHR,
0,
)
            # Whiting out a regular file does not need such an xattr pair, but set it anyway (naughty monkey).
xattr.setxattr(d, self.opaque_dir_key, self.opaque_dir_value)
def whiteout_opaque_directory(self, top, lower_relpath):
upper_opaque_dir = os.path.join(top, lower_relpath)
if self.spec == WhiteoutSpec.OCI:
os.makedirs(upper_opaque_dir, exist_ok=True)
f = os.open(os.path.join(upper_opaque_dir, ".wh..wh..opq"), os.O_CREAT)
os.close(f)
elif self.spec == WhiteoutSpec.OVERLAY:
os.makedirs(upper_opaque_dir, exist_ok=True)
xattr.setxattr(upper_opaque_dir, self.opaque_dir_key, self.opaque_dir_value)
def whiteout_one_dir(self, top, lower_relpath):
whiteout_dir_parent, whiteout_dir = Whiteout.mirror_fs_structure(
top, lower_relpath
)
if self.spec == WhiteoutSpec.OCI:
os.makedirs(os.path.join(top, whiteout_dir_parent, f".wh.{whiteout_dir}"))
elif self.spec == WhiteoutSpec.OVERLAY:
d = os.path.join(top, whiteout_dir_parent, whiteout_dir)
os.mknod(
d,
0o644 | stat.S_IFCHR,
0,
)
            # Whiting out a directory does not need such an xattr pair, but set it anyway (naughty monkey).
xattr.setxattr(d, self.opaque_dir_key, self.opaque_dir_value)
xattr.setxattr(d, "trusted.nydus.opaque", "y".encode())


@@ -0,0 +1,558 @@
import multiprocessing
import os
import random
import threading
from stat import *
from utils import logging_setup, Unit, Size, pushd, dump_process_mem_cpu_load
import logging
import datetime
import hashlib
import time
import io
import threading
import multiprocessing
from multiprocessing import Queue, current_process
import stat
def get_current_time():
return datetime.datetime.now()
def rate_limit(interval_rate):
last = datetime.datetime.now()
def inner(func):
def wrapped(*args):
nonlocal last
if (datetime.datetime.now() - last).seconds > interval_rate:
func(*args)
last = datetime.datetime.now()
return wrapped
return inner
@rate_limit(interval_rate=5)
def dump_status(name, cnt):
logging.info("Process %d - %s verified %lu files", os.getpid(), name, cnt)
size_list = [
1,
8,
13,
16,
19,
32,
64,
101,
100,
102,
100,
256,
Size(4, Unit.KB).B,
Size(7, Unit.KB).B,
Size(8, Unit.KB).B,
Size(16, Unit.KB).B,
Size(17, Unit.KB).B,
Size(1, Unit.MB).B - 100,
Size(1, Unit.MB).B,
Size(3, Unit.MB).B - Size(2, Unit.KB).B,
Size(3, Unit.MB).B,
Size(4, Unit.MB).B,
]
class WorkloadGen:
def __init__(self, target_dir, verify_dir):
"""
        :target_dir: The directory against which IO is generated
        :verify_dir: Generally the original rootfs of the test image
"""
self.target_dir = target_dir
self.verify_dir = verify_dir
self.verify = True
self.io_error = False
self.verifier = {} # For append write verification
logging.info(
"Target dir: %s, Verified dir: %s", self.target_dir, self.verify_dir
)
def collect_all_dirs(self):
# In case this function is called more than once.
if hasattr(self, "collected"):
return
self.collected = True
self._collected_dirs = []
self._collected_dirs.append(self.target_dir)
with pushd(self.target_dir):
self._collect_each_dir(self.target_dir, self.target_dir)
def _collect_each_dir(self, root_dir, parent_dir):
files = os.listdir(parent_dir)
with pushd(parent_dir):
for one in files:
st = os.lstat(one)
if S_ISDIR(st.st_mode) and len(os.listdir(one)) != 0:
realpath = os.path.realpath(one)
self._collected_dirs.append(realpath)
self._collect_each_dir(root_dir, one)
else:
continue
def iter_all_files(self, file_op, dir_op=None):
for (cur_dir, subdirs, files) in os.walk(
self.target_dir, topdown=True, followlinks=False
):
with pushd(cur_dir):
for f in files:
file_op(f)
if dir_op is not None:
for d in subdirs:
dir_op(d)
def verify_single_file(self, path_from_mp):
target_md5 = WorkloadGen.calc_file_md5(path_from_mp)
# Locate where the source file is, so to calculate its md5 which
# will be verified later
source_path = os.path.join(
self.verify_dir, os.path.relpath(path_from_mp, start=self.target_dir)
)
source_md5 = WorkloadGen.calc_file_md5(source_path)
assert (
target_md5 == source_md5
), f"Verification error. Want {source_md5} but got {target_md5}"
@staticmethod
def count_files(top_dir):
"""
        Including hidden files and directories.
        Count every entry within `top_dir`, whether it is an OCI special file or not.
"""
total = 0
for (cur_dir, subdirs, files) in os.walk(
top_dir, topdown=True, followlinks=False
):
total += len(files)
total += len(subdirs)
logging.info("%d is counted!", total)
return total
@staticmethod
def calc_file_md5(path):
md5 = hashlib.md5()
with open(path, "rb") as f:
for block in iter(lambda: f.read(Size(128, Unit.KB).B), b""):
md5.update(block)
return md5.digest()
def __verify_one_level(self, path_queue, conn):
target_files = []
cnt = 0
err_cnt = 0
name = current_process().name
while True:
            # In newer Python versions a closed multiprocessing queue can be detected,
            # so we would not have to rely on a timeout here.
try:
(abs_dir, dirs, files) = path_queue.get(timeout=3)
except Exception as exc:
logging.info("Verify process %s finished.", name)
conn.send((target_files, cnt, err_cnt))
return
dump_status(name, cnt)
sub_dir_count = 0
for f in files:
                # Per the OCI image spec, whiteout special files should not be visible in the mounted fs.
assert not f.startswith(".wh.")
# don't try to validate symlink
cur_path = os.path.join(abs_dir, f)
relpath = os.path.relpath(cur_path, start=self.target_dir)
target_files.append(relpath)
source_path = os.path.join(self.verify_dir, relpath)
try:
if os.path.islink(cur_path):
if os.readlink(cur_path) != os.readlink(source_path):
err_cnt += 1
logging.error("Symlink mismatch, %s", cur_path)
elif os.path.isfile(cur_path):
# TODO: How to verify special files?
cur_md5 = WorkloadGen.calc_file_md5(cur_path)
source_md5 = WorkloadGen.calc_file_md5(source_path)
if cur_md5 != source_md5:
err_cnt += 1
logging.error("Verification error. File %s", cur_path)
assert False
elif stat.S_ISBLK(os.stat(cur_path).st_mode):
assert (
os.stat(cur_path).st_rdev == os.stat(source_path).st_rdev
), f"left {os.stat(cur_path).st_rdev} while right {os.stat(source_path).st_rdev} "
elif stat.S_ISCHR(os.stat(cur_path).st_mode):
assert (
os.stat(cur_path).st_rdev == os.stat(source_path).st_rdev
), f"left {os.stat(cur_path).st_rdev} while right {os.stat(source_path).st_rdev} "
elif stat.S_ISFIFO(os.stat(cur_path).st_mode):
pass
elif stat.S_ISSOCK(os.stat(cur_path).st_mode):
pass
except AssertionError as exp:
logging.warning("current %s, source %s", cur_path, source_path)
raise exp
cnt += 1
for d in dirs:
assert not d.startswith(".wh.")
cur_path = os.path.join(abs_dir, d)
relpath = os.path.relpath(cur_path, start=self.target_dir)
target_files.append(relpath)
                # Directory nlink should equal 2 + the number of child directories
if not os.path.islink(cur_path):
sub_dir_count += 1
assert sub_dir_count + 2 == os.stat(abs_dir).st_nlink
def verify_entire_fs(self, filter_list: list = []) -> bool:
cnt = 0
err_cnt = 0
target_files = set()
processes = []
        # There are underlying threads transferring objects. Keep the queue size
        # small so that few errors are printed once one side of the queue is closed.
path_queue = Queue(20)
for i in range(8):
(parent_conn, child_conn) = multiprocessing.Pipe(False)
p = multiprocessing.Process(
name=f"verifier_{i}",
target=self.__verify_one_level,
args=(path_queue, child_conn),
)
p.start()
processes.append((p, parent_conn))
for (abs_dir, dirs, files) in os.walk(self.target_dir, topdown=True):
try:
path_queue.put((abs_dir, dirs, files))
except Exception:
return False
for (p, conn) in processes:
try:
(child_files, child_cnt, child_err_cnt) = conn.recv()
except EOFError:
logging.error("EOF")
return False
p.join()
target_files.update(child_files)
cnt += child_cnt
err_cnt += child_err_cnt
path_queue.close()
path_queue.join_thread()
del path_queue
logging.info("Verified %u files in %s", cnt, self.target_dir)
if err_cnt > 0:
logging.error("Verify fails, %u errors", err_cnt)
return False
# Collect files belonging to the original rootfs into `source_files`.
# Criteria is that each file in `source_files` should appear in the rafs.
source_files = set()
opaque_dirs = []
for (abs_dir, dirs, files) in os.walk(self.verify_dir):
for f in files:
cur_path = os.path.join(abs_dir, f)
relpath = os.path.relpath(cur_path, start=self.verify_dir)
source_files.add(relpath)
if f == ".wh..wh..opq":
opaque_dirs.append(os.path.relpath(abs_dir, start=self.verify_dir))
for d in dirs:
cur_path = os.path.join(abs_dir, d)
relpath = os.path.relpath(cur_path, start=self.verify_dir)
source_files.add(relpath)
diff_files = list()
for el in source_files:
if not el in target_files:
diff_files.append(el)
trimmed_diff_files = []
whiteout_files = [
(
os.path.basename(f),
os.path.join(
os.path.dirname(f), os.path.basename(f).replace(".wh.", "", 1)
),
)
for f in diff_files
if os.path.basename(f).startswith(".wh.")
]
# The only possible reason we have different files is due to whiteout
for suspect in diff_files:
for d in opaque_dirs:
if suspect.startswith(d):
trimmed_diff_files.append(suspect)
continue
            # It seems overlayfs does not hide the opaque special (char) file if there is nothing to white out
try:
# Example: c????????? ? ? ? ? ? foo
with open(os.path.join(self.verify_dir, suspect), "rb") as f:
pass
except OSError as e:
if e.errno == 2:
trimmed_diff_files.append(suspect)
else:
pass
# For example:
# ['DIR.0.0/pQGLzKTWSpaCatjcwAqiZAGOxbfexiOvVsXqFqUhldTxLsIpONVnavybHObiCZepXsLyoPwDAXOoDtJFdZVUlrisTDaenJhsJVXegHuTMzFFqhowZAfcgggxVfEvXDtAVakarhSkZhavBtuuTFPOqgyowbI.regular',
# 'DIR.0.0/.wh.pQGLzKTWSpaCatjcwAqiZAGOxbfexiOvVsXqFqUhldTxLsIpONVnavybHObiCZepXsLyoPwDAXOoDtJFdZVUlrisTDaenJhsJVXegHuTMzFFqhowZAfcgggxVfEvXDtAVakarhSkZhavBtuuTFPOqgyowbI.regular',
# 'DIR.0.0/DIR.1.1/DIR.2.0/zktaNKmXMVgITVbAUFHpNfvECfVIdO.dir', 'DIR.0.0/DIR.1.1/DIR.2.0/.wh.zktaNKmXMVgITVbAUFHpNfvECfVIdO.dir', 'i/am/troublemaker/.wh.foo']
if len(whiteout_files):
base = os.path.basename(suspect)
if f".wh.{base}" in list(zip(*whiteout_files))[0]:
trimmed_diff_files.append(suspect)
for (_, s) in whiteout_files:
if suspect.startswith(s):
trimmed_diff_files.append(suspect)
diff_files = list(
filter(
lambda x: x not in trimmed_diff_files
and x not in filter_list
and not os.path.basename(x).startswith(".wh."),
diff_files,
)
)
assert len(diff_files) == 0, print(diff_files)
return True
def read_collected_files(self, duration):
"""
Randomly select a file from a random directory which was collected
when set up this workload generator. No dir recursive read happens.
"""
dirs_cnt = len(self._collected_dirs)
logging.info("Total %u directories will be have stress read", dirs_cnt)
t_begin = get_current_time()
t_delta = t_begin - t_begin
op_cnt, total_size = 0, 0
while t_delta.total_seconds() <= duration:
target_dir = random.choice(self._collected_dirs)
files = os.listdir(target_dir)
target_file = random.choice(files)
one_path = os.path.join(target_dir, target_file)
if os.path.isdir(one_path):
os.listdir(one_path)
continue
if os.path.islink(one_path):
                # Don't expect anything broken to happen.
os.readlink(one_path)
relpath = os.path.relpath(one_path, start=self.target_dir)
sym_path = os.path.join(self.verify_dir, relpath)
assert os.readlink(one_path) == os.readlink(sym_path)
continue
if not os.path.isfile(one_path):
continue
with open(one_path, "rb") as f:
st = os.stat(one_path)
file_size = st.st_size
do_read = True
while do_read:
# Select a file position randomly
pos = random.randint(0, file_size)
try:
f.seek(pos)
except io.UnsupportedOperation as exc:
logging.exception(exc)
break
except Exception as exc:
raise type(exc)(
str(exc)
+ f"Seek pos {pos}, file {one_path}, file size {file_size}"
)
io_size = WorkloadGen.pick_io_size()
logging.debug(
"File %s , Pos %u, IO Size %u", target_file, pos, io_size
)
op_cnt += 1
total_size += io_size
try:
buf = f.read(io_size)
assert io_size == len(buf) or file_size - pos == len(
buf
), f"file path {one_path}: io_size {io_size} buf len {len(buf)} file_size {file_size} pos {pos}"
except IOError as exc:
logging.error(
"file %s, offset %u, io size %u", one_path, pos, io_size
)
raise exc
if random.randint(0, 13) % 4 == 0:
do_read = False
if self.verify:
self.verify_file_range(one_path, pos, io_size, buf)
t_delta = get_current_time() - t_begin
return op_cnt, total_size, t_delta.total_seconds()
def verify_file_range(self, file_path, offset, length, buf):
relpath = os.path.relpath(file_path, start=self.target_dir)
file_path = os.path.join(self.verify_dir, relpath)
with open(file_path, "rb") as f:
f.seek(offset)
out = f.read(length)
orig_md5 = hashlib.md5(out).digest()
buf_md5 = hashlib.md5(buf).digest()
if orig_md5 != buf_md5:
logging.error(
"File Verification error. path: %s offset: %lu len: %u. want %s but got %s",
file_path,
offset,
length,
str(orig_md5),
str(buf_md5),
)
raise Exception(
f"Verification error {file_path} {offset} {length} failed."
)
def io_read(self, io_duration, conn=None):
try:
cnt, size, duration = self.read_collected_files(io_duration)
WorkloadGen.print_summary(cnt, size, duration)
except Exception as exc:
logging.exception("Stress read failure, %s", exc)
self.io_error = True
finally:
if conn is not None:
conn.send(self.io_error)
conn.close()
def setup_workload_generator(self):
self.collect_all_dirs()
def torture_read(self, threads_cnt: int, duration: int, verify=True):
readers_list = []
self.verify = verify
for idx in range(0, threads_cnt):
reader_name = "rafs_reader_%d" % idx
(parent_conn, child_conn) = multiprocessing.Pipe(False)
rafs_reader = multiprocessing.Process(
name=reader_name,
target=self.io_read,
args=(duration, child_conn),
)
logging.info("Reader %s starts work" % reader_name)
readers_list.append((rafs_reader, parent_conn))
rafs_reader.start()
self.readers = readers_list
def finish_torture_read(self):
for one in self.readers:
self.io_error = one[1].recv() or self.io_error
one[0].join()
if self.verify:
assert not self.io_error
self.stop_load_monitor()
@classmethod
def print_summary(cls, cnt, size, duration):
logging.info(
"Issued reads: %(cnt)lu Total read size: %(size)lu bytes Time duration: %(duration)u"
% {"cnt": cnt, "size": size, "duration": duration}
)
@staticmethod
def pick_io_size():
return random.choice(size_list)
@staticmethod
def issue_single_write(file_name, offset, bs: Size, size: Size):
"""
:size: Amount of data to be written to
:bs: Each write io block size
:offset: From which offset of the file to star write
"""
block = os.urandom(bs.B)
left = size.B
fd = os.open(file_name, os.O_RDWR)
while left > 0:
os.pwrite(fd, block, offset + size.B - left)
left -= bs.B
os.close(fd)
@staticmethod
def issue_single_read(dir, file_name, offset: Size, bs: Size):
with pushd(dir):
with open(file_name, "rb") as f:
buf = os.pread(f.fileno(), bs.B, offset.B)
return buf
def start_load_monitor(self, pid):
def _dump_mem_info(anchor, pid):
while not self.monitor_stopped:
dump_process_mem_cpu_load(pid)
time.sleep(2)
self.load_monitor = threading.Thread(
name="load_monitor", target=_dump_mem_info, args=(self, pid)
)
self.monitor_stopped = False
self.load_monitor.start()
def stop_load_monitor(self):
if "load_monitor" in self.__dict__:
self.monitor_stopped = True
self.load_monitor.join()
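# A condensed usage sketch (mount paths are illustrative assumptions): stress-read
# a mounted rafs, verify the data against the original rootfs, then walk the tree once.
def _example_stress(mounted_rafs, original_rootfs):
    wg = WorkloadGen(mounted_rafs, original_rootfs)
    wg.setup_workload_generator()
    wg.torture_read(threads_cnt=4, duration=10, verify=True)
    wg.finish_torture_read()
    assert wg.verify_entire_fs()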
if __name__ == "__main__":
print("This is workload generator")
with open("append_test", "a") as f:
wg = WorkloadGen(None, None)
wg.do_append(f.fileno(), Size(1, Unit.KB), Size(16, Unit.KB))
wg = WorkloadGen(".", None)
wg.torture_append(2, Size(1, Unit.KB), Size(16, Unit.MB))
wg.finish_torture_append()

Some files were not shown because too many files have changed in this diff