Compare commits


87 Commits

Author SHA1 Message Date
Jiang Liu 004cdf6749 storage: refine the way to implement BlobIoChunk
Backport the new implementation of BlobIoChunk from master into v2.1.

Fixes: https://github.com/dragonflyoss/image-service/issues/1198

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-27 17:06:32 +08:00
Jiang Liu c2737ddb39 deny: fix CVE warnings about openssl and h2
Fix CVE warnings about openssl and h2.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-27 10:52:32 +08:00
Yan Song afe8c4633c e2e: make the test stronger
Remove the useless registry auth test.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-27 10:52:32 +08:00
Yan Song 8f8cd6bc46 nydusify: cleanup temporary directories for check
Remove temporarily generated directories after the `nydusify check`
command to save disk space, and prevent file conflicts
between multiple `nydusify check` operations.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-27 10:52:32 +08:00
Yan Song 60400200ea nydusify: fix overlayfs mount options for check
To fix the error when mounting an OCI v1 image in the check subcommand:

```
error: mount options is too long
```

The mount options have a 4KB buffer size limit, so we will
encounter the issue with huge images that have many layers.

We need to shorten the lowerdir paths in the overlayfs
options, changing `sha256:xxx` to `layer-N` to alleviate the issue.
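
The idea, in a minimal Rust sketch: symlink each digest-named layer directory to a short `layer-N` alias before joining the `lowerdir=` string. The real fix lives in nydusify's Go code; `short_lowerdir` and the layout here are hypothetical.

```rust
use std::io;
use std::os::unix::fs::symlink;
use std::path::Path;

/// Build a short overlayfs `lowerdir=` option from long layer paths.
fn short_lowerdir(work_dir: &Path, layers: &[&Path]) -> io::Result<String> {
    let mut parts = Vec::with_capacity(layers.len());
    for (i, layer) in layers.iter().enumerate() {
        // "layer-N" stands in for the long "sha256:xxx" directory name.
        let link = work_dir.join(format!("layer-{}", i));
        symlink(layer, &link)?;
        parts.push(link.to_string_lossy().into_owned());
    }
    // overlayfs joins lower layers with ':'; the whole option string
    // must stay under the ~4KB mount-option limit.
    Ok(parts.join(":"))
}
```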

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-27 10:52:32 +08:00
Yan Song ce05676321 action: fix checkout on pull_request_target
The `pull_request_target` trigger strictly checks out the master branch
code, but we must use the PR code for the smoke test.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-25 21:58:21 +08:00
Yan Song 9612a2af00 action: fix smoke test for branch pattern
To match `master` and `stable/*` branches at least.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-20 16:43:08 +08:00
Jiang Liu a1872cf306 rafs: fix a regression caused by commit 2616fb2c05
Fix a regression caused by commit 2616fb2c05.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-20 11:37:16 +08:00
Jiang Liu 1038ddec3c rafs: fix a possible bug in v6_dirent_size()
Function Node::v6_dirent_size() may return a wrong result when "." and
".." are not the first and second entries in the sorted dirent array.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-20 11:37:16 +08:00
Jiang Liu 3ffc8aec6b bats: fix a failure related to git
Fix a failure related to git:
```
Your branch is up to date with 'origin/v2.1-backport'.

nothing to commit, working tree clean
[tone]Error: The return code of run() in run.sh is not 0
hint: discouraged. You can squelch this message by running one of the following
hint: commands sometime before your next pull:
hint:
hint:   git config pull.rebase false  # merge (the default strategy)
hint:   git config pull.rebase true   # rebase
hint:   git config pull.ff only       # fast-forward only
hint:
hint: You can replace "git config" with "git config --global" to set a default
hint: preference for all repositories. You can also pass --rebase, --no-rebase,
hint: or --ff-only on the command line to override the configured default per
hint: invocation.
```

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-20 11:37:16 +08:00
Yan Song ef2033c2e2 builder: support `--parent-bootstrap` for merge
This option allows merging multiple upper-layer bootstraps with
the bootstrap of a parent image, so that we can implement the container
commit operation for nydus images.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-09 10:06:56 +00:00
Yan Song 2a83e06e50 nydusify: enable pigz by default
We should use pigz to support parallel gzip decompression, which
improves the conversion speed when unpacking gzip layers of the source image.

We still allow users to specify the env `CONTAINERD_DISABLE_PIGZ=1` to
disable the feature when encountering any decompression error.

See 33c0eafb17/archive/compression/compression.go (L261)
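
A hedged Rust sketch of that selection logic (containerd's actual implementation is the Go code linked above; the function shape here is illustrative):

```rust
use std::env;
use std::process::{Command, Stdio};

/// Prefer pigz for parallel gzip decompression, honoring the
/// CONTAINERD_DISABLE_PIGZ=1 escape hatch.
fn gzip_decompressor() -> Option<Command> {
    if env::var_os("CONTAINERD_DISABLE_PIGZ").map_or(false, |v| v == "1") {
        return None; // caller falls back to the built-in gzip reader
    }
    let mut cmd = Command::new("pigz");
    // `pigz -d -c` decompresses stdin to stdout using all cores.
    cmd.args(["-d", "-c"]).stdin(Stdio::piped()).stdout(Stdio::piped());
    Some(cmd)
}
```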

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-28 07:02:07 +00:00
imeoer c69a43eeeb
Merge pull request #1106 from jiangliu/rafs-entry-name-v2.1
rafs: fix a bug in calculating the offset for the dirent name
2023-02-24 15:35:40 +08:00
Jiang Liu a8ef7ea415 rafs: fix a bug in calculating the offset for the dirent name
There is a bug in calculating the offset and size of a RAFS v6 dirent name:
it will be treated as 0 instead of 4096 if the last block is exactly 4096 bytes.
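
The pitfall is the classic modulo edge case; a small self-contained Rust illustration (names are ours, not the actual rafs internals):

```rust
const EROFS_BLOCK_SIZE: u64 = 4096;

/// Size of the tail of the dirent-name region. The buggy version used a
/// bare `total % EROFS_BLOCK_SIZE`, which collapses a full 4096-byte last
/// block to 0.
fn last_block_size(total: u64) -> u64 {
    match total % EROFS_BLOCK_SIZE {
        0 if total > 0 => EROFS_BLOCK_SIZE, // a full last block is 4096, not 0
        rem => rem,
    }
}

fn main() {
    assert_eq!(last_block_size(4096), 4096);
    assert_eq!(last_block_size(4100), 4);
}
```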

Fixes: https://github.com/dragonflyoss/image-service/issues/1098

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 15:29:01 +08:00
imeoer 9eccf9b5fe
Merge pull request #1095 from jiangliu/v2.1-compat
rafs: reserve bits in RafsSuperFlags to be compatible with v2.2
2023-02-19 23:23:37 +08:00
Jiang Liu f6bd98fcdd rafs: reserve bits in RafsSuperFlags to be compatible with v2.2
Reserve bits in RafsSuperFlags so images generated by nydus 2.2 can be
mounted by v2.1.4 and later.
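
A sketch of the technique, assuming the bitflags crate (flag names and bit values here are illustrative, not the real RafsSuperFlags layout):

```rust
use bitflags::bitflags;

bitflags! {
    pub struct RafsSuperFlags: u64 {
        const COMPRESSION_NONE = 0x0000_0001;
        const COMPRESSION_LZ4  = 0x0000_0002;
        // Bits used by nydus v2.2 features, reserved here so that a
        // v2.1 parser does not reject an image that has them set.
        const PRESERVED_COMPAT_1 = 0x0000_0100;
        const PRESERVED_COMPAT_2 = 0x0000_0200;
    }
}

fn main() {
    // Without the reserved definitions, from_bits() would return None
    // for a superblock written by the newer release.
    assert!(RafsSuperFlags::from_bits(0x0000_0101).is_some());
}
```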

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-19 23:06:47 +08:00
Jiang Liu 876fb68f15
Merge pull request #1058 from yqleng1987/add-bats-test-for-stable-branch
add e2e test cases to CI for the stable branch
2023-02-06 23:18:55 +08:00
Yiqun Leng 99322a0d5a add e2e test cases to CI for the stable branch
These cases have been merged into the master branch; the difference is
that the zran format case is excluded, since the stable branch doesn't support zran for now.

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-02-06 18:42:02 +08:00
Jiang Liu 24c3bb9ab2
Merge pull request #1017 from jiangliu/v6-underflow-2.1
rafs: fix an underflow bug in rafs v6 implementation
2023-01-18 18:27:10 +08:00
Jiang Liu e3621f5397 rafs: fix an underflow bug in rafs v6 implementation
Fix an underflow bug in find_target_block() of RAFS v6. `last` is usize
instead of isize, so it may underflow.
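
The general pattern for guarding such a subtraction; a sketch, not the actual find_target_block() code:

```rust
/// Stepping back from index `last` panics in debug builds (and wraps in
/// release builds) when `last == 0` and the type is usize. checked_sub
/// makes the boundary explicit.
fn step_back(last: usize) -> Option<usize> {
    last.checked_sub(1) // None instead of underflow when last == 0
}

fn main() {
    assert_eq!(step_back(0), None);
    assert_eq!(step_back(5), Some(4));
}
```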

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-18 17:42:54 +08:00
Jiang Liu 1916c76767
Merge pull request #967 from imeoer/stable/v2.1-cherry-pick
[backport] nydusify: some minor fixups
2022-12-19 21:53:01 +08:00
Yan Song 876571ba4e nydusify: fix a http fallback case for build cache
When the `--source/--target` options specified by the users
is targeting the https registry, but `--build-cache` option
is targeting the http registry, nydusify can't fallback to
plain http for build cache registry, it causing a pull/push
failure for the build cache image.

This patch fixed the failure case.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-19 09:53:39 +00:00
Yan Song c433c44c93 nydusify: fix panic if only --target be specified
nydusify panics when using `nydusify check --target localhost:5000/library/test:nydus`:

```
INFO[2022-12-19T07:24:02Z] Parsing image localhost:5000/library/test:nydus
INFO[2022-12-19T07:24:02Z] trying next host                              error="failed to do request: Head \"https://localhost:5000/v2/library/test/manifests/nydus\": http: server gave HTTP response to HTTPS client" host="localhost:5000"
INFO[2022-12-19T07:24:02Z] Parsing image localhost:5000/library/test:nydus
INFO[2022-12-19T07:24:02Z] Dumping OCI and Nydus manifests to ./output
INFO[2022-12-19T07:24:02Z] Pulling Nydus bootstrap to output/nydus_bootstrap
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xb363bd]

goroutine 1 [running]:
github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker.(*Checker).check(0xc000222500, {0x14544f8, 0xc0000a8000})
	/nydus-rs/contrib/nydusify/pkg/checker/checker.go:160 +0xedd
github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker.(*Checker).Check(0xc000222500, {0x14544f8, 0xc0000a8000})
	/nydus-rs/contrib/nydusify/pkg/checker/checker.go:88 +0xee
main.main.func2(0xc0000bbe40)
	/nydus-rs/contrib/nydusify/cmd/nydusify.go:608 +0x5b1
github.com/urfave/cli/v2.(*Command).Run(0xc000540fc0, 0xc0000bba80)
	/go/pkg/mod/github.com/urfave/cli/v2@v2.3.0/command.go:163 +0x8b8
github.com/urfave/cli/v2.(*App).RunContext(0xc0004e91e0, {0x14544f8, 0xc0000a8000}, {0xc0000ba040, 0x4, 0x4})
	/go/pkg/mod/github.com/urfave/cli/v2@v2.3.0/app.go:313 +0xb2a
github.com/urfave/cli/v2.(*App).Run(0xc0004e91e0, {0xc0000ba040, 0x4, 0x4})
	/go/pkg/mod/github.com/urfave/cli/v2@v2.3.0/app.go:224 +0x75
main.main()
	/nydus-rs/contrib/nydusify/cmd/nydusify.go:885 +0x8d5d
```

This patch first checks whether the target is nil.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-19 09:53:31 +00:00
Changwei Ge 3c649123cc
Merge pull request #961 from mofishzz/stable/v2.1_fix_v6_lookup
rafs: fix overflow panic of rafs v6 lookup
2022-12-15 11:27:09 +08:00
Huang Jianan d9d82e54bd rafs: fix overflow panic of rafs v6 lookup
The directory in v6 is stored in the following way:
...name1name2
The first subdirectory we get here is ".".

When looking for a file whose ASCII value is less than "." (such as
"*"), we need to move forward to block index -1. This caused the
usize index "pivot" to attempt to subtract with overflow and then
panic.

Fixes: 50ca1a1 ("rafs: optimize entry search in rafs v6")
Signed-off-by: Huang Jianan <jnhuang@linux.alibaba.com>
2022-12-15 10:55:27 +08:00
Jiang Liu 820b3782e2
Merge pull request #940 from changweige/2.1-fix-graceful-exit
nydusd: register signal handler earlier
2022-12-09 10:27:30 +08:00
Jiang Liu e02fd274d3
Merge pull request #939 from changweige/2.1-pick-prefetch-fix
rafs: fix a bug in fs prefetch
2022-12-09 10:26:30 +08:00
Changwei Ge df8a6f59e7 nydusd: register signal handler earlier
Otherwise, it loses the window to exit gracefully
by unmounting FUSE.
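
A minimal sketch of registering the handler first, assuming the signal_hook crate (nydusd's actual setup differs):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Register *before* mounting FUSE, so a signal that arrives during
    // setup still leads to a clean unmount.
    let term = Arc::new(AtomicBool::new(false));
    signal_hook::flag::register(signal_hook::consts::SIGTERM, Arc::clone(&term))?;

    // ... mount FUSE and serve requests, polling `term` ...
    while !term.load(Ordering::Relaxed) {
        break; // placeholder for the real session loop
    }
    // unmount FUSE here for a graceful exit
    Ok(())
}
```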

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-09 10:12:56 +08:00
Jiang Liu 92c536a214 rafs: fix a bug in fs prefetch
There's a bug which skips data chunks at the blob tail.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-09 09:33:08 +08:00
imeoer 0b286cb3f5
Merge pull request #936 from changweige/2.1-backoff-retry
2.1 backoff retry
2022-12-08 16:56:03 +08:00
Changwei Ge db8760aa1a nydusctl: refine nydusctl general information print messages
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-08 16:38:21 +08:00
Changwei Ge 10accd6284 storage: introduce BackOff delayer to mitigate backend pressure
Don't retry immediately, since the registry can return a "too many
requests" error. We'd better be slow to retry.
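
A sketch of the delaying retry loop (limits and names are illustrative, not the BackOff implementation itself):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry an operation with exponential backoff instead of hammering a
/// registry that is answering "too many requests".
fn retry_with_backoff<T, E>(mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    const MAX_RETRIES: u32 = 5;
    let mut delay = Duration::from_millis(500);
    let mut last_err = None;
    for _ in 0..MAX_RETRIES {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                sleep(delay); // be slow to retry
                delay *= 2;   // back off further each time
            }
        }
    }
    Err(last_err.unwrap())
}
```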

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-08 16:34:51 +08:00
Jiang Liu fad6d17130
Merge pull request #914 from imeoer/backport-auto-scheme
[backport] nydusd: automatically retry registry http scheme
2022-12-02 15:51:56 +08:00
Yan Song 6f7c8e5a20 storage: enhance retryable registry http scheme
Check the TLS connection error for the `wrong version number` keywords;
it's more reliable than a specific error code.
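
In essence (a sketch; the real check lives in the storage backend and inspects its own error type):

```rust
/// Decide whether a failed https request should be retried over plain
/// http. An "http response to https client" handshake typically surfaces
/// as an OpenSSL "wrong version number" error message.
fn should_fallback_to_http(err: &dyn std::error::Error) -> bool {
    err.to_string().contains("wrong version number")
}
```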

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-02 06:59:53 +00:00
Wenyu Huang 7e4e33becf nydusd: automatically retry registry http scheme
Signed-off-by: Wenyu Huang <huangwenyuu@outlook.com>
2022-12-02 06:59:45 +00:00
imeoer f3580b581f
Merge pull request #899 from changweige/2.1-enrich-nydusctl
Let nydusd show prefetch bandwidth and latency
2022-11-30 14:04:13 +08:00
Changwei Ge b2fa20d5f8 nydusctl: show prefetch latency and bandwidth
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-30 13:46:34 +08:00
Changwei Ge 5ced20e5ce nydusctl: adapt renaming prefetch_mr_count to prefetch_requests_count
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-30 13:46:34 +08:00
Jiang Liu 2eb90d9eb9
Merge pull request #892 from changweige/2.1-prefetch-metrcis
2.1 prefetch metrics
2022-11-30 12:17:03 +08:00
Jiang Liu 4119e1c34f
Merge pull request #887 from changweige/2.1-v6-blob-prefetch
rafs: prefetch based on blob chunks rather than files
2022-11-29 10:44:16 +08:00
imeoer f2e8a9b5e2
Merge pull request #898 from changweige/2.1-port-nydusify
2.1 port nydusify
2022-11-28 11:53:26 +08:00
Changwei Ge 852bdc2aab nydusify: add a parameter to change chunk size
Expose the parameter to end users.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-28 11:34:19 +08:00
Changwei Ge fb0e8d13d8 nydusify: add CLI parameter --compressor to control nydus-image
It has been proven that zstd produces a smaller image size. We
should provide users an option to use zstd as the nydus image compressor
to reduce image size.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-28 11:34:15 +08:00
Changwei Ge 9b0a538d83 rafs: prefetch based on blob chunks rather than files
Perform different policies for the v5 and v6 formats, as rafs v6 blobs
are capable of downloading chunks and decompressing them all by themselves.
For rafs v6, directly perform chunk-based full prefetch to reduce requests
to the container registry and P2P cluster.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-28 11:19:43 +08:00
Changwei Ge c7b3f89b2e metrics: rename prefetch_mr_count to prefetch_requests_count
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-28 10:41:00 +08:00
Changwei Ge fc69c331ac metrics/cache: add more prefetch related metrics
Record the average prefetch request latency.
Calculate the average prefetch bandwidth.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-28 10:29:39 +08:00
Changwei Ge bbcb0bffa3 metrics: add method set() to initialize the metric
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-24 12:11:19 +08:00
Jiang Liu 3efd75ae6a
Merge pull request #880 from changweige/2.1-fix-frequent-retry
fix too frequent retry
2022-11-22 18:48:19 +08:00
Changwei Ge 8684dac117 storage: fix too frequent retry when blob prefetch fails
tick() completes as soon as the next instant is reached, which is
a very short time rather than 1 second.

In addition, limit the total number of retries.
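
For context, tokio's interval timer completes its first tick immediately, which is the kind of behavior this fix has to account for (a sketch assuming tokio, which the project uses):

```rust
use tokio::time::{interval, Duration};

#[tokio::main]
async fn main() {
    let mut ticker = interval(Duration::from_secs(1));
    // The first tick completes immediately, not after 1 second, so a
    // retry loop that awaits tick() once per attempt retries instantly.
    ticker.tick().await; // returns right away
    ticker.tick().await; // this one waits ~1 second
}
```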

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-22 14:30:02 +08:00
Changwei Ge aa989d0264 storage: change error type if meta file is not found
ENOENT would be more suggestive.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-22 14:15:57 +08:00
Jiang Liu b8460d14d4
Merge pull request #877 from changweige/port-update-uhttp
cargo: update version of dbs-uhttp
2022-11-21 15:34:15 +08:00
imeoer 48a7a74143
Merge pull request #872 from changweige/port-nydusify-version
nydusify: beautify version print message of nydusify
2022-11-21 14:50:54 +08:00
Changwei Ge 8d05ba289a cargo: update version of dbs-uhttp
dbs-uhttp has fixed the problem that the http client
gets an EBUSY error when fetching body data from the API server.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-21 14:49:33 +08:00
Changwei Ge f47355d376 nydusify: fix a typo in its version message
It should be Revision

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-21 14:48:24 +08:00
Changwei Ge b8d57bda3d nydusify: beautify version print message of nydusify
Print more information on the git version, revision and golang version.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-18 09:27:18 +08:00
Peng Tao 4d2c95793b
Merge pull request #870 from sctb512/backport-fix-print-auth
storage: fix registry to avoid printing bearer auth
2022-11-17 18:02:37 +08:00
Bin Tang 4f3da68db0 storage: fix registry to avoid printing bearer auth
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-17 17:56:09 +08:00
imeoer 522791b4b3
Merge pull request #842 from sctb512/backport-fix-mirror-health-check
storage: remove unused code for refreshing registry tokens
2022-11-07 10:04:02 +08:00
Bin Tang ad8b9a7f96 storage: remove unused code for refreshing registry tokens
There is no need to change 'grant_type' when refreshing registry tokens,
because the URL with the cached 'grant_type' can get the token as well.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-06 22:12:08 +08:00
imeoer 2fd7070bf7
Merge pull request #839 from imeoer/stable/v2.1-cherry-pick
[backport to stable/v2.1] storage: add mirror health checking support
2022-11-04 21:35:35 +08:00
Bin Tang a514f66851 storage: fix syntax for mirror health checking
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-04 13:27:47 +00:00
Bin Tang 34d84cac91 storage: refresh token to avoid forwarding to P2P/dragonfly
Forwarding 401 responses to P2P/dragonfly will affect performance.
When there is a mirror with auth_through false, we refresh the token regularly
to avoid forwarding the 401 response to the mirror.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-04 13:27:42 +00:00
Bin Tang 41fefcdbae storage: add mirror health checking support
Currently, a mirror is set to unavailable once its failure count reaches failure_limit.
We added mirror health checking, which will recover an unavailable mirror server.
The failure_limit indicates the number of failures at which the mirror is set to unavailable.
The health_check_interval indicates the time interval at which an unavailable mirror may be recovered.
The ping_url is the endpoint used to check mirror server health.
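
A sketch of the configuration surface described above, as a serde struct (field names follow the commit messages; types and defaults are illustrative):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct MirrorConfig {
    host: String,
    /// Consecutive failures after which the mirror is marked unavailable.
    failure_limit: u32,
    /// Seconds between health checks that may recover an unavailable mirror.
    health_check_interval: u64,
    /// Endpoint probed by the health checker.
    ping_url: String,
    /// When false, authorization is not sent through the mirror: 401
    /// responses are not forwarded and the token is refreshed out of band.
    auth_through: bool,
}
```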

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-04 13:27:35 +00:00
imeoer 29a9af49a4
Merge pull request #838 from jiangliu/v2.1.1-pub2
prepare for publishing to crates.io
2022-11-04 21:25:17 +08:00
Jiang Liu 2496bc98f3 release: prepare for publishing to crates.io
Prepare for publishing to crates.io.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-04 21:03:40 +08:00
imeoer 36b4edb638
Merge pull request #834 from imeoer/stable-update-release
action: update release notes for download mirror
2022-11-04 10:26:49 +08:00
Yan Song dd0a0d8522 action: update release notes for download mirror
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-11-04 02:25:59 +00:00
imeoer 29af7a1267
Merge pull request #830 from changweige/backport-nydusify-drop-label
nydusify: drop label "nydus-blob-ids" from meta layer
2022-11-03 14:04:16 +08:00
Changwei Ge 07788809a2 nydusify: drop label "nydus-blob-ids" from meta layer
Images with more than 64 layers can't be pulled by containerd,
since the label exceeds the 4096-byte label size limit.

We should figure out another way to do GC in nydus-snapshotter.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-03 13:55:35 +08:00
imeoer 405c79de17
Merge pull request #828 from changweige/nydusify-backport-v2.1
Backport 3 patches for stable/v2.1
2022-11-03 09:54:25 +08:00
泰友 c8b21e3529 fix: missing oss file from nydusify packer
Reproduction:

1. Prepare the configuration file used for the pack command.
{
    "bucket_name": "XXX",
    "endpoint": "XXX",
    "access_key_id": "XXX",
    "access_key_secret": "XXX",
    "meta_prefix": "nydus_rund_sidecar_meta",
    "blob_prefix": "blobs"
}

2. Pack by nydusify:
sudo contrib/nydusify/cmd/nydusify pack \
--source-dir test \
--output-dir tmp \
--name ccx-test \
--backend-push \
--backend-config-file backend-config.json \
--backend-type oss \
--nydus-image target/debug/nydus-image

3. The blob file and meta file are missing in OSS.

Problem:

Forgot to call CompleteMultipartUpload after chunk uploading.

Fix:

Call CompleteMultipartUpload to complete the uploading.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2022-11-02 19:01:22 +08:00
泰友 67e0cc6f32 refactor: use the specified object prefix and meta prefix directly
issue: https://github.com/dragonflyoss/image-service/issues/608

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2022-11-02 19:01:17 +08:00
泰友 121a108ac9 fix: nydusify pack fail
Reproduction
1. Prepare configuration file used for pack command.
{
    "bucket_name": "XXX",
    "endpoint": "XXX",
    "access_key_id": "XXX",
    "access_key_secret": "XXX",
    "meta_prefix": "nydus_rund_sidecar_meta",
    "blob_prefix": "blobs"
}

2. Pack by nydusify
sudo contrib/nydusify/cmd/nydusify pack \
--source-dir test \
--output-dir tmp \
--name ccx-test \
--backend-push \
--backend-config-file backend-config.json \
--backend-type oss \
--nydus-image target/debug/nydus-image

3. Got error
FATA[2022-10-08T18:06:46+08:00] failed to push pack result to remote: failed to put metafile to remote: split file by part size: open tmp/tmp/ccx-test.meta: no such file or directory

Problem
The path of the bootstrap file to upload is wrong.

Fix
Use imageName as req.Meta, which is the bootstrap file to upload.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2022-11-02 18:58:03 +08:00
Jiang Liu e9a774c2ee
Merge pull request #805 from changweige/pytest-stop-v2
action/nydus-test: stop on the first test failure
2022-10-20 14:17:21 +08:00
Changwei Ge 7975d09dc3 action/nydus-test: stop on the first test failure
By default, pytest continues executing tests even if the current
test fails. It's hard to tell what happens in such an environment,
and it makes it hard to investigate the first failed case.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-10-20 13:35:01 +08:00
Jiang Liu 8c9c73b5b7
Merge pull request #804 from imeoer/bring-auto-version
release: update version on build automatically
2022-10-19 20:18:47 +08:00
Yan Song c7eaa2e858 release: update version on build automatically
We only need to create a git tag to release a version, without modifying
the version field in Cargo.toml and Cargo.lock.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-10-19 12:14:28 +00:00
Changwei Ge 6a0bef4ce6
Merge pull request #799 from changweige/backport-patches
Backport some patches for stable/v2.1
2022-10-18 15:18:25 +08:00
Yan Song 8a32d5b61e nydusify: fix overlay error for image with single layer
The nydusify check subcommand checks the consistency of the
OCI image and the nydus image by mounting them (overlayfs or nydusd).

For an OCI image with a single layer, we should use a bind
mount instead of overlay to mount the rootfs, otherwise an error
like the following is thrown:

```
wrong fs type, bad option, bad superblock on overlay, missing
codepage or helper program, or other error.
```

This commit also refines the code for image.Mount/image.Umount.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-10-18 14:48:00 +08:00
Bin Tang 5ac9831130 fix mirror's performance issue
In some scenarios (e.g. P2P/Dragonfly), sending an authorization request
to the mirror causes performance loss. We add the parameter
auth_through. When auth_through is false, nydusd directly sends
non-authorization requests to the original registry.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-10-18 14:47:39 +08:00
Xin Yin c75d3fbfcf storage: retry timeout chunks for fscache ondemand path
For the fscache on-demand path, if some requested chunks are set to pending
by prefetch threads and waiting on them times out, an EIO will be returned
to the container side.

Retry the timed-out chunks on the on-demand path to minimize EIOs.
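
A sketch of the retry-on-timeout shape (the fetch helper and types are hypothetical stand-ins for the storage internals):

```rust
use std::io::{Error, ErrorKind};
use std::time::Duration;

/// Retry chunks whose prefetch wait timed out instead of surfacing EIO
/// to the container.
fn read_chunk_with_retry(
    fetch: impl Fn() -> Result<Vec<u8>, Error>,
    retries: u32,
) -> Result<Vec<u8>, Error> {
    let mut attempt = 0;
    loop {
        match fetch() {
            Ok(data) => return Ok(data),
            Err(e) if e.kind() == ErrorKind::TimedOut && attempt < retries => {
                attempt += 1;
                // brief pause, then fetch the chunk again ourselves
                std::thread::sleep(Duration::from_millis(100));
            }
            Err(e) => return Err(e),
        }
    }
}
```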

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2022-10-18 14:47:14 +08:00
Jiang Liu 91d26745e2
Merge pull request #785 from sctb512/rafsv6-file-parent
nydus-image: fix inspect to get correct path of rafs v6 file
2022-10-17 10:05:32 +08:00
Jiang Liu a51a7185f1
Merge pull request #793 from changweige/fix-v5-prefetch-table
nydus-image/v5: prefetch table should contain inode numbers rather than indexes
2022-10-14 21:49:40 +08:00
Changwei Ge afaf75cfff nydus-image/v5: prefetch table should contain inode numbers rather than indexes

Nydusd performs prefetch by finding all inodes matching their
inode numbers.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-10-14 16:18:56 +08:00
Bin Tang c28585f06f fix inspect to get correct path of rafs v6 file
Because rafs v6 doesn't support get_parent, the prefetch and icheck
command of inspect will cause error. We fixed it by handling
get_file_name get_file_name and path_from_ino for rafs v6 files
separately. This commit does not affect the rafs core code.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-10-14 15:29:19 +08:00
Changwei Ge fd588c918f
Merge pull request #789 from changweige/port-2.1-enlarge-fuse-threads-num
nydusd: enlarge default fuse server threads
2022-10-11 10:37:39 +08:00
Changwei Ge 3b15cf50a5 nydusd: enlarge default fuse server threads
Now the default value is only 1, which affects performance.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-10-11 10:36:18 +08:00
523 changed files with 41067 additions and 74425 deletions

.github/CODEOWNERS (7 changed lines)

@@ -1,7 +0,0 @@
# A CODEOWNERS file uses a pattern that follows the same rules used in gitignore files.
# The pattern is followed by one or more GitHub usernames or team names using the
# standard @username or @org/team-name format. You can also refer to a user by an
# email address that has been added to their GitHub account, for example user@example.com
* @dragonflyoss/nydus-reviewers
.github @dragonflyoss/nydus-maintainers


@@ -1,44 +0,0 @@
## Additional Information
_The following information is very important in order to help us to help you. Omission of the following details may delay your support request or receive no attention at all._
### Version of nydus being used (nydusd --version)
<!-- Example:
Version: v2.2.0
Git Commit: a38f6b8d6257af90d59880265335dd55fab07668
Build Time: 2023-03-01T10:05:57.267573846Z
Profile: release
Rustc: rustc 1.66.1 (90743e729 2023-01-10)
-->
### Version of nydus-snapshotter being used (containerd-nydus-grpc --version)
<!-- Example:
Version: v0.5.1
Revision: a4b21d7e93481b713ed5c620694e77abac637abb
Go version: go1.18.6
Build time: 2023-01-28T06:05:42
-->
### Kernel information (uname -r)
_command result: uname -r_
### GNU/Linux Distribution, if applicable (cat /etc/os-release)
_command result: cat /etc/os-release_
### containerd-nydus-grpc command line used, if applicable (ps aux | grep containerd-nydus-grpc)
```
```
### client command line used, if applicable (such as: nerdctl, docker, kubectl, ctr)
```
```
### Screenshots (if applicable)
## Details about issue


@@ -1,21 +0,0 @@
## Relevant Issue (if applicable)
_If there are Issues related to this PullRequest, please list it._
## Details
_Please describe the details of PullRequest._
## Types of changes
_What types of changes does your PullRequest introduce? Put an `x` in all the boxes that apply:_
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation Update (if none of the other choices apply)
## Checklist
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have updated the documentation accordingly.
- [ ] I have added tests to cover my changes.

.github/codecov.yml (23 changed lines)

@@ -1,23 +0,0 @@
coverage:
status:
project:
default:
enabled: yes
target: auto # auto compares coverage to the previous base commit
# adjust accordingly based on how flaky your tests are
# this allows a 0.2% drop from the previous base commit coverage
threshold: 0.2%
patch: false
comment:
layout: "reach, diff, flags, files"
behavior: default
require_changes: true # if true: only post the comment if coverage changes
codecov:
require_ci_to_pass: false
notify:
wait_for_ci: true
# When modifying this file, please validate using
# curl -X POST --data-binary @codecov.yml https://codecov.io/validate


@@ -1,250 +0,0 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};
// Use custom error types for libraries
use thiserror::Error;
#[derive(Error, Debug)]
pub enum NydusError {
#[error("Invalid arguments: {0}")]
InvalidArguments(String),
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations
## Development Guidelines
### Storage Backend Development
- When implementing new storage backends:
- - Implement the `BlobBackend` trait
- - Support timeout, retry, and connection management
- - Add configuration in the backend config structure
- - Consider proxy support for high availability
- - Implement proper error handling and logging
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations
- Detect and prevent supply chain attacks
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading
let mut config = match config_path {
Some(path) => ConfigV2::from_file(path)?,
None => ConfigV2::default(),
};
// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);
// Event loop management
if DAEMON_CONTROLLER.is_active() {
DAEMON_CONTROLLER.run_loop();
}
// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.


@@ -1,40 +0,0 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
curl \
git \
libssl-dev \
pkg-config \
cmake \
gcc-riscv64-linux-gnu \
g++-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update && apt-get install -y \
gcc-14 \
g++-14 \
gcc-14-riscv64-linux-gnu \
g++-14-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]

.github/workflows/benchmark.yml

@@ -1,329 +0,0 @@
name: Benchmark
on:
schedule:
# Run at 03:00 clock UTC on Monday and Wednesday
- cron: "0 03 * * 1,3"
pull_request:
paths:
- '.github/workflows/benchmark.yml'
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
nydus-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
benchmark-description:
runs-on: ubuntu-latest
steps:
- name: Description
run: |
echo "## Benchmark Environment" > $GITHUB_STEP_SUMMARY
echo "| operating system | cpu | memory " >> $GITHUB_STEP_SUMMARY
echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY
echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY
benchmark-oci:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=oci
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-oci.json
export SNAPSHOTTER=overlayfs
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: smoke/${{ matrix.image }}-oci.json
benchmark-fsversion-v5:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-5
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v5.json
benchmark-fsversion-v6:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-6
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v6.json
benchmark-zran:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=zran
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: smoke/${{ matrix.image }}-zran.json
benchmark-result:
runs-on: ubuntu-latest
needs: [benchmark-oci, benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download benchmark-oci
uses: actions/download-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v5
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v6
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-zran
uses: actions/download-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: benchmark-result
- name: Benchmark Summary
run: |
case ${{matrix.image}} in
"wordpress")
echo "### workload: wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"node")
echo "### workload: node index.js; wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"python")
echo "### workload: python -c 'print("hello")'" > $GITHUB_STEP_SUMMARY
;;
"golang")
echo "### workload: go run main.go" > $GITHUB_STEP_SUMMARY
;;
"ruby")
echo "### workload: ruby -e "puts \"hello\""" > $GITHUB_STEP_SUMMARY
;;
"amazoncorretto")
echo "### workload: javac Main.java; java Main" > $GITHUB_STEP_SUMMARY
;;
esac
cd benchmark-result
metric_files=(
"${{ matrix.image }}-oci.json"
"${{ matrix.image }}-fsversion-v5.json"
"${{ matrix.image }}-fsversion-v6.json"
"${{ matrix.image }}-zran.json"
)
echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |convert-time(s)|" >> $GITHUB_STEP_SUMMARY
echo "|:-------------|:-----------:|:----------:|:---------------:|:--------------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for file in "${metric_files[@]}"; do
name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/')
data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' "$file" | \
awk '{ printf "%.2f | %.0f | %.2f | %.2f | %.2f", $1, $2, $3, $4, $5 }')
echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY
done

.github/workflows/ci.yml (new file, 85 lines)

@@ -0,0 +1,85 @@
name: CI
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 23:08 clock UTC
- cron: "8 23 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-ut:
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.17.x, 1.18.x]
env:
DOCKER: false
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: cache go mod
uses: actions/cache@v2
with:
path: /go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/docker-nydus-graphdriver/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
- name: test contrib UT
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.49.0
make contrib-test
smoke:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Cache Nydus
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-amd64
- name: Cache Docker Layers
uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Smoke Test
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.49.0
echo Cargo Home: $CARGO_HOME
echo Running User: $(whoami)
make docker-smoke
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
macos-ut:
runs-on: macos-11
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v2
- name: build and check
run: |
rustup component add rustfmt && rustup component add clippy
make
make ut
deny:
name: Cargo Deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v2
- uses: EmbarkStudios/cargo-deny-action@v1


@@ -1,4 +1,4 @@
name: Convert & Check Images
name: Convert Top Docker Hub Images
on:
schedule:
@@ -14,376 +14,73 @@ env:
FSCK_PATCH_PATH: misc/top_images/fsck.patch
jobs:
nydusify-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
nydus-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
fsck-erofs-build:
convert-images:
runs-on: ubuntu-latest
# don't run this action on forks
if: github.repository_owner == 'dragonflyoss'
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v2
- name: Install Nydus binaries
run: |
NYDUS_VERSION=$(curl --silent "https://api.github.com/repos/dragonflyoss/image-service/releases/latest" | grep -Po '"tag_name": "\K.*?(?=")')
wget https://github.com/dragonflyoss/image-service/releases/download/$NYDUS_VERSION/nydus-static-$NYDUS_VERSION-linux-amd64.tgz
tar xzf nydus-static-$NYDUS_VERSION-linux-amd64.tgz
sudo cp nydus-static/nydusify nydus-static/nydus-image /usr/local/bin/
sudo cp nydus-static/nydusd /usr/local/bin/nydusd
- name: Log in to the container registry
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build fsck.erofs
run: |
sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev
git clone https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
cd erofs-utils && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/
- name: Upload fsck.erofs
uses: actions/upload-artifact@v4
with:
name: fsck-erofs-artifact
path: |
/usr/local/bin/fsck.erofs
convert-zran:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check zran images
- name: Convert RAFS v5 images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-zran
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-oci-ref"
ghcr_repo=${{ env.REGISTRY }}/${{ env.ORGANIZATION }}
# push oci image to ghcr/local for zran reference
sudo docker pull $I:latest
sudo docker tag $I:latest $ghcr_repo/$I
sudo docker tag $I:latest localhost:5000/$I
sudo DOCKER_CONFIG=$HOME/.docker docker push $ghcr_repo/$I
sudo docker push localhost:5000/$I
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--oci-ref \
--source $ghcr_repo/$I \
--target $ghcr_repo/$I:nydus-nightly-oci-ref \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--oci-ref \
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref \
--platform linux/amd64,linux/arm64 \
--output-json convert-zran/${I}.json
# check zran image and referenced oci image
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check \
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
convert-native-v5:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Convert and check RAFS v5 images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v5
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v5"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v5 \
--fs-version 5 \
--platform linux/amd64,linux/arm64
--build-cache ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/nydus-build-cache:$I-v5 \
--fs-version 5
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5 \
--fs-version 5 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v5/${I}.json
--fs-version 5
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
convert-native-v6:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6 \
--fs-version 6 \
--platform linux/amd64,linux/arm64
--build-cache ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/nydus-build-cache:$I-v6 \
--fs-version 6
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6 \
--fs-version 6 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6/${I}.json
--fs-version 6
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo fsck.erofs -d1 output/nydus_bootstrap
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
convert-native-v6-batch:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 batch images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6-batch
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6-batch"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6-batch/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
convert-metric:
runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6, convert-native-v6-batch]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download Zran Metric
uses: actions/download-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
- name: Download V5 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
- name: Download V6 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
- name: Download V6 Batch Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
- name: Summary
run: |
echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY
echo "> Compare the size of OCI image and Nydus image."
echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|oci/nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:--------:|:------------:|:----------:|:----------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")")
zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")")
v5SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v5/${I}.json) / 1048576")")
v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")")
v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")")
v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")")
batchSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
batchTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|${batchSourceImageSize}/${batchTargetImageSize}|" >> $GITHUB_STEP_SUMMARY
done
echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY
echo "> Time elapsed to convert OCI image to Nydus image."
echo "|image name|nydus-zran|nydus-v5|nydus-v6|nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:---:|:--:|:-------:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")")
v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")")
v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")")
batchConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6-batch/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|${batchConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
done
- uses: geekyeggo/delete-artifact@v2
with:
name: '*'
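A minimal sketch of the size math used in the Summary step above: bytes are converted to MB with two decimals via `jq` + `bc` (1 MB = 1048576 bytes). The helper name and metric file below are hypothetical:

```
# size_mb <jq-field> <metric-json>: print the JSON field in MB with 2 decimals
size_mb() {
  printf "%0.2f" "$(bc <<< "scale=2; $(jq -r "$1" "$2") / 1048576")"
}
size_mb '.TargetImageSize' convert-zran/alpine.json  # hypothetical metric file
```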

.github/workflows/it.yml (new file)

@@ -0,0 +1,110 @@
name: Nydus Integration Test
on:
schedule:
# Do conversion every day at 00:03 UTC
- cron: "3 0 * * *"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch: [amd64]
fs_version: [5, 6]
steps:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
- name: Setup pytest
run: |
sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3
sudo python3 -m pip install --upgrade pip
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
- name: containerd runc and crictl
run: |
sudo wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz
sudo tar zxvf ./crictl-v1.17.0-linux-amd64.tar.gz -C /usr/local/bin
sudo wget https://github.com/containerd/containerd/releases/download/v1.4.3/containerd-1.4.3-linux-amd64.tar.gz
mkdir containerd
sudo tar -zxf ./containerd-1.4.3-linux-amd64.tar.gz -C ./containerd
sudo mv ./containerd/bin/* /usr/bin/
sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64 -O /usr/bin/runc
sudo chmod +x /usr/bin/runc
- name: Set up ossutils
run: |
sudo wget https://gosspublic.alicdn.com/ossutil/1.7.13/ossutil64 -O /usr/bin/ossutil64
sudo chmod +x /usr/bin/ossutil64
- uses: actions/checkout@v3
- name: Cache cargo
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.1 cross
rustup component add rustfmt clippy
make -e RUST_TARGET=$RUST_TARGET -e CARGO=cross static-release
make release -C contrib/nydus-backend-proxy/
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
pwd
ls -lh target/$RUST_TARGET/release
- name: Set up anchor file
env:
OSS_AK_ID: ${{ secrets.OSS_TEST_AK_ID }}
OSS_AK_SEC: ${{ secrets.OSS_TEST_AK_SECRET }}
FS_VERSION: ${{ matrix.fs_version }}
run: |
sudo mkdir -p /home/runner/nydus-test-workspace
sudo mkdir -p /home/runner/nydus-test-workspace/proxy_blobs
sudo tee /home/runner/work/image-service/image-service/contrib/nydus-test/anchor_conf.json > /dev/null << EOF
{
"workspace": "/home/runner/nydus-test-workspace",
"nydus_project": "/home/runner/work/image-service/image-service",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "localhost:5000",
"registry_namespace": "",
"registry_auth": "YOURAUTH==",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/home/runner/nydus-test-workspace/proxy_blobs"
},
"oss": {
"endpoint": "oss-cn-beijing.aliyuncs.com",
"ak_id": "$OSS_AK_ID",
"ak_secret": "$OSS_AK_SEC",
"bucket": "nydus-ci"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd",
"ossutil_bin": "/usr/bin/ossutil64"
},
"fs_version": "$FS_VERSION",
"logging_file": "stderr",
"target": "musl"
}
EOF
- name: run e2e tests
run: |
cd /home/runner/work/image-service/image-service/contrib/nydus-test
sudo mkdir -p /blobdir
sudo python3 nydus_test_config.py --dist fs_structure.yaml
sudo pytest -vs -x --durations=0 functional-test/test_api.py functional-test/test_nydus.py functional-test/test_layered_image.py
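One detail of the anchor file step above: the heredoc delimiter `EOF` is unquoted on purpose, so the shell expands `$OSS_AK_ID`, `$OSS_AK_SEC`, and `$FS_VERSION` while writing `anchor_conf.json`. A tiny sketch of the behavior:

```
# unquoted delimiter -> variables expand; quoting it ('EOF') would write the
# literal string "$FS_VERSION" instead of its value
export FS_VERSION=6
cat > /tmp/anchor_demo.json << EOF
{ "fs_version": "$FS_VERSION" }
EOF
cat /tmp/anchor_demo.json  # prints: { "fs_version": "6" }
```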

@@ -1,45 +0,0 @@
name: Miri Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
nydus-unit-test-with-miri:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Install Miri
run: |
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
- name: Unit Test with Miri
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log

@@ -1,4 +1,4 @@
name: Release
name: release
on:
push:
@@ -19,60 +19,28 @@ jobs:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build nydus-rs Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name : Build Nydus-rs RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.4 cross
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl .
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: nydus-artifacts-linux-${{ matrix.arch }}
path: |
@@ -82,33 +50,27 @@ jobs:
configs
nydus-macos:
runs-on: macos-13
runs-on: macos-11
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
rustup component add rustfmt clippy
make -e INSTALL_DIR_PREFIX=. install
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: nydus-artifacts-darwin-${{ matrix.arch }}
path: |
@@ -125,22 +87,31 @@ jobs:
env:
DOCKER: false
steps:
- uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
go-version: '1.18'
- name: cache go mod
uses: actions/cache@v2
with:
path: /go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/docker-nydus-graphdriver/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
- name: build contrib go components
run: |
make -e GOARCH=${{ matrix.arch }} contrib-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/docker-nydus-graphdriver/bin/nydus_graphdriver .
sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
- name: store-artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: nydus-artifacts-linux-${{ matrix.arch }}-contrib
name: nydus-artifacts-linux-${{ matrix.arch }}
path: |
ctr-remote
nydus_graphdriver
nydusify
nydus-overlayfs
containerd-nydus-grpc
@@ -154,41 +125,7 @@ jobs:
needs: [nydus-linux, contrib-linux]
steps:
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare release tarball
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="nydus-static-$tag-${{ matrix.os }}-${{ matrix.arch }}.tgz"
chmod +x nydus-static/*
tar cf - nydus-static | gzip > ${tarball}
echo "tarball=${tarball}" >> $GITHUB_ENV
shasum="$tarball.sha256sum"
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
# use a separate job for darwin because github action if: condition cannot handle && properly.
prepare-tarball-darwin:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64]
os: [darwin]
needs: [nydus-macos]
steps:
- name: download artifacts
uses: actions/download-artifact@v4
uses: actions/download-artifact@v2
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static
@@ -204,9 +141,42 @@ jobs:
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
name: nydus-release-tarball
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
# use a separate job for darwin because github action if: condition cannot handle && properly.
prepare-tarball-darwin:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64]
os: [darwin]
needs: [nydus-macos]
steps:
- name: download artifacts
uses: actions/download-artifact@v2
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static
- name: prepare release tarball
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="nydus-static-$tag-${{ matrix.os }}-${{ matrix.arch }}.tgz"
chmod +x nydus-static/*
tar cf - nydus-static | gzip > ${tarball}
echo "tarball=${tarball}" >> $GITHUB_ENV
shasum="$tarball.sha256sum"
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: nydus-release-tarball
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
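The prepare-tarball steps above write a `.sha256sum` file next to each tarball (`sha256sum $tarball > $shasum`), so a downloaded release can be verified locally; the tag and arch below are placeholders:

```
# sha256sum -c reads "<digest>  <filename>" lines and re-hashes the named file
sha256sum -c nydus-static-v2.1.0-linux-amd64.tgz.sha256sum
```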
@@ -216,15 +186,15 @@ jobs:
needs: [prepare-tarball-linux, prepare-tarball-darwin]
steps:
- name: download artifacts
uses: actions/download-artifact@v4
uses: actions/download-artifact@v2
with:
pattern: nydus-release-tarball-*
merge-multiple: true
name: nydus-release-tarball
path: nydus-tarball
- name: prepare release env
run: |
echo "tarballs<<EOF" >> $GITHUB_ENV
for I in $(ls nydus-tarball);do echo "nydus-tarball/${I}" >> $GITHUB_ENV; done
cnt=0
for I in $(ls nydus-tarball);do cnt=$((cnt+1)); echo "nydus-tarball/${I}" >> $GITHUB_ENV; done
echo "EOF" >> $GITHUB_ENV
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
echo "tag=${tag}" >> $GITHUB_ENV
@@ -239,87 +209,3 @@ jobs:
generate_release_notes: true
files: |
${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true
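A quick sanity check for the `hashes` output produced by the "Generate subject" step above, assuming it is run in the same shell where `hashes` was set:

```
# each decoded line should read "<sha256-digest> <artifact-name>", which is the
# subject list consumed by the SLSA provenance generator
echo "$hashes" | base64 -d
```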

@@ -1,386 +0,0 @@
name: Smoke Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
make -e DOCKER=false GOARCH=${{ matrix.arch }} contrib-release
- name: Upload Nydusify
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
contrib-lint:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- path: contrib/nydusify
- path: contrib/nydus-overlayfs
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache: false
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
nydus-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build Nydus Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
nydus-image
nydusd
nydusd-build-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
nydus-integration-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Docker Cache
uses: jpribyl/action-docker-layer-caching@v0.1.0
continue-on-error: true
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: |
target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Older Binaries
id: prepare-binaries
run: |
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
versions=(v0.1.0 ${NYDUS_STABLE_VERSION})
version_archs=(v0.1.0-x86_64 ${NYDUS_STABLE_VERSION}-linux-amd64)
for i in ${!versions[@]}; do
version=${versions[$i]}
version_arch=${version_archs[$i]}
wget -q https://github.com/dragonflyoss/nydus/releases/download/$version/nydus-static-$version_arch.tgz
sudo mkdir nydus-$version /usr/bin/nydus-$version
sudo tar xzf nydus-static-$version_arch.tgz -C nydus-$version
sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/
done
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Free Disk Space
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Integration Test
run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest
sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest
sudo bash misc/prepare.sh
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}"
versions=(v0.1.0 ${NYDUS_STABLE_VERSION} latest)
version_exports=(v0_1_0 ${NYDUS_STABLE_VERSION_EXPORT} latest)
for i in ${!version_exports[@]}; do
version=${versions[$i]}
version_export=${version_exports[$i]}
export NYDUS_BUILDER_$version_export=/usr/bin/nydus-$version/nydus-image
export NYDUS_NYDUSD_$version_export=/usr/bin/nydus-$version/nydusd
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
sudo -E make smoke-only
nydus-unit-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Unit Test
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Unit Test
run: |
make -e DOCKER=false contrib-test
- name: Upload contrib coverage file
uses: actions/upload-artifact@v4
with:
name: contrib-test-coverage-artifact
path: |
contrib/nydusify/coverage.txt
nydus-unit-test-coverage:
runs-on: ubuntu-latest
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Generate code coverage
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
- name: Upload nydus coverage file
uses: actions/upload-artifact@v4
with:
name: nydus-test-coverage-artifact
path: |
codecov.json
upload-coverage-to-codecov:
runs-on: ubuntu-latest
needs: [contrib-unit-test-coverage, nydus-unit-test-coverage]
steps:
- uses: actions/checkout@v4
- name: Download nydus coverage file
uses: actions/download-artifact@v4
with:
name: nydus-test-coverage-artifact
- name: Download contrib coverage file
uses: actions/download-artifact@v4
with:
name: contrib-test-coverage-artifact
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
files: ./codecov.json,./coverage.txt
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
fail_ci_if_error: true
nydus-cargo-deny:
name: cargo-deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: EmbarkStudios/cargo-deny-action@v2
performance-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- mode: fs-version-5
- mode: fs-version-6
- mode: zran
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh
- name: Performance Test
run: |
export PERFORMANCE_TEST_MODE=${{ matrix.mode }}
sudo -E make smoke-performance
takeover-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh takeover_test
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover
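One detail of the Integration Test step above: `export NYDUS_BUILDER_$version_export=...` builds the variable name dynamically, which works because bash expands `$version_export` before `export` parses the `name=value` pair. A minimal sketch:

```
# exports NYDUS_BUILDER_v0_1_0=/usr/bin/nydus-v0.1.0/nydus-image
version=v0.1.0
version_export=v0_1_0
export NYDUS_BUILDER_$version_export=/usr/bin/nydus-$version/nydus-image
env | grep NYDUS_BUILDER  # confirm the dynamically named variable exists
```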

@@ -1,31 +0,0 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore

@@ -1,14 +1,7 @@
**/target*
**/*.rs.bk
**/.vscode
/.vscode
.idea
.cargo
**/.pyc
__pycache__
.DS_Store
go.work.sum
dist/
nydus-static/
.goreleaser.yml
metadata.db
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a

@@ -1,16 +0,0 @@
## CNCF Dragonfly Nydus Adopters
A non-exhaustive list of Nydus adopters is provided below.
Please kindly share your experience about Nydus with us and help us to improve Nydus ❤️.
**_[Alibaba Cloud](https://www.alibabacloud.com)_** - Aliyun serverless image pull time drops from 20 seconds to 0.8 seconds.
**_[Ant Group](https://www.antgroup.com)_** - Serving large-scale clusters with millions of container creations each day.
**_[ByteDance](https://www.bytedance.com)_** - Serving container image acceleration in Technical Infrastructure of ByteDance.
**_[KuaiShou](https://www.kuaishou.com)_** - Starting to deploy millions of containers with Dragonfly and Nydus.
**_[Yue Miao](https://www.laiyuemiao.com)_** - Microservice startup time has been greatly improved, and network consumption reduced.
**_[CoreWeave](https://coreweave.com/)_** - Dramatically reduced the pull time of container images that embed machine learning models.

Cargo.lock (generated): file diff suppressed because it is too large.

@@ -6,125 +6,73 @@ description = "Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
exclude = ["contrib/", "smoke/", "tests/"]
edition = "2021"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
resolver = "2"
build = "build.rs"
[profile.release]
panic = "abort"
[[bin]]
name = "nydusctl"
path = "src/bin/nydusctl/main.rs"
[[bin]]
name = "nydusd"
path = "src/bin/nydusd/main.rs"
[[bin]]
name = "nydus-image"
path = "src/bin/nydus-image/main.rs"
[lib]
name = "nydus"
path = "src/lib.rs"
[dependencies]
anyhow = "1"
clap = { version = "4.0.18", features = ["derive", "cargo"] }
flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.12.0"
hex = "0.4.3"
hyper = "0.14.11"
hyperlocal = "0.8.0"
lazy_static = "1"
libc = "0.2"
rlimit = "0.8.3"
log = "0.4.8"
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
nix = "0.24.0"
rlimit = "0.9.0"
rusqlite = { version = "0.30.0", features = ["bundled"] }
libc = "0.2"
vmm-sys-util = "0.10.0"
clap = "2.33"
# pin regex to fix RUSTSEC-2022-0013
regex = "1.5.5"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.51"
tar = "0.4.40"
tokio = { version = "1.35.1", features = ["macros"] }
sha2 = "0.10.2"
time = { version = "0.3.14", features = ["serde-human-readable"] }
lazy_static = "1.4.0"
xattr = "0.2.2"
nix = "0.24.0"
anyhow = "1.0.35"
base64 = "0.13.0"
rust-fsm = "0.6.0"
vm-memory = { version = "0.9.0", features = ["backend-mmap"], optional = true }
openssl = { version = "0.10.48", features = ["vendored"] }
# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
openssl-src = { version = "=111.25.0" }
hyperlocal = "0.8.0"
tokio = { version = "1.18.2", features = ["macros"] }
hyper = "0.14.11"
# pin rand_core to bring in fix for https://rustsec.org/advisories/RUSTSEC-2021-0023
rand_core = "0.6.2"
tar = "0.4.38"
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
# Build static linked openssl library
openssl = { version = '0.10.72', features = ["vendored"] }
fuse-backend-rs = { version = "0.9" }
vhost = { version = "0.4.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.5.1", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { version = "0.4.0", optional = true }
nydus-api = { version = "0.4.0", path = "api", features = [
"error-backtrace",
"handler",
] }
nydus-builder = { version = "0.2.0", path = "builder" }
nydus-rafs = { version = "0.4.0", path = "rafs" }
nydus-service = { version = "0.4.0", path = "service", features = [
"block-device",
] }
nydus-storage = { version = "0.7.0", path = "storage", features = [
"prefetch-rate-limit",
] }
nydus-utils = { version = "0.5.0", path = "utils" }
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
vhost-user-backend = { version = "0.15.0", optional = true }
virtio-bindings = { version = "0.1", features = [
"virtio-v5_0_0",
], optional = true }
virtio-queue = { version = "0.12.0", optional = true }
vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
nydus-api = { version = "0.1.0", path = "api" }
nydus-app = { version = "0.3.0", path = "app" }
nydus-error = { version = "0.2.1", path = "error" }
nydus-rafs = { version = "0.1.0", path = "rafs", features = ["backend-registry", "backend-oss"] }
nydus-storage = { version = "0.5.0", path = "storage" }
nydus-utils = { version = "0.3.0", path = "utils" }
nydus-blobfs = { version = "0.1.0", path = "blobfs", features = ["virtiofs"], optional = true }
[dev-dependencies]
xattr = "1.0.1"
vmm-sys-util = "0.12.1"
sendfd = "0.3.3"
env_logger = "0.8.2"
rand = "0.8.5"
[features]
default = [
"fuse-backend-rs/fusedev",
"backend-registry",
"backend-oss",
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
"vhost",
"vhost-user-backend",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vmm-sys-util",
]
block-nbd = ["nydus-service/block-nbd"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = [
"nydus-storage/backend-localdisk",
"nydus-storage/backend-localdisk-gpt",
]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
dedup = ["nydus-storage/dedup"]
default = ["fuse-backend-rs/fusedev"]
virtiofs = ["fuse-backend-rs/vhost-user-fs", "vm-memory", "vhost", "vhost-user-backend", "virtio-queue", "virtio-bindings"]
[workspace]
members = [
"api",
"builder",
"clib",
"rafs",
"storage",
"service",
"upgrade",
"utils",
]
members = ["api", "app", "error", "rafs", "storage", "utils", "blobfs"]

@@ -1,2 +0,0 @@
[build]
pre-build = ["apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y cmake"]

@@ -1,15 +0,0 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->

Makefile

@@ -1,4 +1,4 @@
all: release
all: build
all-build: build contrib-build
@@ -15,10 +15,9 @@ INSTALL_DIR_PREFIX ?= "/usr/local/bin"
DOCKER ?= "true"
CARGO ?= $(shell which cargo)
RUSTUP ?= $(shell which rustup)
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
SUDO = $(shell which sudo)
CARGO_COMMON ?=
CARGO_COMMON ?=
EXCLUDE_PACKAGES =
UNAME_M := $(shell uname -m)
@@ -44,6 +43,8 @@ endif
endif
RUST_TARGET_STATIC ?= $(STATIC_TARGET)
CTR-REMOTE_PATH = contrib/ctr-remote
DOCKER-GRAPHDRIVER_PATH = contrib/docker-nydus-graphdriver
NYDUSIFY_PATH = contrib/nydusify
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
@@ -51,6 +52,12 @@ current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
env_go_path := $(shell go env GOPATH 2> /dev/null)
go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
# Set the env DIND_CACHE_DIR to specify a cache directory for
# docker-in-docker container, used to cache data for docker pull,
# then mitigate the impact of docker hub rate limit, for example:
# env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
# Functions
# Func: build golang target in docker
@@ -60,13 +67,13 @@ go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
define build_golang
echo "Building target $@ by invoking: $(2)"
if [ $(DOCKER) = "true" ]; then \
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.21 $(2) ;\
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.18 $(2) ;\
else \
$(2) -C $(1); \
fi
endef
.PHONY: .release_version .format .musl_target .clean_libz_sys \
.PHONY: .release_version .format .musl_target \
all all-build all-release all-static-release build release static-release
.release_version:
@@ -78,20 +85,15 @@ endef
.musl_target:
$(eval CARGO_BUILD_FLAGS += --target ${RUST_TARGET_STATIC})
# Workaround to clean up stale cache for libz-sys
.clean_libz_sys:
@${CARGO} clean --target ${RUST_TARGET_STATIC} -p libz-sys
@${CARGO} clean --target ${RUST_TARGET_STATIC} --release -p libz-sys
# Targets that are exposed to developers and users.
build: .format
${CARGO} build $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# Cargo will skip checking if it is already checked
${CARGO} clippy --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --bins --tests -- -Dwarnings --allow clippy::unnecessary_cast --allow clippy::needless_borrow
${CARGO} clippy $(CARGO_COMMON) --workspace $(EXCLUDE_PACKAGES) --bins --tests -- -Dwarnings
release: .format .release_version build
static-release: .clean_libz_sys .musl_target .format .release_version build
static-release: .musl_target .format .release_version build
clean:
${CARGO} clean
@@ -102,57 +104,59 @@ install: release
@sudo install -m 755 target/release/nydus-image $(INSTALL_DIR_PREFIX)/nydus-image
@sudo install -m 755 target/release/nydusctl $(INSTALL_DIR_PREFIX)/nydusctl
# unit test
ut: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --no-fail-fast --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
ut:
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) -- --skip integration --nocapture --test-threads=8
# you need to install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
ut-nextest: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
smoke: ut
$(SUDO) TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) $(CARGO) test --test '*' $(CARGO_COMMON) -- --nocapture --test-threads=8
# install miri first from https://github.com/rust-lang/miri/
miri-ut-nextest: .release_version
MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
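For running the same tests outside the Makefile, a rough local equivalent of the `ut-nextest` target above, assuming `cargo-nextest` is already installed:

```
# same filter expression: all tests matching "test" except integration tests
cargo nextest run --no-fail-fast -E 'test(test) - test(integration)' --workspace
```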
docker-nydus-smoke:
docker build -t nydus-smoke --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/nydus-smoke
docker run --rm --privileged ${CARGO_BUILD_GEARS} \
-e TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) \
-v ~/.cargo:/root/.cargo \
-v $(TEST_WORKDIR_PREFIX) \
-v ${current_dir}:/nydus-rs \
nydus-smoke
# install test dependencies
pre-coverage:
${CARGO} +stable install cargo-llvm-cov --locked
${RUSTUP} component add llvm-tools-preview
# TODO: Nydusify smoke will remain time-consuming for a while since it relies on musl nydusd and nydus-image.
# So musl compilation must be involved.
# And docker-in-docker deployment involves image building?
docker-nydusify-smoke: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestSmoke
# print unit test coverage to console
coverage: pre-coverage
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
docker-nydusify-image-test: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestDockerHubImage
# write unit test coverage to codecov.json, used for GitHub CI
coverage-codecov:
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
# Run integration smoke test in docker-in-docker container. It requires some special settings,
# refer to `misc/example/README.md` for details.
docker-smoke: docker-nydus-smoke docker-nydusify-smoke
smoke-only:
make -C smoke test
contrib-build: nydusify ctr-remote nydus-overlayfs docker-nydus-graphdriver
smoke-performance:
make -C smoke test-performance
contrib-release: nydusify-release ctr-remote-release \
nydus-overlayfs-release docker-nydus-graphdriver-release
smoke-benchmark:
make -C smoke test-benchmark
contrib-test: nydusify-test ctr-remote-test \
nydus-overlayfs-test docker-nydus-graphdriver-test
smoke-takeover:
make -C smoke test-takeover
smoke: release smoke-only
contrib-build: nydusify nydus-overlayfs
contrib-release: nydusify-release nydus-overlayfs-release
contrib-test: nydusify-test nydus-overlayfs-test
contrib-lint: nydusify-lint nydus-overlayfs-lint
contrib-clean: nydusify-clean nydus-overlayfs-clean
contrib-clean: nydusify-clean ctr-remote-clean \
nydus-overlayfs-clean docker-nydus-graphdriver-clean
contrib-install:
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
@sudo install -m 755 contrib/ctr-remote/bin/ctr-remote $(INSTALL_DIR_PREFIX)/ctr-remote
@sudo install -m 755 contrib/docker-nydus-graphdriver/bin/nydus-graphdriver $(INSTALL_DIR_PREFIX)/nydus-graphdriver
@sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify
@@ -168,8 +172,17 @@ nydusify-test:
nydusify-clean:
$(call build_golang,${NYDUSIFY_PATH},make clean)
nydusify-lint:
$(call build_golang,${NYDUSIFY_PATH},make lint)
ctr-remote:
$(call build_golang,${CTR-REMOTE_PATH},make)
ctr-remote-release:
$(call build_golang,${CTR-REMOTE_PATH},make release)
ctr-remote-test:
$(call build_golang,${CTR-REMOTE_PATH},make test)
ctr-remote-clean:
$(call build_golang,${CTR-REMOTE_PATH},make clean)
nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
@@ -183,9 +196,29 @@ nydus-overlayfs-test:
nydus-overlayfs-clean:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean)
nydus-overlayfs-lint:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make lint)
docker-nydus-graphdriver:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make)
docker-nydus-graphdriver-release:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make release)
docker-nydus-graphdriver-test:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make test)
docker-nydus-graphdriver-clean:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make clean)
docker-static:
docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static
docker-example: all-static-release
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydusd misc/example
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydus-image misc/example
cp contrib/nydusify/cmd/nydusify misc/example
docker build -t nydus-rs-example misc/example
@cid=$(shell docker run --rm -t -d --privileged $(dind_cache_mount) nydus-rs-example)
@docker exec $$cid /run.sh
@EXIT_CODE=$$?
@docker rm -f $$cid
@exit $$EXIT_CODE
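Two hedged usage examples for the targets above (the cache directory path is illustrative):

```
# build the Go contrib tools on the host, skipping the docker build container
make -e DOCKER=false contrib-release

# run the nydusify smoke with a persistent docker-in-docker cache to mitigate
# docker hub rate limits, per the DIND_CACHE_DIR comment earlier in this Makefile
env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
```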

README.md

@@ -1,82 +1,70 @@
[**[⬇️ Download]**](https://github.com/dragonflyoss/nydus/releases)
[**[📖 Website]**](https://nydus.dev/)
[**[☸ Quick Start (Kubernetes)**]](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md)
[**[🤓 Quick Start (nerdctl)**]](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md)
[**[❓ FAQs & Troubleshooting]**](https://github.com/dragonflyoss/nydus/wiki/FAQ)
# Nydus: Dragonfly Container Image Service
<p><img src="misc/logo.svg" width="170"></p>
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/nydus?style=flat)](https://github.com/dragonflyoss/nydus/releases)
[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
![CI](https://github.com/dragonflyoss/image-service/actions/workflows/ci.yml/badge.svg?event=schedule)
![Image Conversion](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml/badge.svg?event=schedule)
![Release Test Daily](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml/badge.svg?event=schedule)
[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
[![Release Test Daily](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml?query=event%3Aschedule)
[![Benchmark](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml?query=event%3Aschedule)
[![Coverage](https://codecov.io/gh/dragonflyoss/nydus/branch/master/graph/badge.svg)](https://codecov.io/gh/dragonflyoss/nydus)
The nydus project implements a content-addressable filesystem on top of a RAFS format that improves the current OCI image specification, in terms of container launching speed, image space, and network bandwidth efficiency, as well as data integrity.
## Introduction
Nydus implements a content-addressable file system on the RAFS format, which enhances the current OCI image specification by improving container launch speed, image space and network bandwidth efficiency, and data integrity.
The following benchmarking results demonstrate that Nydus images significantly outperform OCI images in terms of container cold startup elapsed time on containerd, particularly as the OCI image size increases.
The following benchmarking result shows the performance improvement compared with the OCI image for the container cold startup elapsed time on containerd. As the OCI image size increases, the container startup time when using a Nydus image remains very short.
![Container Cold Startup](./misc/perf.jpg)
## Principles
Nydus' key features include:
***Provide Fast, Secure And Easy Access to Data Distribution***
- Container images can be downloaded on demand in chunks for lazy pulling to boost container startup
- Chunk-based content-addressable data de-duplication to minimize storage, transmission and memory footprints
- Merged filesystem tree in order to remove all intermediate layers as an option
- in-kernel EROFS or FUSE filesystem together with overlayfs to provide full POSIX compatibility
- E2E image data integrity check. So security issues like "Supply Chain Attack" can be avoided and detected at runtime
- Compatible with the OCI artifacts spec and distribution spec, so nydus image can be stored in a regular container registry
- Native [eStargz](https://github.com/containerd/stargz-snapshotter) image support with remote snapshotter plugin `nydus-snapshotter` for containerd runtime.
- Various container image storage backends are supported. For example, Registry, NAS, Aliyun/OSS.
- Integrated with CNCF incubating project Dragonfly to distribute container images in P2P fashion and mitigate the pressure on container registries
- Capable of prefetching data blocks before user I/O hits them, thus reducing read latency
- Records file access patterns at runtime, gathering access traces/logs by which abnormal user behaviors are easily caught
- Access trace based prefetch table
- User I/O amplification to reduce the number of small requests to the storage backend.
- **Performance**: Second-level container startup speed, millisecond-level function computation code package loading speed.
- **Low Cost**: Written in the memory-safe language `Rust`; numerous optimizations help reduce memory, CPU, and network consumption.
- **Flexible**: Supports container runtimes such as [runC](https://github.com/opencontainers/runc) and [Kata](https://github.com/kata-containers), and provides [Confidential Containers](https://github.com/confidential-containers) and vulnerability scanning capabilities
- **Security**: End-to-end data integrity check; supply chain attacks can be detected and avoided at runtime.
Currently Nydus includes following tools:
## Key features
| Tool | Description |
| ---------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [nydusd](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusd.md) | Nydus user-space daemon, it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Convert a single layer of OCI format container image into a nydus format container image generating meta part file and data part file respectively |
| [nydusify](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusify.md) | It pulls an OCI image down and unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), query daemon's working status/metrics and configure it |
| [ctr-remote](https://github.com/dragonflyoss/image-service/tree/master/contrib/ctr-remote) | An enhanced `containerd` CLI tool enable nydus support with `containerd` ctr |
| [nydus-docker-graphdriver](https://github.com/dragonflyoss/image-service/tree/master/contrib/docker-nydus-graphdriver) | Works as a `docker` remote graph driver to control how images and containers are stored and managed |
| [nydus-overlayfs](https://github.com/dragonflyoss/image-service/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount with tweaking mount options a bit. So nydus prerequisites can be passed to vm-based runtime |
| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve local directory as a blob backend for nydusd |
- **On-demand Load**: Container images/packages are downloaded on-demand in chunk unit to boost startup.
- **Chunk Deduplication**: Chunk level data de-duplication cross-layer or cross-image to reduce storage, transport, and memory cost.
- **Compatible with Ecosystem**: Storage backend support with Registry, OSS, NAS, Shared Disk, and [P2P service](https://d7y.io/). Compatible with the [OCI images](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-zran.md), and provide native [eStargz images](https://github.com/containerd/stargz-snapshotter) support.
- **Data Analyzability**: Record accesses, data layout optimization, prefetch, IO amplification, abnormal behavior detection.
- **POSIX Compatibility**: In-Kernel EROFS or FUSE filesystems together with overlayfs provide full POSIX compatibility
- **I/O optimization**: Use merged filesystem tree, data prefetching and User I/O amplification to reduce read latency and improve user I/O performance.
## Ecosystem
### Nydus tools
| Tool | Description |
| ---------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [nydusd](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusd.md) | Nydus user-space daemon, it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Convert a single layer of OCI format container image into a nydus format container image generating meta part file and data part file respectively |
| [nydusify](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusify.md) | It pulls an OCI image down and unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), query daemon's working status/metrics and configure it |
| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |
| [nydus-overlayfs](https://github.com/dragonflyoss/nydus/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount with tweaking mount options a bit. So nydus prerequisites can be passed to vm-based runtime |
| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve local directory as a blob backend for nydusd |
### Supported platforms
Currently Nydus supports the following platforms in the container ecosystem:
| Type | Platform | Description | Status |
| ------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ |
| Storage | Registry/OSS/S3/NAS | Supports OCI-compatible distribution implementations such as Docker Hub, Harbor, GitHub GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage services | ✅ |
| Storage | Registry/OSS/NAS | Supports OCI-compatible distribution implementations such as Docker Hub, Harbor, GitHub GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage services | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support acceleration image conversion based on accelerators such as Nydus and eStargz | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run Nydus images (requires nydus-snapshotter) | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
| Runtime | [Kubernetes](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md) | Run Nydus image using CRI interface | |
| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Nydus Snapshotter, a containerd remote plugin to run Nydus image | ✅ |
| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/moby/buildkit/pull/2581) | Provides the ability to build and export Nydus images directly from Dockerfile | 🚧 |
| Runtime | Kubernetes | Run Nydus image using CRI interface | ✅ |
| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Run Nydus image in containerd with nydus-snapshotter | ✅ |
| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
| Runtime | [Docker](https://github.com/dragonflyoss/image-service/tree/master/contrib/docker-nydus-graphdriver) | Run Nydus image in Docker container with graphdriver plugin | ✅ |
| Runtime | [Nerdctl](https://github.com/containerd/nerdctl) | Run Nydus image with `nerdctl --snapshotter nydus run ...` | ✅ |
| Runtime | [KataContainers](https://github.com/kata-containers/kata-containers/blob/main/docs/design/kata-nydus-design.md) | Run Nydus image in KataContainers as a native solution | ✅ |
| Runtime | [EROFS](https://www.kernel.org/doc/html/latest/filesystems/erofs.html) | Run Nydus image directly with in-kernel EROFS for even greater performance improvement | ✅ |
## Build
To try nydus image service:
1. Convert an original OCI image to a nydus image and store it somewhere such as Docker/Registry, NAS, or Aliyun/OSS. This can be done directly with `nydusify`. Normal users don't have to get involved with `nydus-image`.
2. Get `nydus-snapshotter` (`containerd-nydus-grpc`) installed locally and configured properly, or install the `nydus-docker-graphdriver` plugin.
3. Operate containers with the usual tools, e.g. `docker`, `nerdctl`, `crictl` and `ctr`, as in the sketch below.
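As a rough end-to-end sketch, with a registry and `nydus-snapshotter` already configured (registry and image names are placeholders):
```shell
# 1. convert an OCI image to the nydus format and push it
nydusify convert \
  --source registry.example.com/ubuntu:latest \
  --target registry.example.com/ubuntu:latest-nydus

# 2. lazily run it through containerd + nydus-snapshotter
nerdctl --snapshotter nydus run -it --rm registry.example.com/ubuntu:latest-nydus bash
```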
### Build Binary
```shell
# build debug binary
make
# build release binary
make release
# build static binary with docker
make docker-static
```
### Build Nydus Image
Convert OCIv1 image to Nydus image: [Nydusify](./docs/nydusify.md), [Acceld](https://github.com/goharbor/acceleration-service) or [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md#build-nydus-image-using-nerdctl-image-convert).
Build Nydus image from Dockerfile directly: [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md).
Build Nydus layer from various sources: [Nydus Image Builder](./docs/nydus-image.md).
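As a sketch of the last case, `nydus-image` can turn a plain directory into a nydus layer; the paths below are placeholders, and the full option list lives in the linked builder document:
```shell
# build a nydus bootstrap (metadata) and blob (data) from a local rootfs directory
nydus-image create \
  --bootstrap ./bootstrap \
  --blob-dir ./blobs \
  /path/to/rootfs
```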
#### Image prefetch optimization
To further reduce container startup time, a nydus image with a prefetch list can be built using the NRI plugin (containerd >=1.7): [Container Image Optimizer](https://github.com/containerd/nydus-snapshotter/blob/main/docs/optimize_nydus_image.md)
## Run
### Quick Start
For more details on how to lazily start a container with `nydus-snapshotter` and a nydus image on Kubernetes nodes, or locally with `nerdctl` rather than CRI, please refer to [Nydus Setup](./docs/containerd-env-setup.md).
### Run Nydus Snapshotter
Nydus-snapshotter is a non-core sub-project of containerd.
Check out its code and tutorial from [Nydus-snapshotter repository](https://github.com/containerd/nydus-snapshotter).
It works as a `containerd` remote snapshotter to help set up the container rootfs with nydus images, handling the nydus image format when necessary. When running without nydus images, it is identical to containerd's built-in overlayfs snapshotter.
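As a minimal sketch, wiring the snapshotter into containerd amounts to registering it as a proxy plugin; the config path and socket address below are assumptions and must match the actual deployment:
```shell
# register nydus-snapshotter as a containerd proxy plugin (paths are assumptions)
cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[proxy_plugins]
  [proxy_plugins.nydus]
    type = "snapshot"
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
EOF
sudo systemctl restart containerd
```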
### Run Nydusd Daemon
Normally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` when a container rootfs is prepared.
Run Nydusd Daemon to serve Nydus image: [Nydusd](./docs/nydusd.md).
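For debugging, a manual FUSE-mode invocation might look like the following sketch; all paths are placeholders, and the full option list lives in the linked document:
```shell
# serve a nydus image over FUSE (all paths are placeholders)
sudo nydusd \
  --config /etc/nydus/nydusd-config.json \
  --bootstrap /path/to/bootstrap \
  --mountpoint /mnt/nydus \
  --log-level info
```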
### Run Nydus with in-kernel EROFS filesystem
In-kernel EROFS has been fully compatible with RAFS v6 image format since Linux 5.16. In other words, uncompressed RAFS v6 images can be mounted over block devices since then.
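For example, assuming an uncompressed RAFS v6 image packed into a raw disk image file, it can be loop-mounted as plain EROFS (a sketch; the file name is a placeholder):
```shell
# mount an uncompressed RAFS v6 image with the in-kernel EROFS driver (Linux >= 5.16)
sudo mount -t erofs -o loop ./rafs-v6-image.raw /mnt/erofs
```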
@ -123,52 +105,56 @@ Since [Linux 5.19](https://lwn.net/Articles/896140), EROFS has added a new file-based caching (fscache) backend.
Guide to running Nydus with fscache: [Nydus-fscache](./docs/nydus-fscache.md)
### Run Nydus with Dragonfly P2P system
Nydus is deeply integrated with the [Dragonfly](https://d7y.io/) P2P system, which can greatly reduce network latency and the single-point pressure on the registry server. Benchmarking results in production environments demonstrate that using Dragonfly can reduce network latency by more than 80%. To understand the performance results and integration steps, please refer to the [nydus integration](https://d7y.io/docs/setup/integration/nydus).
If you want to deploy Dragonfly and Nydus at the same time through Helm, please refer to the **[Quick Start](https://github.com/dragonflyoss/helm-charts/blob/main/INSTALL.md)**.
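A sketch of that Helm-based deployment, following the dragonflyoss helm-charts repository (the nydus-specific values are described in the linked Quick Start):
```shell
helm repo add dragonfly https://dragonflyoss.github.io/helm-charts/
helm repo update
helm install --create-namespace --namespace dragonfly-system dragonfly dragonfly/dragonfly
```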
### Run OCI image directly with Nydus
Nydus is able to generate a tiny artifact called a `nydus zran` from an existing OCI image in a short time. This artifact can be used to accelerate container boot time without the need for a full image conversion. For more information, please see the [documentation](./docs/nydus-zran.md).
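A sketch of generating such an artifact with `nydusify`; image names are placeholders, and the `--oci-ref` flag is detailed in the linked documentation:
```shell
# build a nydus zran artifact that references, rather than converts, the OCI image
nydusify convert \
  --source registry.example.com/ubuntu:latest \
  --target registry.example.com/ubuntu:latest-nydus-zran \
  --oci-ref
```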
### Run with Docker(Moby)
Nydus provides a variety of methods to support running on Docker (Moby); please refer to [Nydus Setup for Docker(Moby) Environment](./docs/docker-env-setup.md).
### Run with macOS
Nydus can also run with macfuse (a.k.a. osxfuse). For more details please read [nydus with macOS](./docs/nydus_with_macos.md).
### Run eStargz image (with lazy pulling)
The containerd remote snapshotter plugin [nydus-snapshotter](https://github.com/containerd/nydus-snapshotter) can be used to run nydus images, or to run [eStargz](https://github.com/containerd/stargz-snapshotter) images directly by appending the `--enable-stargz` command line option.
In the future, `zstd::chunked` can work in this way as well.
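For instance, a sketch of starting the snapshotter with eStargz support turned on; `--config-path` and its value are assumptions, only `--enable-stargz` comes from the text above:
```shell
# start nydus-snapshotter with eStargz lazy pulling enabled
containerd-nydus-grpc \
  --config-path /etc/nydus/config.json \
  --enable-stargz
```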
### Run Nydus Service
To use the key features of nydus natively in your project without deliberately preparing and invoking `nydusd`, [nydus-service](./service/README.md) helps to reuse the core services of nydus.
## Documentation
Please visit the [**Wiki**](https://github.com/dragonflyoss/nydus/wiki) or [**docs**](./docs). Here are some topics you may be interested in:
- [A Nydus Tutorial for Beginners](./docs/tutorial.md)
- [Nydus Design Doc](./docs/nydus-design.md)
- Our talk at Open Infra Summit 2020: [Toward Next Generation Container Image](https://drive.google.com/file/d/1LRfLUkNxShxxWU7SKjc_50U0N9ZnGIdV/view)
- [EROFS, What Are We Doing Now For Containers?](https://static.sched.com/hosted_files/kccncosschn21/fd/EROFS_What_Are_We_Doing_Now_For_Containers.pdf)
- [The Evolution of the Nydus Image Acceleration](https://d7y.io/blog/2022/06/06/evolution-of-nydus/) ([Video](https://youtu.be/yr6CB1JN1xg))
There is also a very nice [Devin](https://devin.ai/)-generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
## Community
Nydus aims to form a **vendor-neutral, open-source** image distribution solution for all communities.
Questions, bug reports, technical discussion, feature requests, and contributions are always welcome!
We're always pleased to hear about your use cases.
Feel free to reach or join us via Slack, Twitter, or Dingtalk:
- **Slack:** [Nydus Workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)
- **Twitter:** [@dragonfly_oss](https://twitter.com/dragonfly_oss)
- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11), or scan the QR code below
<img src="./misc/dingtalk.jpg" width="250" height="300"/>
The bi-weekly Nydus technical community meeting is held on Wednesdays at 06:00 UTC (14:00 Beijing/Shanghai), starting from Aug 10, 2022.
For more details, please see our [HackMD](https://hackmd.io/@Nydus/Bk8u2X0p9) page.


@ -1,31 +1,25 @@
[package]
name = "nydus-api"
version = "0.4.0"
version = "0.1.2"
description = "APIs for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
[dependencies]
dbs-uhttp = { version = "0.3.0" }
http = "0.2.1"
lazy_static = "1.4.0"
libc = "0.2"
log = "0.4.8"
mio = { version = "0.8", features = ["os-poll", "os-ext"]}
serde = { version = "1.0.110", features = ["rc"] }
serde_derive = "1.0.110"
serde_json = "1.0.53"
toml = "0.5"
url = "2.1.1"
vmm-sys-util = "0.10"
thiserror = "1.0.30"
backtrace = { version = "0.3", optional = true }
dbs-uhttp = { version = "0.3.0", optional = true }
http = { version = "0.2.1", optional = true }
lazy_static = { version = "1.4.0", optional = true }
mio = { version = "0.8", features = ["os-poll", "os-ext"], optional = true }
serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
url = { version = "2.1.1", optional = true }
[dev-dependencies]
vmm-sys-util = { version = "0.12.1" }
[features]
error-backtrace = ["backtrace"]
handler = ["dbs-uhttp", "http", "lazy_static", "mio", "url"]
nydus-error = { version = "0.2", path = "../error" }
nydus-utils = { version = "0.3", path = "../utils" }


@ -348,8 +348,10 @@ components:
description: usually the metadata source
type: string
prefetch_files:
description: local file path which recorded files/directories to be prefetched and separated by newlines
type: string
description: files that need to be prefetched
type: array
items:
type: string
config:
description: inline request, used to configure the fs backend.
type: string
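For illustration, a hypothetical mount request against nydusd's v1 API using the new array form of `prefetch_files`; the socket path and the `fs_type` value are assumptions, while the field names follow the schema above:
```shell
# mount a RAFS instance and ask nydusd to prefetch a few paths (sketch)
curl --unix-socket /var/run/nydusd.sock \
  -X POST "http://unix/api/v1/mount?mountpoint=/sub" \
  -H "Content-Type: application/json" \
  -d '{
        "source": "/path/to/bootstrap",
        "fs_type": "rafs",
        "config": "<inline fs backend config>",
        "prefetch_files": ["/usr/bin/bash", "/etc"]
      }'
```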


@ -44,7 +44,7 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
/blobs:
/blob_objects:
summary: Manage cached blob objects
####################################################################
get:
@ -96,21 +96,6 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
operationId: deleteBlobFile
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/BlobId"
responses:
"204":
description: "Successfully deleted the blob file!"
"500":
description: "Can't delete the blob file!"
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
################################################################
components:
schemas:

File diff suppressed because it is too large


@ -1,252 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fmt::Debug;
/// Display error messages with line number, file path and optional backtrace.
pub fn make_error(
err: std::io::Error,
_raw: impl Debug,
_file: &str,
_line: u32,
) -> std::io::Error {
#[cfg(feature = "error-backtrace")]
{
if let Ok(val) = std::env::var("RUST_BACKTRACE") {
if val.trim() != "0" {
error!("Stack:\n{:?}", backtrace::Backtrace::new());
error!("Error:\n\t{:?}\n\tat {}:{}", _raw, _file, _line);
return err;
}
}
error!(
"Error:\n\t{:?}\n\tat {}:{}\n\tnote: enable `RUST_BACKTRACE=1` env to display a backtrace",
_raw, _file, _line
);
}
err
}
/// Define error macro like `x!()` or `x!(err)`.
/// Note: The `x!()` macro will convert any origin error (Os, Simple, Custom) to Custom error.
macro_rules! define_error_macro {
($fn:ident, $err:expr) => {
#[macro_export]
macro_rules! $fn {
() => {
std::io::Error::new($err.kind(), format!("{}: {}:{}", $err, file!(), line!()))
};
($raw:expr) => {
$crate::error::make_error($err, &$raw, file!(), line!())
};
}
};
}
/// Define error macro for libc error codes
macro_rules! define_libc_error_macro {
($fn:ident, $code:ident) => {
define_error_macro!($fn, std::io::Error::from_raw_os_error(libc::$code));
};
}
// TODO: Add format string support
// Add more libc error macro here if necessary
define_libc_error_macro!(einval, EINVAL);
define_libc_error_macro!(enoent, ENOENT);
define_libc_error_macro!(ebadf, EBADF);
define_libc_error_macro!(eacces, EACCES);
define_libc_error_macro!(enotdir, ENOTDIR);
define_libc_error_macro!(eisdir, EISDIR);
define_libc_error_macro!(ealready, EALREADY);
define_libc_error_macro!(enosys, ENOSYS);
define_libc_error_macro!(epipe, EPIPE);
define_libc_error_macro!(eio, EIO);
/// Return EINVAL error with formatted error message.
#[macro_export]
macro_rules! bail_einval {
($($arg:tt)*) => {{
return Err(einval!(format!($($arg)*)))
}}
}
/// Return EIO error with formatted error message.
#[macro_export]
macro_rules! bail_eio {
($($arg:tt)*) => {{
return Err(eio!(format!($($arg)*)))
}}
}
// Add more custom error macro here if necessary
define_error_macro!(last_error, std::io::Error::last_os_error());
define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
#[cfg(test)]
mod tests {
use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 {
return Err(einval!());
}
Ok(())
}
#[test]
fn test_einval() {
assert_eq!(
check_size(0x2000).unwrap_err().kind(),
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
);
}
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
}

File diff suppressed because it is too large


@ -3,13 +3,11 @@
//
// SPDX-License-Identifier: Apache-2.0
use dbs_uhttp::{Method, Request, Response};
use crate::http::{ApiError, ApiRequest, ApiResponse, ApiResponsePayload, HttpError};
use crate::http_handler::{
use crate::http::{
error_response, extract_query_part, parse_body, success_response, translate_status_code,
EndpointHandler, HttpResult,
ApiError, ApiRequest, ApiResponse, ApiResponsePayload, EndpointHandler, HttpError, HttpResult,
};
use dbs_uhttp::{Method, Request, Response};
// Convert an ApiResponse to a HTTP response.
//


@ -7,10 +7,9 @@
use dbs_uhttp::{Method, Request, Response};
use crate::http::{ApiError, ApiRequest, ApiResponse, ApiResponsePayload, HttpError};
use crate::http_handler::{
use crate::http::{
error_response, extract_query_part, parse_body, success_response, translate_status_code,
EndpointHandler, HttpResult,
ApiError, ApiRequest, ApiResponse, ApiResponsePayload, EndpointHandler, HttpError, HttpResult,
};
/// HTTP URI prefix for API v1.
@ -140,7 +139,7 @@ impl EndpointHandler for MetricsFsFilesHandler {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let latest_read_files = extract_query_part(req, "latest")
.is_some_and(|b| b.parse::<bool>().unwrap_or(false));
.map_or(false, |b| b.parse::<bool>().unwrap_or(false));
let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
Ok(convert_to_response(r, HttpError::FsFilesMetrics))
}


@ -6,15 +6,12 @@
//! Nydus API v2.
use crate::BlobCacheEntry;
use dbs_uhttp::{Method, Request, Response};
use crate::http::{
ApiError, ApiRequest, ApiResponse, ApiResponsePayload, BlobCacheObjectId, HttpError,
};
use crate::http_handler::{
error_response, extract_query_part, parse_body, success_response, translate_status_code,
EndpointHandler, HttpResult,
ApiError, ApiRequest, ApiResponse, ApiResponsePayload, BlobCacheObjectId, EndpointHandler,
HttpError, HttpResult,
};
/// HTTP URI prefix for API v2.
@ -86,10 +83,7 @@ impl EndpointHandler for BlobObjectListHandlerV2 {
Err(HttpError::BadRequest)
}
(Method::Put, Some(body)) => {
let mut conf: Box<BlobCacheEntry> = parse_body(body)?;
if !conf.prepare_configuration_info() {
return Err(HttpError::BadRequest);
}
let conf = parse_body(body)?;
let r = kicker(ApiRequest::CreateBlobObject(conf));
Ok(convert_to_response(r, HttpError::CreateBlobObject))
}
@ -100,10 +94,6 @@ impl EndpointHandler for BlobObjectListHandlerV2 {
let r = kicker(ApiRequest::DeleteBlobObject(param));
return Ok(convert_to_response(r, HttpError::DeleteBlobObject));
}
if let Some(blob_id) = extract_query_part(req, "blob_id") {
let r = kicker(ApiRequest::DeleteBlobFile(blob_id));
return Ok(convert_to_response(r, HttpError::DeleteBlobFile));
}
Err(HttpError::BadRequest)
}
_ => Err(HttpError::BadRequest),


@ -1,404 +0,0 @@
use std::collections::HashMap;
use std::io::{Error, ErrorKind, Result};
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
use std::sync::mpsc::{Receiver, Sender};
use std::sync::Arc;
use std::time::SystemTime;
use std::{fs, thread};
use dbs_uhttp::{Body, HttpServer, MediaType, Request, Response, ServerError, StatusCode, Version};
use http::uri::Uri;
use mio::unix::SourceFd;
use mio::{Events, Interest, Poll, Token, Waker};
use serde::Deserialize;
use url::Url;
use crate::http::{
ApiError, ApiRequest, ApiResponse, DaemonErrorKind, ErrorMessage, HttpError, MetricsError,
MetricsErrorKind,
};
use crate::http_endpoint_common::{
EventsHandler, ExitHandler, MetricsBackendHandler, MetricsBlobcacheHandler, MountHandler,
SendFuseFdHandler, StartHandler, TakeoverFuseFdHandler,
};
use crate::http_endpoint_v1::{
FsBackendInfo, InfoHandler, MetricsFsAccessPatternHandler, MetricsFsFilesHandler,
MetricsFsGlobalHandler, MetricsFsInflightHandler, HTTP_ROOT_V1,
};
use crate::http_endpoint_v2::{BlobObjectListHandlerV2, InfoV2Handler, HTTP_ROOT_V2};
const EXIT_TOKEN: Token = Token(usize::MAX);
const REQUEST_TOKEN: Token = Token(1);
/// Specialized version of [`std::result::Result`] for value returned by [`EndpointHandler`].
pub type HttpResult = std::result::Result<Response, HttpError>;
/// Get query parameter with `key` from the HTTP request.
pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// Splicing req.uri with an "http:" prefix might look weird, but it lets us rely on
// crate `url` to generate the query_pairs HashMap, which works on `Url` rather than `Uri`.
// Ideally query-part support would be added to Micro-http itself; for now this
// approach makes it easy to obtain query parts from the uri.
let http_prefix = format!("http:{}", req.uri().get_abs_path());
let url = Url::parse(&http_prefix)
.inspect_err(|e| {
error!("api: can't parse request {:?}", e);
})
.ok()?;
for (k, v) in url.query_pairs() {
if k == key {
trace!("api: got query param {}={}", k, v);
return Some(v.into_owned());
}
}
None
}
/// Parse HTTP request body.
pub(crate) fn parse_body<'a, F: Deserialize<'a>>(b: &'a Body) -> std::result::Result<F, HttpError> {
serde_json::from_slice::<F>(b.raw()).map_err(HttpError::ParseBody)
}
/// Translate ApiError message to HTTP status code.
pub(crate) fn translate_status_code(e: &ApiError) -> StatusCode {
match e {
ApiError::DaemonAbnormal(kind) | ApiError::MountFilesystem(kind) => match kind {
DaemonErrorKind::NotReady => StatusCode::ServiceUnavailable,
DaemonErrorKind::Unsupported => StatusCode::NotImplemented,
DaemonErrorKind::UnexpectedEvent(_) => StatusCode::BadRequest,
_ => StatusCode::InternalServerError,
},
ApiError::Metrics(MetricsErrorKind::Stats(MetricsError::NoCounter)) => StatusCode::NotFound,
_ => StatusCode::InternalServerError,
}
}
/// Generate a successful HTTP response message.
pub(crate) fn success_response(body: Option<String>) -> Response {
if let Some(body) = body {
let mut r = Response::new(Version::Http11, StatusCode::OK);
r.set_body(Body::new(body));
r
} else {
Response::new(Version::Http11, StatusCode::NoContent)
}
}
/// Generate a HTTP error response message with status code and error message.
pub(crate) fn error_response(error: HttpError, status: StatusCode) -> Response {
let mut response = Response::new(Version::Http11, status);
let err_msg = ErrorMessage {
code: "UNDEFINED".to_string(),
message: format!("{:?}", error),
};
response.set_body(Body::new(err_msg));
response
}
/// Trait for HTTP endpoints to handle HTTP requests.
pub trait EndpointHandler: Sync + Send {
/// Handles an HTTP request.
///
/// The main responsibilities of the handlers includes:
/// - parse and validate incoming request message
/// - send the request to subscriber
/// - wait response from the subscriber
/// - generate HTTP result
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult;
}
/// Struct to route HTTP requests to corresponding registered endpoint handlers.
pub struct HttpRoutes {
/// routes is a hash table mapping endpoint URIs to their endpoint handlers.
pub routes: HashMap<String, Box<dyn EndpointHandler + Sync + Send>>,
}
macro_rules! endpoint_v1 {
($path:expr) => {
format!("{}{}", HTTP_ROOT_V1, $path)
};
}
macro_rules! endpoint_v2 {
($path:expr) => {
format!("{}{}", HTTP_ROOT_V2, $path)
};
}
lazy_static! {
/// HTTP_ROUTES contain all the nydusd HTTP routes.
pub static ref HTTP_ROUTES: HttpRoutes = {
let mut r = HttpRoutes {
routes: HashMap::new(),
};
// Common
r.routes.insert(endpoint_v1!("/daemon/events"), Box::new(EventsHandler{}));
r.routes.insert(endpoint_v1!("/daemon/exit"), Box::new(ExitHandler{}));
r.routes.insert(endpoint_v1!("/daemon/start"), Box::new(StartHandler{}));
r.routes.insert(endpoint_v1!("/daemon/fuse/sendfd"), Box::new(SendFuseFdHandler{}));
r.routes.insert(endpoint_v1!("/daemon/fuse/takeover"), Box::new(TakeoverFuseFdHandler{}));
r.routes.insert(endpoint_v1!("/mount"), Box::new(MountHandler{}));
r.routes.insert(endpoint_v1!("/metrics/backend"), Box::new(MetricsBackendHandler{}));
r.routes.insert(endpoint_v1!("/metrics/blobcache"), Box::new(MetricsBlobcacheHandler{}));
// Nydus API, v1
r.routes.insert(endpoint_v1!("/daemon"), Box::new(InfoHandler{}));
r.routes.insert(endpoint_v1!("/daemon/backend"), Box::new(FsBackendInfo{}));
r.routes.insert(endpoint_v1!("/metrics"), Box::new(MetricsFsGlobalHandler{}));
r.routes.insert(endpoint_v1!("/metrics/files"), Box::new(MetricsFsFilesHandler{}));
r.routes.insert(endpoint_v1!("/metrics/inflight"), Box::new(MetricsFsInflightHandler{}));
r.routes.insert(endpoint_v1!("/metrics/pattern"), Box::new(MetricsFsAccessPatternHandler{}));
// Nydus API, v2
r.routes.insert(endpoint_v2!("/daemon"), Box::new(InfoV2Handler{}));
r.routes.insert(endpoint_v2!("/blobs"), Box::new(BlobObjectListHandlerV2{}));
r
};
}
fn kick_api_server(
to_api: &Sender<Option<ApiRequest>>,
from_api: &Receiver<ApiResponse>,
request: ApiRequest,
) -> ApiResponse {
to_api.send(Some(request)).map_err(ApiError::RequestSend)?;
from_api.recv().map_err(ApiError::ResponseRecv)?
}
// Example:
// <-- GET /
// --> GET / 200 835ms 746b
fn trace_api_begin(request: &dbs_uhttp::Request) {
debug!("<--- {:?} {:?}", request.method(), request.uri());
}
fn trace_api_end(response: &dbs_uhttp::Response, method: dbs_uhttp::Method, recv_time: SystemTime) {
let elapse = SystemTime::now().duration_since(recv_time);
debug!(
"---> {:?} Status Code: {:?}, Elapse: {:?}, Body Size: {:?}",
method,
response.status(),
elapse,
response.content_length()
);
}
fn exit_api_server(to_api: &Sender<Option<ApiRequest>>) {
if to_api.send(None).is_err() {
error!("failed to send stop request api server");
}
}
fn handle_http_request(
request: &Request,
to_api: &Sender<Option<ApiRequest>>,
from_api: &Receiver<ApiResponse>,
) -> Response {
let begin_time = SystemTime::now();
trace_api_begin(request);
// Micro http should ensure that req path is legal.
let uri_parsed = request.uri().get_abs_path().parse::<Uri>();
let mut response = match uri_parsed {
Ok(uri) => match HTTP_ROUTES.routes.get(uri.path()) {
Some(route) => route
.handle_request(request, &|r| kick_api_server(to_api, from_api, r))
.unwrap_or_else(|err| error_response(err, StatusCode::BadRequest)),
None => error_response(HttpError::NoRoute, StatusCode::NotFound),
},
Err(e) => {
error!("Failed parse URI, {}", e);
error_response(HttpError::BadRequest, StatusCode::BadRequest)
}
};
response.set_server("Nydus API");
response.set_content_type(MediaType::ApplicationJson);
trace_api_end(&response, request.method(), begin_time);
response
}
/// Start a HTTP server to serve API requests.
///
/// Start a HTTP server parsing http requests and send to nydus API server a concrete
/// request to operate nydus or fetch working status.
/// The HTTP server sends request by `to_api` channel and wait for response from `from_api` channel.
pub fn start_http_thread(
path: &str,
to_api: Sender<Option<ApiRequest>>,
from_api: Receiver<ApiResponse>,
) -> Result<(thread::JoinHandle<Result<()>>, Arc<Waker>)> {
// Try to remove an existing unix domain socket
let _ = fs::remove_file(path);
let socket_path = PathBuf::from(path);
let mut poll = Poll::new()?;
let waker = Arc::new(Waker::new(poll.registry(), EXIT_TOKEN)?);
let waker2 = waker.clone();
let mut server = HttpServer::new(socket_path).map_err(|e| {
if let ServerError::IOError(e) = e {
e
} else {
Error::new(ErrorKind::Other, format!("{:?}", e))
}
})?;
poll.registry().register(
&mut SourceFd(&server.epoll().as_raw_fd()),
REQUEST_TOKEN,
Interest::READABLE,
)?;
let thread = thread::Builder::new()
.name("nydus-http-server".to_string())
.spawn(move || {
// Must start the server successfully or just die by panic
server.start_server().unwrap();
info!("http server started");
let mut events = Events::with_capacity(100);
let mut do_exit = false;
loop {
match poll.poll(&mut events, None) {
Err(e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
Err(e) => {
error!("http server poll events failed, {}", e);
exit_api_server(&to_api);
return Err(e);
}
Ok(_) => {}
}
for event in &events {
match event.token() {
EXIT_TOKEN => do_exit = true,
REQUEST_TOKEN => match server.requests() {
Ok(request_vec) => {
for server_request in request_vec {
let reply = server_request.process(|request| {
handle_http_request(request, &to_api, &from_api)
});
// Ignore error when sending response
server.respond(reply).unwrap_or_else(|e| {
error!("HTTP server error on response: {}", e)
});
}
}
Err(e) => {
error!("HTTP server error on retrieving incoming request: {}", e);
}
},
_ => unreachable!("unknown poll token."),
}
}
if do_exit {
exit_api_server(&to_api);
break;
}
}
info!("http-server thread exits");
// Keep the Waker alive to match the lifetime of the poll loop above
drop(waker2);
Ok(())
})?;
Ok((thread, waker))
}
#[cfg(test)]
mod tests {
use super::*;
use std::sync::mpsc::channel;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_http_api_routes_v1() {
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
assert!(HTTP_ROUTES
.routes
.contains_key("/api/v1/daemon/fuse/sendfd"));
assert!(HTTP_ROUTES
.routes
.contains_key("/api/v1/daemon/fuse/takeover"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
}
#[test]
fn test_http_api_routes_v2() {
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
}
#[test]
fn test_kick_api_server() {
let (to_api, from_route) = channel();
let (to_route, from_api) = channel();
let request = ApiRequest::GetDaemonInfo;
let thread = thread::spawn(move || match kick_api_server(&to_api, &from_api, request) {
Err(reply) => matches!(reply, ApiError::ResponsePayloadType),
Ok(_) => panic!("unexpected reply message"),
});
let req2 = from_route.recv().unwrap();
matches!(req2.as_ref().unwrap(), ApiRequest::GetDaemonInfo);
let reply: ApiResponse = Err(ApiError::ResponsePayloadType);
to_route.send(reply).unwrap();
thread.join().unwrap();
let (to_api, from_route) = channel();
let (to_route, from_api) = channel();
drop(to_route);
let request = ApiRequest::GetDaemonInfo;
assert!(kick_api_server(&to_api, &from_api, request).is_err());
drop(from_route);
let request = ApiRequest::GetDaemonInfo;
assert!(kick_api_server(&to_api, &from_api, request).is_err());
}
#[test]
fn test_extract_query_part() {
let req = Request::try_from(
b"GET http://localhost/api/v1/daemon?arg1=test HTTP/1.0\r\n\r\n",
None,
)
.unwrap();
let arg1 = extract_query_part(&req, "arg1").unwrap();
assert_eq!(arg1, "test");
assert!(extract_query_part(&req, "arg2").is_none());
}
#[test]
fn test_start_http_thread() {
let tmpdir = TempFile::new().unwrap();
let path = tmpdir.as_path().to_str().unwrap();
let (to_api, from_route) = channel();
let (_to_route, from_api) = channel();
let (thread, waker) = start_http_thread(path, to_api, from_api).unwrap();
waker.wake().unwrap();
let msg = from_route.recv().unwrap();
assert!(msg.is_none());
let _ = thread.join().unwrap();
}
}


@ -7,41 +7,16 @@
//! The `nydus-api` crate defines API and related data structures for Nydus Image Service.
//! All data structures used by the API are encoded in JSON format.
#[cfg_attr(feature = "handler", macro_use)]
#[macro_use]
extern crate log;
#[macro_use]
extern crate serde;
#[cfg(feature = "handler")]
extern crate serde_derive;
#[macro_use]
extern crate lazy_static;
pub mod config;
pub use config::*;
#[macro_use]
pub mod error;
extern crate nydus_error;
pub mod http;
pub use self::http::*;
#[cfg(feature = "handler")]
pub(crate) mod http_endpoint_common;
#[cfg(feature = "handler")]
pub(crate) mod http_endpoint_v1;
#[cfg(feature = "handler")]
pub(crate) mod http_endpoint_v2;
#[cfg(feature = "handler")]
pub(crate) mod http_handler;
#[cfg(feature = "handler")]
pub use http_handler::{
extract_query_part, start_http_thread, EndpointHandler, HttpResult, HttpRoutes, HTTP_ROUTES,
};
/// Application build and version information.
#[derive(Serialize, Clone)]
pub struct BuildTimeInfo {
pub package_ver: String,
pub git_commit: String,
pub build_time: String,
pub profile: String,
pub rustc: String,
}

app/CHANGELOG.md Normal file

@ -0,0 +1,14 @@
# Changelog
## [Unreleased]
### Added
### Fixed
### Deprecated
## [v0.1.0]
### Added
- Initial release

app/CODEOWNERS Normal file

@ -0,0 +1 @@
* @bergwolf @imeoer @jiangliu

app/Cargo.toml Normal file

@ -0,0 +1,24 @@
[package]
name = "nydus-app"
version = "0.3.1"
authors = ["The Nydus Developers"]
description = "Application framework for Nydus Image Service"
readme = "README.md"
repository = "https://github.com/dragonflyoss/image-service"
license = "Apache-2.0 OR BSD-3-Clause"
edition = "2018"
build = "build.rs"
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dependencies]
regex = "1.5.5"
flexi_logger = { version = "0.23", features = ["compress"] }
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive"] }
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
nydus-error = { version = "0.2", path = "../error" }

app/LICENSE Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

app/README.md Normal file

@ -0,0 +1,57 @@
# nydus-app
The `nydus-app` crate is a collection of utilities that help create applications for the [`Nydus Image Service`](https://github.com/dragonflyoss/image-service) project, which provides:
- `struct BuildTimeInfo`: application build and version information.
- `fn dump_program_info()`: dump program build and version information.
- `fn setup_logging()`: set up logging infrastructure for an application.
## Support
**Platforms**:
- x86_64
- aarch64
**Operating Systems**:
- Linux
## Usage
Add `nydus-app` as a dependency in `Cargo.toml`
```toml
[dependencies]
nydus-app = "*"
```
Then add `extern crate nydus_app;` to your crate root if needed.
## Examples
- Setup application infrastructure.
```rust
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use clap::App;
use std::io::Result;
use nydus_app::{BuildTimeInfo, setup_logging};
fn main() -> Result<()> {
let (bti_string, build_info) = BuildTimeInfo::dump();
let cmd = App::new("")
.version(bti_string.as_str())
.author(crate_authors!())
.get_matches();
// `log-level` is not defined on the App above, so fall back to "info"
let level = cmd.value_of("log-level").unwrap_or("info").parse().unwrap();
setup_logging(None, level)?;
print!("{}", build_info);
Ok(())
}
```
## License
This code is licensed under [Apache-2.0](LICENSE).


@ -32,7 +32,7 @@ fn get_git_commit_hash() -> String {
}
fn get_git_commit_version() -> String {
let tag = Command::new("git").args(["describe", "--tags"]).output();
let tag = Command::new("git").args(&["describe", "--tags"]).output();
if let Ok(tag) = tag {
if let Some(tag) = String::from_utf8_lossy(&tag.stdout).lines().next() {
return tag.to_string();


@ -3,15 +3,55 @@
//
// SPDX-License-Identifier: Apache-2.0
//! Application framework and utilities for Nydus.
//!
//! The `nydus-app` crate provides common helpers and utilities to support Nydus applications:
//! - Application Building Information: [`struct BuildTimeInfo`](struct.BuildTimeInfo.html) and
//! [`fn dump_program_info()`](fn.dump_program_info.html).
//! - Logging helpers: [`fn setup_logging()`](fn.setup_logging.html) and
//! [`fn log_level_to_verbosity()`](fn.log_level_to_verbosity.html).
//! - Signal handling: [`fn register_signal_handler()`](signal/fn.register_signal_handler.html).
//!
//! ```rust,ignore
//! #[macro_use(crate_authors, crate_version)]
//! extern crate clap;
//!
//! use clap::App;
//! use nydus_app::{BuildTimeInfo, setup_logging};
//! # use std::io::Result;
//!
//! fn main() -> Result<()> {
//! let (bti_string, build_info) = BuildTimeInfo::dump();
//! let cmd = App::new("")
//! .version(bti_string.as_str())
//! .author(crate_authors!())
//! .get_matches();
//! let level = cmd.value_of("log-level").unwrap_or("info").parse().unwrap();
//!
//! setup_logging(None, level, 0)?;
//! print!("{}", build_info);
//!
//! Ok(())
//! }
//! ```
#[macro_use]
extern crate log;
#[macro_use]
extern crate nydus_error;
#[macro_use]
extern crate serde;
use std::env::current_dir;
use std::io::Result;
use std::path::PathBuf;
use flexi_logger::{
self, style, Cleanup, Criterion, DeferredNow, FileSpec, Logger, Naming,
TS_DASHES_BLANK_COLONS_DOT_BLANK,
self, colored_opt_format, opt_format, Cleanup, Criterion, FileSpec, Logger, Naming,
};
use log::{Level, LevelFilter, Record};
use log::LevelFilter;
pub mod signal;
pub fn log_level_to_verbosity(level: log::LevelFilter) -> usize {
if level == log::LevelFilter::Off {
@ -21,67 +61,56 @@ pub fn log_level_to_verbosity(level: log::LevelFilter) -> usize {
}
}
fn get_file_name<'a>(record: &'a Record) -> Option<&'a str> {
record.file().map(|v| match v.rfind("/src/") {
None => v,
Some(pos) => match v[..pos].rfind('/') {
None => &v[pos..],
Some(p) => &v[p..],
},
})
pub mod built_info {
pub const PROFILE: &str = env!("PROFILE");
pub const RUSTC_VERSION: &str = env!("RUSTC_VERSION");
pub const BUILT_TIME_UTC: &str = env!("BUILT_TIME_UTC");
pub const GIT_COMMIT_VERSION: &str = env!("GIT_COMMIT_VERSION");
pub const GIT_COMMIT_HASH: &str = env!("GIT_COMMIT_HASH");
}
fn opt_format(
w: &mut dyn std::io::Write,
now: &mut DeferredNow,
record: &Record,
) -> std::result::Result<(), std::io::Error> {
let level = record.level();
if level == Level::Info {
write!(
w,
"[{}] {} {}",
now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK),
record.level(),
&record.args()
)
} else {
write!(
w,
"[{}] {} [{}:{}] {}",
now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK),
record.level(),
get_file_name(record).unwrap_or("<unnamed>"),
record.line().unwrap_or(0),
&record.args()
)
}
/// Dump program build and version information.
pub fn dump_program_info() {
info!(
"Program Version: {}, Git Commit: {:?}, Build Time: {:?}, Profile: {:?}, Rustc Version: {:?}",
built_info::GIT_COMMIT_VERSION,
built_info::GIT_COMMIT_HASH,
built_info::BUILT_TIME_UTC,
built_info::PROFILE,
built_info::RUSTC_VERSION,
);
}
fn colored_opt_format(
w: &mut dyn std::io::Write,
now: &mut DeferredNow,
record: &Record,
) -> std::result::Result<(), std::io::Error> {
let level = record.level();
if level == Level::Info {
write!(
w,
"[{}] {} {}",
style(level).paint(now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK).to_string()),
style(level).paint(level.to_string()),
style(level).paint(record.args().to_string())
)
} else {
write!(
w,
"[{}] {} [{}:{}] {}",
style(level).paint(now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK).to_string()),
style(level).paint(level.to_string()),
get_file_name(record).unwrap_or("<unnamed>"),
record.line().unwrap_or(0),
style(level).paint(record.args().to_string())
)
/// Application build and version information.
#[derive(Serialize, Clone)]
pub struct BuildTimeInfo {
pub package_ver: String,
pub git_commit: String,
build_time: String,
profile: String,
rustc: String,
}
impl BuildTimeInfo {
pub fn dump() -> (String, Self) {
let info_string = format!(
"\rVersion: \t{}\nGit Commit: \t{}\nBuild Time: \t{}\nProfile: \t{}\nRustc: \t\t{}\n",
built_info::GIT_COMMIT_VERSION,
built_info::GIT_COMMIT_HASH,
built_info::BUILT_TIME_UTC,
built_info::PROFILE,
built_info::RUSTC_VERSION,
);
let info = Self {
package_ver: built_info::GIT_COMMIT_VERSION.to_string(),
git_commit: built_info::GIT_COMMIT_HASH.to_string(),
build_time: built_info::BUILT_TIME_UTC.to_string(),
profile: built_info::PROFILE.to_string(),
rustc: built_info::RUSTC_VERSION.to_string(),
};
(info_string, info)
}
}
@ -117,7 +146,7 @@ pub fn setup_logging(
})?;
spec = spec.basename(basename);
// `flexi_logger` automatically add `.log` suffix if the file name has no extension.
// `flexi_logger` automatically add `.log` suffix if the file name has not extension.
if let Some(suffix) = path.extension() {
let suffix = suffix.to_str().ok_or_else(|| {
eprintln!("invalid file extension {:?}", suffix);

blobfs/Cargo.toml Normal file

@ -0,0 +1,33 @@
[package]
name = "nydus-blobfs"
version = "0.1.1"
description = "Blob object file system for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
[dependencies]
fuse-backend-rs = { version = "0.9" }
libc = "0.2"
log = "0.4.8"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
vm-memory = { version = "0.9" }
nydus-error = { version = "0.2", path = "../error" }
nydus-rafs = { version = "0.1", path = "../rafs" }
nydus-storage = { version = "0.5", path = "../storage", features = ["backend-localfs"] }
[dev-dependencies]
nydus-app = { version = "0.3", path = "../app" }
[features]
virtiofs = [ "fuse-backend-rs/virtiofs", "nydus-rafs/virtio-fs" ]
backend-oss = ["nydus-rafs/backend-oss"]
backend-registry = ["nydus-rafs/backend-registry"]
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "x86_64-apple-darwin"]

blobfs/src/lib.rs Normal file

@ -0,0 +1,506 @@
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Fuse blob passthrough file system, mirroring an existing FS hierarchy.
//!
//! This file system mirrors the existing file system hierarchy of the system, starting at the
//! root file system. This is implemented by just "passing through" all requests to the
//! corresponding underlying file system.
//!
//! The code is derived from the
//! [CrosVM](https://chromium.googlesource.com/chromiumos/platform/crosvm/) project,
//! with heavy modification/enhancements from Alibaba Cloud OS team.
#[macro_use]
extern crate log;
use fuse_backend_rs::{
api::{filesystem::*, BackendFileSystem, VFS_MAX_INO},
passthrough::Config as PassthroughConfig,
passthrough::PassthroughFs,
};
use nydus_error::{einval, eother};
use nydus_rafs::{
fs::{Rafs, RafsConfig},
RafsIoRead,
};
use serde::Deserialize;
use std::any::Any;
#[cfg(feature = "virtiofs")]
use std::ffi::CStr;
use std::ffi::CString;
use std::fs::create_dir_all;
#[cfg(feature = "virtiofs")]
use std::fs::File;
use std::io;
#[cfg(feature = "virtiofs")]
use std::mem::MaybeUninit;
#[cfg(feature = "virtiofs")]
use std::os::unix::ffi::OsStrExt;
#[cfg(feature = "virtiofs")]
use std::os::unix::io::{AsRawFd, FromRawFd};
use std::path::Path;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::thread;
#[cfg(feature = "virtiofs")]
use nydus_storage::device::BlobPrefetchRequest;
use vm_memory::ByteValued;
mod sync_io;
#[cfg(feature = "virtiofs")]
const EMPTY_CSTR: &[u8] = b"\0";
type Inode = u64;
type Handle = u64;
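// Raw layout of a directory entry as returned by the getdents64(2) syscall.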
#[repr(C, packed)]
#[derive(Clone, Copy, Debug, Default)]
struct LinuxDirent64 {
d_ino: libc::ino64_t,
d_off: libc::off64_t,
d_reclen: libc::c_ushort,
d_ty: libc::c_uchar,
}
unsafe impl ByteValued for LinuxDirent64 {}
/// Options that configure on-demand blob loading for blobfs.
#[derive(Clone, Default, Deserialize)]
pub struct BlobOndemandConfig {
/// The rafs config used to set up the rafs device for
/// on-demand reads.
pub rafs_conf: RafsConfig,
/// The path of the bootstrap of a container image (for rafs in
/// kernel mode).
///
/// The default is an empty string.
#[serde(default)]
pub bootstrap_path: String,
/// The path of the blob cache directory.
#[serde(default)]
pub blob_cache_dir: String,
}
impl FromStr for BlobOndemandConfig {
type Err = io::Error;
fn from_str(s: &str) -> io::Result<BlobOndemandConfig> {
serde_json::from_str(s).map_err(|e| einval!(e))
}
}
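// Illustrative parse sketch (hypothetical paths; `rafs_conf` carries a full
// RafsConfig object, see the test module at the bottom for a complete example):
//
//   let cfg: BlobOndemandConfig = r#"{
//       "rafs_conf": { ... },
//       "bootstrap_path": "/path/to/bootstrap",
//       "blob_cache_dir": "/path/to/blobcache"
//   }"#.parse()?;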
/// Options that configure the behavior of the blobfs fuse file system.
#[derive(Default, Debug, Clone, PartialEq)]
pub struct Config {
/// The passthrough file system configuration embedded in the blobfs config.
pub ps_config: PassthroughConfig,
/// On-demand blob management configuration, as a JSON string.
pub blob_ondemand_cfg: String,
}
#[allow(dead_code)]
struct RafsHandle {
rafs: Arc<Mutex<Option<Rafs>>>,
handle: Arc<Mutex<Option<thread::JoinHandle<Option<Rafs>>>>>,
}
#[allow(dead_code)]
struct BootstrapArgs {
rafs_handle: RafsHandle,
blob_cache_dir: String,
}
// Safe to Send/Sync because the underlying data structures are readonly
unsafe impl Sync for BootstrapArgs {}
unsafe impl Send for BootstrapArgs {}
#[cfg(feature = "virtiofs")]
impl BootstrapArgs {
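// Join the worker thread spawned in BlobFs::load_bootstrap() and install
// the Rafs instance it produced, failing if the thread returned None.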
fn get_rafs_handle(&self) -> io::Result<()> {
let mut c = self.rafs_handle.rafs.lock().unwrap();
match (*self.rafs_handle.handle.lock().unwrap()).take() {
Some(handle) => {
let rafs = handle.join().unwrap().ok_or_else(|| {
error!("blobfs: get rafs failed.");
einval!("create rafs failed in thread.")
})?;
debug!("blobfs: async create Rafs finish!");
*c = Some(rafs);
Ok(())
}
None => Err(einval!("create rafs failed in thread.")),
}
}
fn fetch_range_sync(&self, prefetches: &[BlobPrefetchRequest]) -> io::Result<()> {
let c = self.rafs_handle.rafs.lock().unwrap();
match &*c {
Some(rafs) => rafs.fetch_range_synchronous(prefetches),
None => Err(einval!("create rafs failed in thread.")),
}
}
}
/// A file system that simply "passes through" all requests it receives to the underlying file
/// system.
///
/// To keep the implementation simple it serves the contents of its root directory. Users
/// that wish to serve only a specific directory should set up the environment so that that
/// directory ends up as the root of the file system process. One way to accomplish this is via a
/// combination of mount namespaces and the pivot_root system call.
pub struct BlobFs {
pfs: PassthroughFs,
#[allow(dead_code)]
bootstrap_args: BootstrapArgs,
}
impl BlobFs {
fn ensure_path_exist(path: &Path) -> io::Result<()> {
if path.as_os_str().is_empty() {
return Err(einval!("path is empty"));
}
if !path.exists() {
create_dir_all(path).map_err(|e| {
error!(
"create dir error. directory is {:?}. {}:{}",
path,
file!(),
line!()
);
e
})?;
}
Ok(())
}
/// Create a Blob file system instance.
pub fn new(cfg: Config) -> io::Result<BlobFs> {
trace!("BlobFs config is: {:?}", cfg);
let bootstrap_args = Self::load_bootstrap(&cfg)?;
let pfs = PassthroughFs::new(cfg.ps_config)?;
Ok(BlobFs {
pfs,
bootstrap_args,
})
}
fn load_bootstrap(cfg: &Config) -> io::Result<BootstrapArgs> {
let blob_ondemand_conf = BlobOndemandConfig::from_str(&cfg.blob_ondemand_cfg)?;
// check if blob cache dir exists.
let path = Path::new(blob_ondemand_conf.blob_cache_dir.as_str());
Self::ensure_path_exist(path).map_err(|e| {
error!("blob_cache_dir not exist");
e
})?;
let path = Path::new(blob_ondemand_conf.bootstrap_path.as_str());
if !path.exists() || blob_ondemand_conf.bootstrap_path == String::default() {
return Err(einval!("no valid bootstrap"));
}
let mut rafs_conf = blob_ondemand_conf.rafs_conf.clone();
// we must use direct mode to get mmap'd bootstrap.
rafs_conf.mode = "direct".to_string();
let mut bootstrap =
<dyn RafsIoRead>::from_file(path.to_str().unwrap()).map_err(|e| eother!(e))?;
trace!("blobfs: async create Rafs start!");
let rafs_join_handle = std::thread::spawn(move || {
let mut rafs = match Rafs::new(rafs_conf, "blobfs", &mut bootstrap) {
Ok(rafs) => rafs,
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
};
match rafs.import(bootstrap, None) {
Ok(_) => {}
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
}
Some(rafs)
});
let rafs_handle = RafsHandle {
rafs: Arc::new(Mutex::new(None)),
handle: Arc::new(Mutex::new(Some(rafs_join_handle))),
};
Ok(BootstrapArgs {
rafs_handle,
blob_cache_dir: blob_ondemand_conf.blob_cache_dir,
})
}
#[cfg(feature = "virtiofs")]
fn stat(f: &File) -> io::Result<libc::stat64> {
// Safe because this is a constant value and a valid C string.
let pathname = unsafe { CStr::from_bytes_with_nul_unchecked(EMPTY_CSTR) };
let mut st = MaybeUninit::<libc::stat64>::zeroed();
// Safe because the kernel will only write data in `st` and we check the return value.
let res = unsafe {
libc::fstatat64(
f.as_raw_fd(),
pathname.as_ptr(),
st.as_mut_ptr(),
libc::AT_EMPTY_PATH | libc::AT_SYMLINK_NOFOLLOW,
)
};
if res >= 0 {
// Safe because the kernel guarantees that the struct is now fully initialized.
Ok(unsafe { st.assume_init() })
} else {
Err(io::Error::last_os_error())
}
}
/// Initialize the PassthroughFs
pub fn import(&self) -> io::Result<()> {
self.pfs.import()
}
#[cfg(feature = "virtiofs")]
fn open_file(dfd: i32, pathname: &Path, flags: i32, mode: u32) -> io::Result<File> {
let pathname = CString::new(pathname.as_os_str().as_bytes())
.map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
let fd = if flags & libc::O_CREAT == libc::O_CREAT {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags, mode) }
} else {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags) }
};
if fd < 0 {
return Err(io::Error::last_os_error());
}
// Safe because we just opened this fd.
Ok(unsafe { File::from_raw_fd(fd) })
}
}
impl BackendFileSystem for BlobFs {
fn mount(&self) -> io::Result<(Entry, u64)> {
let ctx = &Context::default();
let entry = self.lookup(ctx, ROOT_ID, &CString::new(".").unwrap())?;
Ok((entry, VFS_MAX_INO))
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test2)]
mod tests {
use super::*;
use fuse_backend_rs::abi::virtio_fs;
use fuse_backend_rs::transport::FsCacheReqHandler;
use nydus_app::setup_logging;
use std::os::unix::prelude::RawFd;
struct DummyCacheReq {}
impl FsCacheReqHandler for DummyCacheReq {
fn map(
&mut self,
_foffset: u64,
_moffset: u64,
_len: u64,
_flags: u64,
_fd: RawFd,
) -> io::Result<()> {
Ok(())
}
fn unmap(&mut self, _requests: Vec<virtio_fs::RemovemappingOne>) -> io::Result<()> {
Ok(())
}
}
// #[test]
// #[cfg(feature = "virtiofs")]
// fn test_blobfs_new() {
// setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
// let config = r#"
// {
// "device": {
// "backend": {
// "type": "localfs",
// "config": {
// "dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/test4k"
// }
// },
// "cache": {
// "type": "blobcache",
// "compressed": false,
// "config": {
// "work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
// }
// }
// },
// "mode": "direct",
// "digest_validate": true,
// "enable_xattr": false,
// "fs_prefetch": {
// "enable": false,
// "threads_count": 10,
// "merging_size": 131072,
// "bandwidth_rate": 10485760
// }
// }"#;
// // let rafs_conf = RafsConfig::from_str(config).unwrap();
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// // blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache1".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// // bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-foo".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_ok());
// }
#[test]
fn test_blobfs_setupmapping() {
setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
let config = r#"
{
"rafs_conf": {
"device": {
"backend": {
"type": "localfs",
"config": {
"blob_file": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/nydus-rs/myblob1/v6/blob-btrfs"
}
},
"cache": {
"type": "blobcache",
"compressed": false,
"config": {
"work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}
}
},
"mode": "direct",
"digest_validate": false,
"enable_xattr": false,
"fs_prefetch": {
"enable": false,
"threads_count": 10,
"merging_size": 131072,
"bandwidth_rate": 10485760
}
},
"bootstrap_path": "nydus-rs/myblob1/v6/bootstrap-btrfs",
"blob_cache_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}"#;
// let rafs_conf = RafsConfig::from_str(config).unwrap();
let ps_config = PassthroughConfig {
root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
.to_string(),
do_import: false,
no_open: true,
..Default::default()
};
let fs_cfg = Config {
ps_config,
blob_ondemand_cfg: config.to_string(),
};
let fs = BlobFs::new(fs_cfg).unwrap();
fs.import().unwrap();
fs.mount().unwrap();
let ctx = &Context::default();
// read bootstrap first, should return err as it's not in blobcache dir.
// let bootstrap = CString::new("foo").unwrap();
// let entry = fs.lookup(ctx, ROOT_ID, &bootstrap).unwrap();
// let mut req = DummyCacheReq {};
// fs.setupmapping(ctx, entry.inode, 0, 0, 4096, 0, 0, &mut req)
// .unwrap();
// FIXME: use a real blob id under test4k.
let blob_cache_dir = CString::new("blobcache").unwrap();
let parent_entry = fs.lookup(ctx, ROOT_ID, &blob_cache_dir).unwrap();
let blob_id = CString::new("80da976ee69d68af6bb9170395f71b4ef1e235e815e2").unwrap();
let entry = fs.lookup(ctx, parent_entry.inode, &blob_id).unwrap();
let foffset = 0;
let len = 1 << 21;
let mut req = DummyCacheReq {};
fs.setupmapping(ctx, entry.inode, 0, foffset, len, 0, 0, &mut req)
.unwrap();
// FIXME: release fs
fs.destroy();
}
}

View File

@ -3,58 +3,98 @@
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE-BSD-3-Clause file.
use std::ffi::CStr;
use std::io;
use std::time::Duration;
use fuse_backend_rs::abi::fuse_abi::{CreateIn, FsOptions, OpenOptions, SetattrValid};
use fuse_backend_rs::abi::virtio_fs;
use fuse_backend_rs::api::filesystem::{
Context, DirEntry, Entry, FileSystem, GetxattrReply, ListxattrReply, ZeroCopyReader,
ZeroCopyWriter,
};
use fuse_backend_rs::transport::FsCacheReqHandler;
use nydus_api::eacces;
use nydus_utils::{round_down, round_up};
//! Fuse passthrough file system, mirroring an existing FS hierarchy.
use super::*;
use crate::fs::Handle;
use crate::metadata::Inode;
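// Granularity of a DAX mapping window: 2 MiB.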
const MAPPING_UNIT_SIZE: u64 = 0x200000;
impl BlobfsState {
fn fetch_range_sync(&self, prefetches: &[BlobPrefetchRequest]) -> io::Result<()> {
let rafs_handle = self.rafs_handle.read().unwrap();
match rafs_handle.rafs.as_ref() {
Some(rafs) => rafs.fetch_range_synchronous(prefetches),
None => Err(einval!("blobfs: failed to initialize RAFS filesystem.")),
}
}
}
use fuse_backend_rs::abi::fuse_abi::CreateIn;
#[cfg(feature = "virtiofs")]
use fuse_backend_rs::abi::virtio_fs;
#[cfg(feature = "virtiofs")]
use fuse_backend_rs::transport::FsCacheReqHandler;
use nydus_error::eacces;
#[cfg(feature = "virtiofs")]
use nydus_storage::device::BlobPrefetchRequest;
#[cfg(feature = "virtiofs")]
use std::cmp::min;
use std::ffi::CStr;
use std::io;
#[cfg(feature = "virtiofs")]
use std::path::Path;
use std::time::Duration;
impl BlobFs {
// prepare BlobPrefetchRequest and call device.prefetch().
// Make sure prefetch doesn't use delay_persist as we need the data immediately.
fn load_chunks_on_demand(&self, inode: Inode, offset: u64, len: u64) -> io::Result<()> {
let (blob_id, size) = self.get_blob_id_and_size(inode)?;
if size <= offset || offset.checked_add(len).is_none() {
#[cfg(feature = "virtiofs")]
fn check_st_size(blob_id: &Path, size: i64) -> io::Result<()> {
if size < 0 {
return Err(einval!(format!(
"blobfs: blob_id {:?}, offset {:?} is larger than size {:?}",
"load_chunks_on_demand: blob_id {:?}, size: {:?} is less than 0",
blob_id, size
)));
}
Ok(())
}
#[cfg(feature = "virtiofs")]
fn get_blob_id_and_size(&self, inode: Inode) -> io::Result<(String, u64)> {
// locate blob file that the inode refers to
let blob_id_full_path = self.pfs.readlinkat_proc_file(inode)?;
let parent = blob_id_full_path
.parent()
.ok_or_else(|| einval!("blobfs: failed to find parent"))?;
trace!(
"parent: {:?}, blob id path: {:?}",
parent,
blob_id_full_path
);
let blob_file = Self::open_file(
libc::AT_FDCWD,
blob_id_full_path.as_path(),
libc::O_PATH | libc::O_NOFOLLOW | libc::O_CLOEXEC,
0,
)
.map_err(|e| einval!(e))?;
let st = Self::stat(&blob_file).map_err(|e| {
error!("get_blob_id_and_size: stat failed {:?}", e);
e
})?;
let blob_id = blob_id_full_path
.file_name()
.ok_or_else(|| einval!("blobfs: failed to find blob file"))?;
trace!("load_chunks_on_demand: blob_id {:?}", blob_id);
Self::check_st_size(blob_id_full_path.as_path(), st.st_size)?;
Ok((
blob_id.to_os_string().into_string().unwrap(),
st.st_size as u64,
))
}
#[cfg(feature = "virtiofs")]
fn load_chunks_on_demand(&self, inode: Inode, offset: u64) -> io::Result<()> {
// prepare BlobPrefetchRequest and call device.prefetch().
// Make sure prefetch doesn't use delay_persist as we need the
// data immediately.
let (blob_id, size) = self.get_blob_id_and_size(inode)?;
if size <= offset {
return Err(einval!(format!(
"load_chunks_on_demand: blob_id {:?}, offset {:?} is larger than size {:?}",
blob_id, offset, size
)));
}
let end = std::cmp::min(offset + len, size);
let len = end - offset;
let len = size - offset;
let req = BlobPrefetchRequest {
blob_id,
offset,
len,
len: min(len, 0x0020_0000_u64), // 2M range
};
self.state.fetch_range_sync(&[req]).map_err(|e| {
warn!("blobfs: failed to load data, {:?}", e);
self.bootstrap_args.fetch_range_sync(&[req]).map_err(|e| {
warn!("load chunks: error, {:?}", e);
e
})
}
@ -65,7 +105,8 @@ impl FileSystem for BlobFs {
type Handle = Handle;
fn init(&self, capable: FsOptions) -> io::Result<FsOptions> {
self.state.get_rafs_handle()?;
#[cfg(feature = "virtiofs")]
let _ = self.bootstrap_args.get_rafs_handle()?;
self.pfs.init(capable)
}
@ -73,6 +114,10 @@ impl FileSystem for BlobFs {
self.pfs.destroy()
}
fn statfs(&self, _ctx: &Context, inode: Inode) -> io::Result<libc::statvfs64> {
self.pfs.statfs(_ctx, inode)
}
fn lookup(&self, _ctx: &Context, parent: Inode, name: &CStr) -> io::Result<Entry> {
self.pfs.lookup(_ctx, parent, name)
}
@ -85,52 +130,26 @@ impl FileSystem for BlobFs {
self.pfs.batch_forget(_ctx, requests)
}
fn getattr(
fn opendir(
&self,
_ctx: &Context,
inode: Inode,
_handle: Option<Handle>,
) -> io::Result<(libc::stat64, Duration)> {
self.pfs.getattr(_ctx, inode, _handle)
flags: u32,
) -> io::Result<(Option<Handle>, OpenOptions)> {
self.pfs.opendir(_ctx, inode, flags)
}
fn setattr(
fn releasedir(
&self,
_ctx: &Context,
_inode: Inode,
_attr: libc::stat64,
_handle: Option<Handle>,
_valid: SetattrValid,
) -> io::Result<(libc::stat64, Duration)> {
Err(eacces!("Setattr request is not allowed in blobfs"))
}
fn readlink(&self, _ctx: &Context, inode: Inode) -> io::Result<Vec<u8>> {
self.pfs.readlink(_ctx, inode)
}
fn symlink(
&self,
_ctx: &Context,
_linkname: &CStr,
_parent: Inode,
_name: &CStr,
) -> io::Result<Entry> {
Err(eacces!("Symlink request is not allowed in blobfs"))
}
fn mknod(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_mode: u32,
_rdev: u32,
_umask: u32,
) -> io::Result<Entry> {
Err(eacces!("Mknod request is not allowed in blobfs"))
inode: Inode,
_flags: u32,
handle: Handle,
) -> io::Result<()> {
self.pfs.releasedir(_ctx, inode, _flags, handle)
}
#[allow(unused)]
fn mkdir(
&self,
_ctx: &Context,
@ -139,186 +158,16 @@ impl FileSystem for BlobFs {
_mode: u32,
_umask: u32,
) -> io::Result<Entry> {
error!("do mkdir req error: blob file can not be written.");
Err(eacces!("Mkdir request is not allowed in blobfs"))
}
fn unlink(&self, _ctx: &Context, _parent: Inode, _name: &CStr) -> io::Result<()> {
Err(eacces!("Unlink request is not allowed in blobfs"))
}
#[allow(unused)]
fn rmdir(&self, _ctx: &Context, _parent: Inode, _name: &CStr) -> io::Result<()> {
error!("do rmdir req error: blob file can not be written.");
Err(eacces!("Rmdir request is not allowed in blobfs"))
}
fn rename(
&self,
_ctx: &Context,
_olddir: Inode,
_oldname: &CStr,
_newdir: Inode,
_newname: &CStr,
_flags: u32,
) -> io::Result<()> {
Err(eacces!("Rename request is not allowed in blobfs"))
}
fn link(
&self,
_ctx: &Context,
_inode: Inode,
_newparent: Inode,
_newname: &CStr,
) -> io::Result<Entry> {
Err(eacces!("Link request is not allowed in blobfs"))
}
fn open(
&self,
_ctx: &Context,
inode: Inode,
flags: u32,
_fuse_flags: u32,
) -> io::Result<(Option<Handle>, OpenOptions, Option<u32>)> {
self.pfs.open(_ctx, inode, flags, _fuse_flags)
}
fn create(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_args: CreateIn,
) -> io::Result<(Entry, Option<Handle>, OpenOptions, Option<u32>)> {
Err(eacces!("Create request is not allowed in blobfs"))
}
fn read(
&self,
ctx: &Context,
inode: Inode,
handle: Handle,
w: &mut dyn ZeroCopyWriter,
size: u32,
offset: u64,
lock_owner: Option<u64>,
flags: u32,
) -> io::Result<usize> {
self.load_chunks_on_demand(inode, offset, size as u64)?;
self.pfs
.read(ctx, inode, handle, w, size, offset, lock_owner, flags)
}
fn write(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_r: &mut dyn ZeroCopyReader,
_size: u32,
_offset: u64,
_lock_owner: Option<u64>,
_delayed_write: bool,
_flags: u32,
_fuse_flags: u32,
) -> io::Result<usize> {
Err(eacces!("Write request is not allowed in blobfs"))
}
fn flush(
&self,
_ctx: &Context,
inode: Inode,
handle: Handle,
_lock_owner: u64,
) -> io::Result<()> {
self.pfs.flush(_ctx, inode, handle, _lock_owner)
}
fn fsync(
&self,
_ctx: &Context,
inode: Inode,
datasync: bool,
handle: Handle,
) -> io::Result<()> {
self.pfs.fsync(_ctx, inode, datasync, handle)
}
fn fallocate(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_mode: u32,
_offset: u64,
_length: u64,
) -> io::Result<()> {
Err(eacces!("Fallocate request is not allowed in blobfs"))
}
fn release(
&self,
_ctx: &Context,
inode: Inode,
_flags: u32,
handle: Handle,
_flush: bool,
_flock_release: bool,
_lock_owner: Option<u64>,
) -> io::Result<()> {
self.pfs.release(
_ctx,
inode,
_flags,
handle,
_flush,
_flock_release,
_lock_owner,
)
}
fn statfs(&self, _ctx: &Context, inode: Inode) -> io::Result<libc::statvfs64> {
self.pfs.statfs(_ctx, inode)
}
fn setxattr(
&self,
_ctx: &Context,
_inode: Inode,
_name: &CStr,
_value: &[u8],
_flags: u32,
) -> io::Result<()> {
Err(eacces!("Setxattr request is not allowed in blobfs"))
}
fn getxattr(
&self,
_ctx: &Context,
inode: Inode,
name: &CStr,
size: u32,
) -> io::Result<GetxattrReply> {
self.pfs.getxattr(_ctx, inode, name, size)
}
fn listxattr(&self, _ctx: &Context, inode: Inode, size: u32) -> io::Result<ListxattrReply> {
self.pfs.listxattr(_ctx, inode, size)
}
fn removexattr(&self, _ctx: &Context, _inode: Inode, _name: &CStr) -> io::Result<()> {
Err(eacces!("Removexattr request is not allowed in blobfs"))
}
fn opendir(
&self,
_ctx: &Context,
inode: Inode,
flags: u32,
) -> io::Result<(Option<Handle>, OpenOptions)> {
self.pfs.opendir(_ctx, inode, flags)
}
fn readdir(
&self,
_ctx: &Context,
@ -345,26 +194,56 @@ impl FileSystem for BlobFs {
.readdirplus(_ctx, inode, handle, size, offset, add_entry)
}
fn fsyncdir(
fn open(
&self,
ctx: &Context,
_ctx: &Context,
inode: Inode,
datasync: bool,
handle: Handle,
) -> io::Result<()> {
self.pfs.fsyncdir(ctx, inode, datasync, handle)
flags: u32,
_fuse_flags: u32,
) -> io::Result<(Option<Handle>, OpenOptions)> {
self.pfs.open(_ctx, inode, flags, _fuse_flags)
}
fn releasedir(
fn release(
&self,
_ctx: &Context,
inode: Inode,
_flags: u32,
handle: Handle,
_flush: bool,
_flock_release: bool,
_lock_owner: Option<u64>,
) -> io::Result<()> {
self.pfs.releasedir(_ctx, inode, _flags, handle)
self.pfs.release(
_ctx,
inode,
_flags,
handle,
_flush,
_flock_release,
_lock_owner,
)
}
#[allow(unused)]
fn create(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_args: CreateIn,
) -> io::Result<(Entry, Option<Handle>, OpenOptions)> {
error!("do create req error: blob file cannot write.");
Err(eacces!("Create request is not allowed in blobfs"))
}
#[allow(unused)]
fn unlink(&self, _ctx: &Context, _parent: Inode, _name: &CStr) -> io::Result<()> {
error!("do unlink req error: blob file cannot write.");
Err(eacces!("Unlink request is not allowed in blobfs"))
}
#[cfg(feature = "virtiofs")]
fn setupmapping(
&self,
_ctx: &Context,
@ -376,25 +255,20 @@ impl FileSystem for BlobFs {
moffset: u64,
vu_req: &mut dyn FsCacheReqHandler,
) -> io::Result<()> {
debug!(
"blobfs: setupmapping ino {:?} foffset {} len {} flags {} moffset {}",
inode, foffset, len, flags, moffset
);
if (flags & virtio_fs::SetupmappingFlags::WRITE.bits()) != 0 {
return Err(eacces!("blob file cannot write in dax"));
}
if foffset.checked_add(len).is_none() || foffset + len > u64::MAX - MAPPING_UNIT_SIZE {
return Err(einval!(format!(
"blobfs: invalid offset 0x{:x} and len 0x{:x}",
foffset, len
)));
}
let end = round_up(foffset + len, MAPPING_UNIT_SIZE);
let offset = round_down(foffset, MAPPING_UNIT_SIZE);
let len = end - offset;
self.load_chunks_on_demand(inode, offset, len)?;
self.load_chunks_on_demand(inode, foffset)?;
self.pfs
.setupmapping(_ctx, inode, _handle, foffset, len, flags, moffset, vu_req)
}
#[cfg(feature = "virtiofs")]
fn removemapping(
&self,
_ctx: &Context,
@ -405,10 +279,201 @@ impl FileSystem for BlobFs {
self.pfs.removemapping(_ctx, _inode, requests, vu_req)
}
fn read(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_w: &mut dyn ZeroCopyWriter,
_size: u32,
_offset: u64,
_lock_owner: Option<u64>,
_flags: u32,
) -> io::Result<usize> {
error!(
"do Read req error: blob file cannot do nondax read, please check if dax is enabled"
);
Err(eacces!("Read request is not allowed in blobfs"))
}
#[allow(unused)]
fn write(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_r: &mut dyn ZeroCopyReader,
_size: u32,
_offset: u64,
_lock_owner: Option<u64>,
_delayed_write: bool,
_flags: u32,
_fuse_flags: u32,
) -> io::Result<usize> {
error!("do Write req error: blob file cannot write.");
Err(eacces!("Write request is not allowed in blobfs"))
}
fn getattr(
&self,
_ctx: &Context,
inode: Inode,
_handle: Option<Handle>,
) -> io::Result<(libc::stat64, Duration)> {
self.pfs.getattr(_ctx, inode, _handle)
}
#[allow(unused)]
fn setattr(
&self,
_ctx: &Context,
_inode: Inode,
_attr: libc::stat64,
_handle: Option<Handle>,
_valid: SetattrValid,
) -> io::Result<(libc::stat64, Duration)> {
error!("do setattr req error: blob file cannot write.");
Err(eacces!("Setattr request is not allowed in blobfs"))
}
#[allow(unused)]
fn rename(
&self,
_ctx: &Context,
_olddir: Inode,
_oldname: &CStr,
_newdir: Inode,
_newname: &CStr,
_flags: u32,
) -> io::Result<()> {
error!("do rename req error: blob file cannot write.");
Err(eacces!("Rename request is not allowed in blobfs"))
}
#[allow(unused)]
fn mknod(
&self,
_ctx: &Context,
_parent: Inode,
_name: &CStr,
_mode: u32,
_rdev: u32,
_umask: u32,
) -> io::Result<Entry> {
error!("do mknode req error: blob file cannot write.");
Err(eacces!("Mknod request is not allowed in blobfs"))
}
#[allow(unused)]
fn link(
&self,
_ctx: &Context,
_inode: Inode,
_newparent: Inode,
_newname: &CStr,
) -> io::Result<Entry> {
error!("do link req error: blob file cannot write.");
Err(eacces!("Link request is not allowed in blobfs"))
}
#[allow(unused)]
fn symlink(
&self,
_ctx: &Context,
_linkname: &CStr,
_parent: Inode,
_name: &CStr,
) -> io::Result<Entry> {
error!("do symlink req error: blob file cannot write.");
Err(eacces!("Symlink request is not allowed in blobfs"))
}
fn readlink(&self, _ctx: &Context, inode: Inode) -> io::Result<Vec<u8>> {
self.pfs.readlink(_ctx, inode)
}
fn flush(
&self,
_ctx: &Context,
inode: Inode,
handle: Handle,
_lock_owner: u64,
) -> io::Result<()> {
self.pfs.flush(_ctx, inode, handle, _lock_owner)
}
fn fsync(
&self,
_ctx: &Context,
inode: Inode,
datasync: bool,
handle: Handle,
) -> io::Result<()> {
self.pfs.fsync(_ctx, inode, datasync, handle)
}
fn fsyncdir(
&self,
ctx: &Context,
inode: Inode,
datasync: bool,
handle: Handle,
) -> io::Result<()> {
self.pfs.fsyncdir(ctx, inode, datasync, handle)
}
fn access(&self, ctx: &Context, inode: Inode, mask: u32) -> io::Result<()> {
self.pfs.access(ctx, inode, mask)
}
#[allow(unused)]
fn setxattr(
&self,
_ctx: &Context,
_inode: Inode,
_name: &CStr,
_value: &[u8],
_flags: u32,
) -> io::Result<()> {
error!("do setxattr req error: blob file cannot write.");
Err(eacces!("Setxattr request is not allowed in blobfs"))
}
fn getxattr(
&self,
_ctx: &Context,
inode: Inode,
name: &CStr,
size: u32,
) -> io::Result<GetxattrReply> {
self.pfs.getxattr(_ctx, inode, name, size)
}
fn listxattr(&self, _ctx: &Context, inode: Inode, size: u32) -> io::Result<ListxattrReply> {
self.pfs.listxattr(_ctx, inode, size)
}
#[allow(unused)]
fn removexattr(&self, _ctx: &Context, _inode: Inode, _name: &CStr) -> io::Result<()> {
error!("do removexattr req error: blob file cannot write.");
Err(eacces!("Removexattr request is not allowed in blobfs"))
}
#[allow(unused)]
fn fallocate(
&self,
_ctx: &Context,
_inode: Inode,
_handle: Handle,
_mode: u32,
_offset: u64,
_length: u64,
) -> io::Result<()> {
error!("do fallocate req error: blob file cannot write.");
Err(eacces!("Fallocate request is not allowed in blobfs"))
}
#[allow(unused)]
fn lseek(
&self,
_ctx: &Context,

View File

@ -1,35 +0,0 @@
[package]
name = "nydus-builder"
version = "0.2.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
indexmap = "2"
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
vmm-sys-util = "0.12.1"
xattr = "1.0.1"
parse-size = "1.1.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.5.0", path = "../utils" }
gix-attributes = "0.25.0"
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "aarch64-unknown-linux-gnu", "aarch64-apple-darwin"]

View File

@ -1,189 +0,0 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
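// The attributes file uses gitattributes-style syntax: each line is a path
// pattern followed by key=value pairs, e.g.:
//
//   /foo type=external crcs=0x1234,0x5678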
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// process parent directory
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
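/// Check whether any entry marked `type=external` is located under `target`.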
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}

View File

@ -1,283 +0,0 @@
// Copyright (C) 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate Chunkdict RAFS bootstrap.
//! -------------------------------------------------------------------------------------------------
//! Bug 1: Inconsistent chunk sizes leading to a blob smaller than 4K (v6_block_size)
//! Description: Chunk sizes are not consistent, so a blob composed of a group of such
//! chunks may end up smaller than 4K (v6_block_size) and fail the size check.
//! -------------------------------------------------------------------------------------------------
//! Bug 2: Incorrect chunk count caused by premature check logic
//! Description: The chunk count is currently computed as size / chunk_size, but this
//! happens before the check that accounts for chunk statistics, so the resulting
//! count can be inaccurate.
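//!
//! Illustrative example (hypothetical numbers): with v6_block_size = 4096, a blob
//! assembled from chunks totalling only 3000 uncompressed bytes would fail the size
//! check; validate_and_remove_chunks() below drops such chunks before building.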
use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node;
use crate::NodeChunk;
use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest;
use std::mem::size_of;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ChunkdictChunkInfo {
pub image_reference: String,
pub version: String,
pub chunk_blob_id: String,
pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64,
pub chunk_uncompressed_offset: u64,
}
pub struct ChunkdictBlobInfo {
pub blob_id: String,
pub blob_compressed_size: u64,
pub blob_uncompressed_size: u64,
pub blob_compressor: String,
pub blob_meta_ci_compressed_size: u64,
pub blob_meta_ci_uncompressed_size: u64,
pub blob_meta_ci_offset: u64,
}
/// Struct to generate chunkdict RAFS bootstrap.
pub struct Generator {}
impl Generator {
// Generate chunkdict RAFS bootstrap.
pub fn generate(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
chunkdict_chunks_origin: Vec<ChunkdictChunkInfo>,
chunkdict_blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
// Validate chunks and remove those whose owning blobs are smaller than a block.
let mut chunkdict_chunks = chunkdict_chunks_origin.to_vec();
Self::validate_and_remove_chunks(ctx, &mut chunkdict_chunks);
// Build root tree.
let mut tree = Self::build_root_tree(ctx)?;
// Build child tree.
let child = Self::build_child_tree(ctx, blob_mgr, &chunkdict_chunks, &chunkdict_blobs)?;
let result = vec![child];
tree.children = result;
Self::validate_tree(&tree)?;
// Build bootstrap.
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
/// Validate tree.
fn validate_tree(tree: &Tree) -> Result<()> {
let pre = &mut |t: &Tree| -> Result<()> {
let node = t.borrow_mut_node();
debug!("chunkdict tree: ");
debug!("inode: {}", node);
for chunk in &node.chunks {
debug!("\t chunk: {}", chunk);
}
Ok(())
};
tree.walk_dfs_pre(pre)?;
debug!("chunkdict tree is valid.");
Ok(())
}
/// Validates and removes chunks with a total uncompressed size smaller than the block size limit.
fn validate_and_remove_chunks(ctx: &mut BuildContext, chunkdict: &mut Vec<ChunkdictChunkInfo>) {
let mut chunk_sizes = std::collections::HashMap::new();
// Accumulate the uncompressed size for each chunk_blob_id.
for chunk in chunkdict.iter() {
*chunk_sizes.entry(chunk.chunk_blob_id.clone()).or_insert(0) +=
chunk.chunk_uncompressed_size as u64;
}
// Find all chunk_blob_ids whose total uncompressed size is smaller than v6_block_size.
let small_chunks: Vec<String> = chunk_sizes
.into_iter()
.filter(|&(_, size)| size < ctx.v6_block_size())
.inspect(|(id, _)| {
eprintln!(
"Warning: Blob with id '{}' is smaller than {} bytes.",
id,
ctx.v6_block_size()
)
})
.map(|(id, _)| id)
.collect();
// Retain only chunks whose chunk_blob_id has a total uncompressed size of at least v6_block_size.
chunkdict.retain(|chunk| !small_chunks.contains(&chunk.chunk_blob_id));
}
/// Build the root tree.
pub fn build_root_tree(ctx: &mut BuildContext) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(1);
inode.set_uid(1000);
inode.set_gid(1000);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFDIR as u32);
inode.set_nlink(3);
inode.set_name_size("/".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 0,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/"),
target: PathBuf::from("/"),
target_vec: vec![OsString::from("/")],
symlink: None,
xattrs: RafsXAttrs::default(),
v6_force_extended_inode: true,
};
let root_node = Node::new(inode, node_info, 0);
let tree = Tree::new(root_node);
Ok(tree)
}
/// Build the child tree.
fn build_child_tree(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(2);
inode.set_uid(0);
inode.set_gid(0);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFREG as u32);
inode.set_nlink(1);
inode.set_name_size("chunkdict".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 1,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/chunkdict"),
target: PathBuf::from("/chunkdict"),
target_vec: vec![OsString::from("/"), OsString::from("/chunkdict")],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: true,
};
let mut node = Node::new(inode, node_info, 0);
// Insert chunks.
Self::insert_chunks(ctx, blob_mgr, &mut node, chunkdict_chunks, chunkdict_blobs)?;
let node_size: u64 = node
.chunks
.iter()
.map(|chunk| chunk.inner.uncompressed_size() as u64)
.sum();
node.inode.set_size(node_size);
// Update child count.
node.inode.set_child_count(node.chunks.len() as u32);
let child = Tree::new(node);
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
Ok(child)
}
/// Insert chunks.
fn insert_chunks(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
node: &mut Node,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<()> {
for (index, chunk_info) in chunkdict_chunks.iter().enumerate() {
let chunk_size: u32 = chunk_info.chunk_compressed_size;
let file_offset = index as u64 * chunk_size as u64;
let mut chunk = ChunkWrapper::new(ctx.fs_version);
// Update blob context.
let (blob_index, blob_ctx) =
blob_mgr.get_or_cerate_blob_for_chunkdict(ctx, &chunk_info.chunk_blob_id)?;
let chunk_uncompressed_size = chunk_info.chunk_uncompressed_size;
let pre_d_offset = blob_ctx.current_uncompressed_offset;
blob_ctx.uncompressed_blob_size = pre_d_offset + chunk_uncompressed_size as u64;
blob_ctx.current_uncompressed_offset += chunk_uncompressed_size as u64;
blob_ctx.blob_meta_header.set_ci_uncompressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
blob_ctx.blob_meta_header.set_ci_compressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
let chunkdict_blob_info = chunkdict_blobs
.iter()
.find(|blob| blob.blob_id == chunk_info.chunk_blob_id)
.unwrap();
blob_ctx.blob_compressor =
Algorithm::from_str(chunkdict_blob_info.blob_compressor.as_str())?;
blob_ctx
.blob_meta_header
.set_ci_uncompressed_size(chunkdict_blob_info.blob_meta_ci_uncompressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_size(chunkdict_blob_info.blob_meta_ci_compressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_offset(chunkdict_blob_info.blob_meta_ci_offset);
blob_ctx.blob_meta_header.set_ci_compressor(Algorithm::Zstd);
// Update chunk context.
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
chunk.set_compressed_size(chunk_info.chunk_compressed_size);
chunk.set_compressed_offset(chunk_info.chunk_compressed_offset);
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk.clone()),
});
}
Ok(())
}
}

File diff suppressed because it is too large

View File

@ -1,364 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::borrow::Cow;
use std::slice;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{toc, BlobMetaChunkArray};
use nydus_utils::digest::{self, DigestHasher, RafsDigest};
use nydus_utils::{compress, crypt};
use sha2::digest::Digest;
use super::layout::BlobLayout;
use super::node::Node;
use crate::core::context::Artifact;
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
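// A valid blob id is 64 hex characters, i.e. the string form of a SHA-256 digest.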
const VALID_BLOB_ID_LENGTH: usize = 64;
/// Generator for RAFS data blob.
pub(crate) struct Blob {}
impl Blob {
/// Dump blob file and generate chunks
pub(crate) fn dump(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
match ctx.conversion_type {
ConversionType::DirectoryToRafs => {
let mut chunk_data_buf = vec![0u8; RAFS_MAX_CHUNK_SIZE as usize];
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&ctx.prefetch)?;
for (idx, node) in inodes.iter().enumerate() {
let mut node = node.borrow_mut();
let size = node
.dump_node_data(ctx, blob_mgr, blob_writer, &mut chunk_data_buf)
.context("failed to dump blob chunks")?;
if idx < prefetch_entries {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
blob_ctx.blob_prefetch_size += size;
}
}
}
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToRafs
| ConversionType::TargzToRafs
| ConversionType::EStargzToRafs => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToTarfs
| ConversionType::TarToRef
| ConversionType::TargzToRef
| ConversionType::EStargzToRef => {
// Use `sha256(tarball)` as `blob_id` for ref-type conversions.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if ctx.conversion_type == ConversionType::TarToTarfs {
blob_ctx.uncompressed_blob_size = blob_ctx.compressed_blob_size;
}
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::EStargzIndexToRef => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToStargz
| ConversionType::DirectoryToTargz
| ConversionType::DirectoryToStargz
| ConversionType::TargzToStargz => {
unimplemented!()
}
}
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
blob_ctx.set_blob_prefetch_size(ctx);
}
Ok(())
}
pub fn finalize_blob_data(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump buffered batch chunk data if any exists.
if let Some(ref batch) = ctx.blob_batch_generator {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let mut batch = batch.lock().unwrap();
if !batch.chunk_data_buf_is_empty() {
let (_, compressed_size, _) = Node::write_chunk_data(
&ctx,
blob_ctx,
blob_writer,
batch.chunk_data_buf(),
)?;
batch.add_context(compressed_size);
batch.clear_chunk_data_buf();
}
}
}
if !ctx.blob_features.contains(BlobFeatures::SEPARATE)
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.external {
return Ok(());
}
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_RAW,
blob_ctx.compressed_blob_size,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
let blob_digest = RafsDigest {
data: blob_ctx.blob_hash.clone().finalize().into(),
};
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_RAW,
compress::Algorithm::None,
blob_digest,
blob_ctx.compressed_offset(),
blob_ctx.compressed_blob_size,
blob_ctx.uncompressed_blob_size,
)?;
}
}
}
// check blobs to make sure all blobs are valid.
if blob_mgr.external {
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
bail!(
"invalid blob id:{}, length:{}, index:{}",
blob_ctx.blob_id,
blob_ctx.blob_id.len(),
index
);
}
}
}
Ok(())
}
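// For ref-type conversions the blob meta is always compressed with zstd;
// otherwise reuse the data compressor configured in the build context.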
fn get_compression_algorithm_for_meta(ctx: &BuildContext) -> compress::Algorithm {
if ctx.conversion_type.is_to_ref() {
compress::Algorithm::Zstd
} else {
ctx.compressor
}
}
pub(crate) fn dump_meta_data(
ctx: &BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump blob meta for v6 when it has chunks or bootstrap is to be inlined.
if !blob_ctx.blob_meta_info_enabled || blob_ctx.uncompressed_blob_size == 0 {
return Ok(());
}
// Prepare blob meta information data.
let encrypt = ctx.cipher != crypt::Algorithm::None;
let cipher_obj = &blob_ctx.cipher_object;
let cipher_ctx = &blob_ctx.cipher_ctx;
let blob_meta_info = &blob_ctx.blob_meta_info;
let mut ci_data = blob_meta_info.as_byte_slice();
let mut inflate_buf = Vec::new();
let mut header = blob_ctx.blob_meta_header;
if let Some(ref zran) = ctx.blob_zran_generator {
let (inflate_data, inflate_count) = zran.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_zran(true);
header.set_separate_blob(true);
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if let Some(ref batch) = ctx.blob_batch_generator {
let (inflate_data, inflate_count) = batch.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_batch(true);
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if ctx.blob_tar_reader.is_some() {
header.set_separate_blob(true);
};
let mut compressor = Self::get_compression_algorithm_for_meta(ctx);
let (compressed_data, compressed) = compress::compress(ci_data, compressor)
.with_context(|| "failed to compress blob chunk info array".to_string())?;
if !compressed {
compressor = compress::Algorithm::None;
}
let encrypted_ci_data =
crypt::encrypt_with_context(&compressed_data, cipher_obj, cipher_ctx, encrypt)?;
let compressed_offset = blob_writer.pos()?;
let compressed_size = encrypted_ci_data.len() as u64;
let uncompressed_size = ci_data.len() as u64;
header.set_ci_compressor(compressor);
header.set_ci_entries(blob_meta_info.len() as u32);
header.set_ci_compressed_offset(compressed_offset);
header.set_ci_compressed_size(compressed_size as u64);
header.set_ci_uncompressed_size(uncompressed_size as u64);
header.set_aligned(true);
match blob_meta_info {
BlobMetaChunkArray::V1(_) => header.set_chunk_info_v2(false),
BlobMetaChunkArray::V2(_) => header.set_chunk_info_v2(true),
}
if ctx.features.is_enabled(Feature::BlobToc) && blob_ctx.chunk_count > 0 {
header.set_inlined_chunk_digest(true);
}
blob_ctx.blob_meta_header = header;
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.write_blob_meta(ci_data, &header)?;
}
let encrypted_header =
crypt::encrypt_with_context(header.as_bytes(), cipher_obj, cipher_ctx, encrypt)?;
let header_size = encrypted_header.len();
// Write blob meta data and header
match encrypted_ci_data {
Cow::Owned(v) => blob_ctx.write_data(blob_writer, &v)?,
Cow::Borrowed(v) => {
let buf = v.to_vec();
blob_ctx.write_data(blob_writer, &buf)?;
}
}
blob_ctx.write_data(blob_writer, &encrypted_header)?;
// Write tar header for `blob.meta`.
if ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_META,
compressed_size + header_size as u64,
)?;
}
// Generate ToC entry for `blob.meta` and write chunk digest array.
if ctx.features.is_enabled(Feature::BlobToc) {
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let ci_data = if ctx.blob_features.contains(BlobFeatures::BATCH)
|| ctx.blob_features.contains(BlobFeatures::ZRAN)
{
inflate_buf.as_slice()
} else {
blob_ctx.blob_meta_info.as_byte_slice()
};
hasher.digest_update(ci_data);
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_META,
compressor,
hasher.digest_finalize(),
compressed_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
hasher.digest_update(header.as_bytes());
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_META_HEADER,
compress::Algorithm::None,
hasher.digest_finalize(),
compressed_offset + compressed_size,
header_size as u64,
header_size as u64,
)?;
let buf = unsafe {
slice::from_raw_parts(
blob_ctx.blob_chunk_digest.as_ptr() as *const u8,
blob_ctx.blob_chunk_digest.len() * 32,
)
};
assert!(!buf.is_empty());
// The chunk digest array is almost incompressible, no need for compression.
let digest = RafsDigest::from_buf(buf, digest::Algorithm::Sha256);
let compressed_offset = blob_writer.pos()?;
let size = buf.len() as u64;
blob_writer.write_all(buf)?;
blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_DIGEST, size)?;
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_DIGEST,
compress::Algorithm::None,
digest,
compressed_offset,
size,
size,
)?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_default_compression_algorithm_for_meta_ci() {
let mut ctx = BuildContext::default();
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//EStargzIndexToRef
ctx = BuildContext {
conversion_type: ConversionType::EStargzIndexToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TargzToRef
ctx = BuildContext {
conversion_type: ConversionType::TargzToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
}
}

View File

@ -1,214 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::{Context, Error, Result};
use nydus_utils::digest::{self, RafsDigest};
use std::ops::Deref;
use nydus_rafs::metadata::layout::{RafsBlobTable, RAFS_V5_ROOT_INODE};
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig, RafsSuperFlags};
use crate::{ArtifactStorage, BlobManager, BootstrapContext, BootstrapManager, BuildContext, Tree};
/// RAFS bootstrap/meta builder.
pub struct Bootstrap {
pub(crate) tree: Tree,
}
impl Bootstrap {
/// Create a new instance of [Bootstrap].
pub fn new(tree: Tree) -> Result<Self> {
Ok(Self { tree })
}
/// Build the final view of the RAFS filesystem meta from the hierarchy `tree`.
pub fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
) -> Result<()> {
// Special handling of the root inode
let mut root_node = self.tree.borrow_mut_node();
assert!(root_node.is_dir());
let index = bootstrap_ctx.generate_next_ino();
// 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE.
assert_eq!(index, RAFS_V5_ROOT_INODE);
root_node.index = index;
root_node.inode.set_ino(index);
ctx.prefetch.insert(&self.tree.node, root_node.deref());
bootstrap_ctx.inode_map.insert(
(
root_node.layer_idx,
root_node.info.src_ino,
root_node.info.src_dev,
),
vec![self.tree.node.clone()],
);
drop(root_node);
Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?;
if ctx.fs_version.is_v6() {
let root_offset = self.tree.node.borrow().v6_offset;
Self::v6_update_dirents(&self.tree, root_offset);
}
Ok(())
}
/// Dump the RAFS filesystem meta information to meta blob.
pub fn dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_storage: &mut Option<ArtifactStorage>,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsBlobTable,
) -> Result<()> {
match blob_table {
RafsBlobTable::V5(table) => self.v5_dump(ctx, bootstrap_ctx, table)?,
RafsBlobTable::V6(table) => self.v6_dump(ctx, bootstrap_ctx, table)?,
}
if let Some(ArtifactStorage::FileDir(p)) = bootstrap_storage {
let bootstrap_data = bootstrap_ctx.writer.as_bytes()?;
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?;
let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(())
} else {
bootstrap_ctx.writer.finalize(Some(String::default()))
}
}
/// Traverse node tree, set inode index, ino, child_index and child_count etc according to the
/// RAFS metadata format, then store to nodes collection.
fn build_rafs(
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
tree: &mut Tree,
) -> Result<()> {
let parent_node = tree.node.clone();
let mut parent_node = parent_node.borrow_mut();
let parent_ino = parent_node.inode.ino();
let block_size = ctx.v6_block_size();
// In case of multi-layer building, it's possible that the parent node is not a directory.
if parent_node.is_dir() {
parent_node
.inode
.set_child_count(tree.children.len() as u32);
if ctx.fs_version.is_v5() {
parent_node
.inode
.set_child_index(bootstrap_ctx.get_next_ino() as u32);
} else if ctx.fs_version.is_v6() {
// Layout directory entries for v6.
let d_size = parent_node.v6_dirent_size(ctx, tree)?;
parent_node.v6_set_dir_offset(bootstrap_ctx, d_size, block_size)?;
}
}
let mut dirs: Vec<&mut Tree> = Vec::new();
for child in tree.children.iter_mut() {
let child_node = child.node.clone();
let mut child_node = child_node.borrow_mut();
let index = bootstrap_ctx.generate_next_ino();
child_node.index = index;
if ctx.fs_version.is_v5() {
child_node.inode.set_parent(parent_ino);
}
// Handle hardlinks.
// All hardlinked nodes must share the same ino and nlink.
// We need to find the hardlink node index list in the layer where the node
// is located, because the real_ino may differ between layers.
let mut v6_hardlink_offset: Option<u64> = None;
let key = (
child_node.layer_idx,
child_node.info.src_ino,
child_node.info.src_dev,
);
if let Some(indexes) = bootstrap_ctx.inode_map.get_mut(&key) {
let nlink = indexes.len() as u32 + 1;
// Update nlink for previous hardlink inodes
for n in indexes.iter() {
n.borrow_mut().inode.set_nlink(nlink);
}
let (first_ino, first_offset) = {
let first_node = indexes[0].borrow_mut();
(first_node.inode.ino(), first_node.v6_offset)
};
// set offset for rafs v6 hardlinks
v6_hardlink_offset = Some(first_offset);
child_node.inode.set_nlink(nlink);
child_node.inode.set_ino(first_ino);
indexes.push(child.node.clone());
} else {
child_node.inode.set_ino(index);
child_node.inode.set_nlink(1);
// Store inode real ino
bootstrap_ctx
.inode_map
.insert(key, vec![child.node.clone()]);
}
// update bootstrap_ctx.offset for rafs v6 non-dir nodes.
if !child_node.is_dir() && ctx.fs_version.is_v6() {
child_node.v6_set_offset(bootstrap_ctx, v6_hardlink_offset, block_size)?;
}
ctx.prefetch.insert(&child.node, child_node.deref());
if child_node.is_dir() {
dirs.push(child);
}
}
// According to filesystem semantics, a parent directory should have nlink equal to
// the number of its child directories plus 2.
if parent_node.is_dir() {
parent_node.inode.set_nlink((2 + dirs.len()) as u32);
}
for dir in dirs {
Self::build_rafs(ctx, bootstrap_ctx, dir)?;
}
Ok(())
}
/// Load a parent RAFS bootstrap and return the `Tree` object representing the filesystem.
pub fn load_parent_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<Tree> {
let rs = if let Some(path) = bootstrap_mgr.f_parent_path.as_ref() {
RafsSuper::load_from_file(path, ctx.configuration.clone(), false).map(|(rs, _)| rs)?
} else {
return Err(Error::msg("bootstrap context's parent bootstrap is null"));
};
let config = RafsSuperConfig {
compressor: ctx.compressor,
digester: ctx.digester,
chunk_size: ctx.chunk_size,
batch_size: ctx.batch_size,
explicit_uidgid: ctx.explicit_uidgid,
version: ctx.fs_version,
is_tarfs_mode: rs.meta.flags.contains(RafsSuperFlags::TARTFS_MODE),
};
config.check_compatibility(&rs.meta)?;
// Reuse the lower layer blob table;
// the blob entries of the upper layer will be appended to the table.
blob_mgr.extend_from_blob_table(ctx, rs.superblock.get_blob_infos())?;
// Build the node tree of the lower layer from a bootstrap file, and add chunks of the
// lower nodes to layered_chunk_dict for chunk deduplication in subsequent builds.
Tree::from_bootstrap(&rs, &mut blob_mgr.layered_chunk_dict)
.context("failed to build tree from bootstrap")
}
}
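// Illustrative sketch (not part of the original source): the hardlink
// bookkeeping in build_rafs() restated with toy types. Nodes sharing the
// (layer_idx, src_ino, src_dev) key form one hardlink group; every member
// reuses the ino assigned to the first member, and nlink equals the group
// size. The helper below is hypothetical and only mirrors the counting.
use std::collections::HashMap;

fn hardlink_groups(nodes: &[(u16, u64, u64)]) -> HashMap<(u16, u64, u64), (u64, u32)> {
    let mut groups: HashMap<(u16, u64, u64), (u64, u32)> = HashMap::new();
    for (index, key) in nodes.iter().enumerate() {
        // First sighting fixes the shared ino (a stand-in for generate_next_ino());
        // each further sighting bumps the shared nlink.
        let entry = groups.entry(*key).or_insert((index as u64 + 1, 0));
        entry.1 += 1;
    }
    groups
}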


@@ -1,280 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::{BTreeMap, HashMap};
use std::mem::size_of;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{bail, Context, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::layout::v5::RafsV5ChunkInfo;
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig};
use nydus_storage::device::BlobInfo;
use nydus_utils::digest::{self, RafsDigest};
use crate::Tree;
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
/// Trait to manage chunk cache for chunk deduplication.
pub trait ChunkDict: Sync + Send + 'static {
/// Add a chunk into the cache.
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm);
/// Get a cached chunk from the cache.
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>>;
/// Get all `BlobInfo` objects referenced by cached chunks.
fn get_blobs(&self) -> Vec<Arc<BlobInfo>>;
/// Get the `BlobInfo` object with inner index `idx`.
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>>;
/// Associate an external index with the inner index.
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32);
/// Get the external index associated with an inner index.
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32>;
/// Get the digest algorithm used to generate chunk digest.
fn digester(&self) -> digest::Algorithm;
}
impl ChunkDict for () {
fn add_chunk(&mut self, _chunk: Arc<ChunkWrapper>, _digester: digest::Algorithm) {}
fn get_chunk(
&self,
_digest: &RafsDigest,
_uncompressed_size: u32,
) -> Option<&Arc<ChunkWrapper>> {
None
}
fn get_blobs(&self) -> Vec<Arc<BlobInfo>> {
Vec::new()
}
fn get_blob_by_inner_idx(&self, _idx: u32) -> Option<&Arc<BlobInfo>> {
None
}
fn set_real_blob_idx(&self, _inner_idx: u32, _out_idx: u32) {
panic!("()::set_real_blob_idx() should not be invoked");
}
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32> {
Some(inner_idx)
}
fn digester(&self) -> digest::Algorithm {
digest::Algorithm::Sha256
}
}
/// An implementation of [ChunkDict] based on [HashMap].
pub struct HashChunkDict {
m: HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)>,
blobs: Vec<Arc<BlobInfo>>,
blob_idx_m: Mutex<BTreeMap<u32, u32>>,
digester: digest::Algorithm,
}
impl ChunkDict for HashChunkDict {
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm) {
if self.digester == digester {
if let Some(e) = self.m.get(chunk.id()) {
e.1.fetch_add(1, Ordering::AcqRel);
} else {
self.m
.insert(chunk.id().to_owned(), (chunk, AtomicU32::new(1)));
}
}
}
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>> {
if let Some((chunk, _)) = self.m.get(digest) {
if chunk.uncompressed_size() == 0 || chunk.uncompressed_size() == uncompressed_size {
return Some(chunk);
}
}
None
}
fn get_blobs(&self) -> Vec<Arc<BlobInfo>> {
self.blobs.clone()
}
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>> {
self.blobs.get(idx as usize)
}
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32) {
self.blob_idx_m.lock().unwrap().insert(inner_idx, out_idx);
}
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32> {
self.blob_idx_m.lock().unwrap().get(&inner_idx).copied()
}
fn digester(&self) -> digest::Algorithm {
self.digester
}
}
impl HashChunkDict {
/// Create a new instance of [HashChunkDict].
pub fn new(digester: digest::Algorithm) -> Self {
HashChunkDict {
m: Default::default(),
blobs: vec![],
blob_idx_m: Mutex::new(Default::default()),
digester,
}
}
/// Get an immutable reference to the internal `HashMap`.
pub fn hashmap(&self) -> &HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)> {
&self.m
}
/// Parse commandline argument for chunk dictionary and load chunks into the dictionary.
pub fn from_commandline_arg(
arg: &str,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Arc<dyn ChunkDict>> {
let file_path = parse_chunk_dict_arg(arg)?;
HashChunkDict::from_bootstrap_file(&file_path, config, rafs_config)
.map(|d| Arc::new(d) as Arc<dyn ChunkDict>)
}
/// Load chunks from the RAFS filesystem into the chunk dictionary.
pub fn from_bootstrap_file(
path: &Path,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Self> {
let (rs, _) = RafsSuper::load_from_file(path, config, true)
.with_context(|| format!("failed to open bootstrap file {:?}", path))?;
let mut d = HashChunkDict {
m: HashMap::new(),
blobs: rs.superblock.get_blob_infos(),
blob_idx_m: Mutex::new(BTreeMap::new()),
digester: rafs_config.digester,
};
rafs_config.check_compatibility(&rs.meta)?;
if rs.meta.is_v5() || rs.meta.has_inlined_chunk_digest() {
Tree::from_bootstrap(&rs, &mut d).context("failed to build tree from bootstrap")?;
} else if rs.meta.is_v6() {
d.load_chunk_table(&rs)
.context("failed to load chunk table")?;
} else {
unimplemented!()
}
Ok(d)
}
fn load_chunk_table(&mut self, rs: &RafsSuper) -> Result<()> {
let size = rs.meta.chunk_table_size as usize;
if size == 0 || self.digester != rs.meta.get_digester() {
return Ok(());
}
let unit_size = size_of::<RafsV5ChunkInfo>();
if size % unit_size != 0 {
return Err(std::io::Error::from_raw_os_error(libc::EINVAL)).with_context(|| {
format!(
"load_chunk_table: invalid rafs v6 chunk table size {}",
size
)
});
}
for idx in 0..(size / unit_size) {
let chunk = rs.superblock.get_chunk_info(idx)?;
let chunk_info = Arc::new(ChunkWrapper::from_chunk_info(chunk));
self.add_chunk(chunk_info, self.digester);
}
Ok(())
}
}
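// Same-module sketch (mirroring the unit tests below, not new builder logic):
// the dictionary caches a chunk by digest on first sight, so later lookups by
// digest can reuse the cached chunk instead of packing identical data again.
use nydus_rafs::metadata::RafsVersion;

fn dedup_sketch() {
    let mut dict = HashChunkDict::new(digest::Algorithm::Sha256);
    let chunk = Arc::new(ChunkWrapper::new(RafsVersion::V5));
    // First sighting: cache the chunk keyed by its digest.
    dict.add_chunk(chunk.clone(), digest::Algorithm::Sha256);
    // Later sightings: resolve by digest instead of storing the data again.
    assert!(dict.get_chunk(chunk.id(), 0).is_some());
}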
/// Parse a chunk dictionary argument string.
///
/// # Argument
/// `arg` may be in the form of:
/// - type=path: type of the external source and the corresponding path
/// - path: type defaults to "bootstrap"
///
/// for example:
/// bootstrap=image.boot
/// image.boot
/// ~/image/image.boot
/// boltdb=/var/db/dict.db (not supported yet)
pub fn parse_chunk_dict_arg(arg: &str) -> Result<PathBuf> {
let (file_type, file_path) = match arg.find('=') {
None => ("bootstrap", arg),
Some(idx) => (&arg[0..idx], &arg[idx + 1..]),
};
debug!("parse chunk dict argument {}={}", file_type, file_path);
match file_type {
"bootstrap" => Ok(PathBuf::from(file_path)),
_ => bail!("invalid chunk dict type {}", file_type),
}
}
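// Hedged usage sketch for parse_chunk_dict_arg(), exercising the argument
// forms listed in the doc comment above (same-module context assumed).
fn parse_chunk_dict_examples() -> Result<()> {
    // Explicit "type=path" form.
    assert_eq!(
        parse_chunk_dict_arg("bootstrap=image.boot")?,
        PathBuf::from("image.boot")
    );
    // A bare path defaults to the "bootstrap" type.
    assert_eq!(
        parse_chunk_dict_arg("~/image/image.boot")?,
        PathBuf::from("~/image/image.boot")
    );
    // Unsupported types such as "boltdb" are rejected for now.
    assert!(parse_chunk_dict_arg("boltdb=/var/db/dict.db").is_err());
    Ok(())
}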
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_utils::{compress, digest};
use std::path::PathBuf;
#[test]
fn test_null_dict() {
let mut dict = Box::new(()) as Box<dyn ChunkDict>;
let chunk = Arc::new(ChunkWrapper::new(RafsVersion::V5));
dict.add_chunk(chunk.clone(), digest::Algorithm::Sha256);
assert!(dict.get_chunk(chunk.id(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 0);
assert_eq!(dict.get_real_blob_idx(5).unwrap(), 5);
}
#[test]
fn test_chunk_dict() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path = PathBuf::from(root_dir);
source_path.push("../tests/texture/bootstrap/rafs-v5.boot");
let path = source_path.to_str().unwrap();
let rafs_config = RafsSuperConfig {
version: RafsVersion::V5,
compressor: compress::Algorithm::Lz4Block,
digester: digest::Algorithm::Blake3,
chunk_size: 0x100000,
batch_size: 0,
explicit_uidgid: true,
is_tarfs_mode: false,
};
let dict =
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
.unwrap();
assert!(dict.get_chunk(&RafsDigest::default(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 18);
dict.set_real_blob_idx(0, 10);
assert_eq!(dict.get_real_blob_idx(0), Some(10));
assert_eq!(dict.get_real_blob_idx(1), None);
}
}

File diff suppressed because it is too large


@@ -1,94 +0,0 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashSet;
use std::convert::TryFrom;
use anyhow::{bail, Result};
const ERR_UNSUPPORTED_FEATURE: &str = "unsupported feature";
/// Feature flags to control behavior of RAFS filesystem builder.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum Feature {
/// Append a Table Of Content footer to RAFS v6 data blob, to help locate data sections.
BlobToc,
}
impl TryFrom<&str> for Feature {
type Error = anyhow::Error;
fn try_from(f: &str) -> Result<Self> {
match f {
"blob-toc" => Ok(Self::BlobToc),
_ => bail!(
"{} `{}`, please try upgrading to the latest nydus-image",
ERR_UNSUPPORTED_FEATURE,
f,
),
}
}
}
/// A set of enabled feature flags to control behavior of RAFS filesystem builder
#[derive(Clone, Debug)]
pub struct Features(HashSet<Feature>);
impl Default for Features {
fn default() -> Self {
Self::new()
}
}
impl Features {
/// Create a new instance of [Features].
pub fn new() -> Self {
Self(HashSet::new())
}
/// Check whether a feature is enabled or not.
pub fn is_enabled(&self, feature: Feature) -> bool {
self.0.contains(&feature)
}
}
impl TryFrom<&str> for Features {
type Error = anyhow::Error;
fn try_from(features: &str) -> Result<Self> {
let mut list = Features::new();
for feat in features.trim().split(',') {
if !feat.is_empty() {
let feature = Feature::try_from(feat.trim())?;
list.0.insert(feature);
}
}
Ok(list)
}
}
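// Hypothetical usage sketch: parse a comma-separated feature list, as it
// might arrive from a CLI flag, and query a single capability.
fn blob_toc_enabled(flag_value: &str) -> Result<bool> {
    let features = Features::try_from(flag_value)?; // e.g. "blob-toc"
    Ok(features.is_enabled(Feature::BlobToc))
}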
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_feature() {
assert_eq!(Feature::try_from("blob-toc").unwrap(), Feature::BlobToc);
Feature::try_from("unknown-feature-bit").unwrap_err();
}
#[test]
fn test_features() {
let features = Features::try_from("blob-toc").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc,").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc, ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from(" blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
}
}


@@ -1,62 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::Result;
use std::ops::Deref;
use super::node::Node;
use crate::{Overlay, Prefetch, TreeNode};
#[derive(Clone)]
pub struct BlobLayout {}
impl BlobLayout {
pub fn layout_blob_simple(prefetch: &Prefetch) -> Result<(Vec<TreeNode>, usize)> {
let (pre, non_pre) = prefetch.get_file_nodes();
let mut inodes: Vec<TreeNode> = pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let mut non_prefetch_inodes: Vec<TreeNode> = non_pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let prefetch_entries = inodes.len();
inodes.append(&mut non_prefetch_inodes);
Ok((inodes, prefetch_entries))
}
#[inline]
fn should_dump_node(node: &Node) -> bool {
node.overlay == Overlay::UpperAddition || node.overlay == Overlay::UpperModification
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{core::node::NodeInfo, Tree};
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
#[test]
fn test_layout_blob_simple() {
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let mut node1 = Node::new(inode.clone(), NodeInfo::default(), 1);
node1.overlay = Overlay::UpperAddition;
let tree = Tree::new(node1);
let mut prefetch = Prefetch::default();
prefetch.insert(&tree.node, tree.node.borrow().deref());
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap();
assert_eq!(inodes.len(), 1);
assert_eq!(prefetch_entries, 0);
}
}

File diff suppressed because it is too large


@@ -1,361 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021-2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Execute file/directory whiteout rules when merging multiple RAFS filesystems
//! according to the OCI or Overlayfs specifications.
use std::ffi::{OsStr, OsString};
use std::fmt::{self, Display, Formatter};
use std::os::unix::ffi::OsStrExt;
use std::str::FromStr;
use anyhow::{anyhow, Error, Result};
use super::node::Node;
/// Prefix for OCI whiteout file.
pub const OCISPEC_WHITEOUT_PREFIX: &str = ".wh.";
/// Prefix for OCI whiteout opaque.
pub const OCISPEC_WHITEOUT_OPAQUE: &str = ".wh..wh..opq";
/// Extended attribute key for Overlayfs whiteout opaque.
pub const OVERLAYFS_WHITEOUT_OPAQUE: &str = "trusted.overlay.opaque";
/// RAFS filesystem overlay specifications.
///
/// When merging multiple RAFS filesystems into one, special rules are needed to white out
/// files/directories in lower/parent filesystems. The whiteout specification defined by the
/// OCI image specification and Linux Overlayfs are widely adopted, so both of them are supported
/// by RAFS filesystem.
///
/// # Overlayfs Whiteout
///
/// In order to support rm and rmdir without changing the lower filesystem, an overlay filesystem
/// needs to record in the upper filesystem that files have been removed. This is done using
/// whiteouts and opaque directories (non-directories are always opaque).
///
/// A whiteout is created as a character device with 0/0 device number. When a whiteout is found
/// in the upper level of a merged directory, any matching name in the lower level is ignored,
/// and the whiteout itself is also hidden.
///
/// A directory is made opaque by setting the xattr “trusted.overlay.opaque” to “y”. Where the upper
/// filesystem contains an opaque directory, any directory in the lower filesystem with the same
/// name is ignored.
///
/// # OCI Image Whiteout
/// - A whiteout file is an empty file with a special filename that signifies a path should be
/// deleted.
/// - A whiteout filename consists of the prefix .wh. plus the basename of the path to be deleted.
/// - As files prefixed with .wh. are special whiteout markers, it is not possible to create a
/// filesystem which has a file or directory with a name beginning with .wh..
/// - Once a whiteout is applied, the whiteout itself MUST also be hidden.
/// - Whiteout files MUST only apply to resources in lower/parent layers.
/// - Files that are present in the same layer as a whiteout file can only be hidden by whiteout
/// files in subsequent layers.
/// - In addition to expressing that a single entry should be removed from a lower layer, layers
/// may remove all of the children using an opaque whiteout entry.
/// - An opaque whiteout entry is a file with the name .wh..wh..opq indicating that all siblings
/// are hidden in the lower layer.
#[derive(Clone, Copy, PartialEq)]
pub enum WhiteoutSpec {
/// Overlay whiteout rules according to the OCI image specification.
///
/// https://github.com/opencontainers/image-spec/blob/master/layer.md#whiteouts
Oci,
/// Overlay whiteout rules according to the Linux Overlayfs specification.
///
/// "whiteouts and opaque directories" in https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
Overlayfs,
/// No whiteout, keep all content from lower/parent filesystems.
None,
}
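// Illustrative sketch of the OCI naming rules described above, using the
// constants defined in this module (the builder's real code path lives in
// whiteout_type() and origin_name() below).
fn oci_whiteout_demo() {
    // ".wh.foo" marks "foo" in a lower layer for deletion...
    let name = ".wh.foo";
    assert!(name.starts_with(OCISPEC_WHITEOUT_PREFIX));
    // ...and the original name is recovered by stripping the prefix.
    assert_eq!(&name[OCISPEC_WHITEOUT_PREFIX.len()..], "foo");
    // ".wh..wh..opq" hides all siblings coming from lower layers.
    assert_eq!(OCISPEC_WHITEOUT_OPAQUE, ".wh..wh..opq");
}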
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec {
fn default() -> Self {
Self::Oci
}
}
impl FromStr for WhiteoutSpec {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.to_lowercase().as_str() {
"oci" => Ok(Self::Oci),
"overlayfs" => Ok(Self::Overlayfs),
"none" => Ok(Self::None),
_ => Err(anyhow!("invalid whiteout spec")),
}
}
}
/// RAFS filesystem overlay operation types.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WhiteoutType {
OciOpaque,
OciRemoval,
OverlayFsOpaque,
OverlayFsRemoval,
}
impl WhiteoutType {
pub fn is_removal(&self) -> bool {
*self == WhiteoutType::OciRemoval || *self == WhiteoutType::OverlayFsRemoval
}
}
/// RAFS filesystem node overlay state.
#[allow(dead_code)]
#[derive(Clone, Debug, PartialEq)]
pub enum Overlay {
Lower,
UpperAddition,
UpperModification,
}
impl Overlay {
pub fn is_lower_layer(&self) -> bool {
self == &Overlay::Lower
}
}
impl Display for Overlay {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
match self {
Overlay::Lower => write!(f, "LOWER"),
Overlay::UpperAddition => write!(f, "ADDED"),
Overlay::UpperModification => write!(f, "MODIFIED"),
}
}
}
impl Node {
/// Check whether the inode is a special overlayfs whiteout file.
pub fn is_overlayfs_whiteout(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs {
return false;
}
self.inode.is_chrdev()
&& nydus_utils::compact::major_dev(self.info.rdev) == 0
&& nydus_utils::compact::minor_dev(self.info.rdev) == 0
}
/// Check whether the inode (directory) is an overlayfs whiteout opaque directory.
pub fn is_overlayfs_opaque(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs || !self.is_dir() {
return false;
}
// A directory is made opaque by setting the xattr "trusted.overlay.opaque" to "y".
if let Some(v) = self
.info
.xattrs
.get(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE))
{
if let Ok(v) = std::str::from_utf8(v.as_slice()) {
return v == "y";
}
}
false
}
/// Get whiteout type to process the inode.
pub fn whiteout_type(&self, spec: WhiteoutSpec) -> Option<WhiteoutType> {
if self.overlay == Overlay::Lower {
return None;
}
match spec {
WhiteoutSpec::Oci => {
if let Some(name) = self.name().to_str() {
if name == OCISPEC_WHITEOUT_OPAQUE {
return Some(WhiteoutType::OciOpaque);
} else if name.starts_with(OCISPEC_WHITEOUT_PREFIX) {
return Some(WhiteoutType::OciRemoval);
}
}
}
WhiteoutSpec::Overlayfs => {
if self.is_overlayfs_whiteout(spec) {
return Some(WhiteoutType::OverlayFsRemoval);
} else if self.is_overlayfs_opaque(spec) {
return Some(WhiteoutType::OverlayFsOpaque);
}
}
WhiteoutSpec::None => {
return None;
}
}
None
}
/// Get original filename from a whiteout filename.
pub fn origin_name(&self, t: WhiteoutType) -> Option<&OsStr> {
if let Some(name) = self.name().to_str() {
if t == WhiteoutType::OciRemoval {
// the whiteout filename is the basename of the path to be deleted, prefixed with ".wh.".
return Some(OsStr::from_bytes(
name[OCISPEC_WHITEOUT_PREFIX.len()..].as_bytes(),
));
} else if t == WhiteoutType::OverlayFsRemoval {
// the whiteout file has the same name as the file to be deleted.
return Some(name.as_ref());
}
}
None
}
}
#[cfg(test)]
mod tests {
use nydus_rafs::metadata::{inode::InodeWrapper, layout::v5::RafsV5Inode};
use crate::core::node::NodeInfo;
use super::*;
#[test]
fn test_white_spec_from_str() {
let spec = WhiteoutSpec::default();
assert!(matches!(spec, WhiteoutSpec::Oci));
assert!(WhiteoutSpec::from_str("oci").is_ok());
assert!(WhiteoutSpec::from_str("overlayfs").is_ok());
assert!(WhiteoutSpec::from_str("none").is_ok());
assert!(WhiteoutSpec::from_str("foo").is_err());
}
#[test]
fn test_white_type_removal_check() {
let t1 = WhiteoutType::OciOpaque;
let t2 = WhiteoutType::OciRemoval;
let t3 = WhiteoutType::OverlayFsOpaque;
let t4 = WhiteoutType::OverlayFsRemoval;
assert!(!t1.is_removal());
assert!(t2.is_removal());
assert!(!t3.is_removal());
assert!(t4.is_removal());
}
#[test]
fn test_overlay_low_layer_check() {
let t1 = Overlay::Lower;
let t2 = Overlay::UpperAddition;
let t3 = Overlay::UpperModification;
assert!(t1.is_lower_layer());
assert!(!t2.is_lower_layer());
assert!(!t3.is_lower_layer());
}
#[test]
fn test_node() {
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, NodeInfo::default(), 0);
assert!(!node.is_overlayfs_whiteout(WhiteoutSpec::None));
assert!(node.is_overlayfs_whiteout(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsRemoval
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info: NodeInfo = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsOpaque
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let inode = InodeWrapper::V5(RafsV5Inode::default());
let info = NodeInfo::default();
let mut node = Node::new(inode, info, 0);
assert_eq!(node.whiteout_type(WhiteoutSpec::None), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Oci), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
node.overlay = Overlay::Lower;
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
let name = OCISPEC_WHITEOUT_PREFIX.to_string() + "foo";
info.target_vec.push(name.clone().into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciRemoval
);
assert_eq!(node.origin_name(WhiteoutType::OciRemoval).unwrap(), "foo");
assert_eq!(node.origin_name(WhiteoutType::OciOpaque), None);
assert_eq!(
node.origin_name(WhiteoutType::OverlayFsRemoval).unwrap(),
OsStr::new(&name)
);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
info.target_vec.push(OCISPEC_WHITEOUT_OPAQUE.into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciOpaque
);
}
}


@@ -1,391 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::path::PathBuf;
use std::str::FromStr;
use anyhow::{anyhow, Context, Error, Result};
use indexmap::IndexMap;
use nydus_rafs::metadata::layout::v5::RafsV5PrefetchTable;
use nydus_rafs::metadata::layout::v6::{calculate_nid, RafsV6PrefetchTable};
use super::node::Node;
use crate::core::tree::TreeNode;
/// Filesystem data prefetch policy.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum PrefetchPolicy {
None,
/// Prefetch will be issued from the Fs layer, which leverages inode/chunkinfo to prefetch data
/// from the blob no matter where it resides (OSS/Localfs). Basically, it is willing to cache the
/// data into the blobcache (if one exists). It's more flexible. With this policy applied, the
/// image builder currently puts prefetch files' data into a contiguous region within the blob,
/// which behaves very similarly to the `Blob` policy.
Fs,
/// Prefetch will be issued directly from backend/blob layer
Blob,
}
impl Default for PrefetchPolicy {
fn default() -> Self {
Self::None
}
}
impl FromStr for PrefetchPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"none" => Ok(Self::None),
"fs" => Ok(Self::Fs),
"blob" => Ok(Self::Blob),
_ => Err(anyhow!("invalid prefetch policy")),
}
}
}
/// Gather prefetch patterns from STDIN line by line.
///
/// Input format:
/// printf "/relative/path/to/rootfs/1\n/relative/path/to/rootfs/2"
///
/// It does not guarantee that the specified paths exist in the local filesystem, because they
/// may exist only in parent images/layers.
fn get_patterns() -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let stdin = std::io::stdin();
let mut patterns = Vec::new();
loop {
let mut file = String::new();
let size = stdin
.read_line(&mut file)
.context("failed to read prefetch pattern")?;
if size == 0 {
return generate_patterns(patterns);
}
patterns.push(file);
}
}
fn generate_patterns(input: Vec<String>) -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let mut patterns = IndexMap::new();
for file in &input {
let file_trimmed: PathBuf = file.trim().into();
// Sanity check for the list format.
if !file_trimmed.is_absolute() {
warn!(
"Illegal file path {} specified, should be absolute path",
file
);
continue;
}
let mut current_path = file_trimmed.clone();
let mut skip = patterns.contains_key(&current_path);
while !skip && current_path.pop() {
if patterns.contains_key(&current_path) {
skip = true;
break;
}
}
if skip {
warn!(
"prefetch pattern {} is covered by previous pattern and thus omitted",
file
);
} else {
debug!(
"prefetch pattern: {}, trimmed file name {:?}",
file, file_trimmed
);
patterns.insert(file_trimmed, None);
}
}
Ok(patterns)
}
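// Same-module sketch of the folding rule implemented above: a pattern covered
// by an already-seen ancestor pattern is dropped, and non-absolute paths are
// rejected with a warning.
fn pattern_folding_demo() -> Result<()> {
    let patterns = generate_patterns(vec![
        "/usr".to_string(),
        "/usr/bin/bash".to_string(), // covered by "/usr", omitted
        "relative/path".to_string(), // not absolute, skipped with a warning
    ])?;
    assert_eq!(patterns.len(), 1);
    assert!(patterns.contains_key(&PathBuf::from("/usr")));
    Ok(())
}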
/// Manage filesystem data prefetch configuration and state for builder.
#[derive(Default, Clone)]
pub struct Prefetch {
pub policy: PrefetchPolicy,
pub disabled: bool,
// Patterns to generate prefetch inode array, which will be put into the prefetch array
// in the RAFS bootstrap. It may access directory or file inodes.
patterns: IndexMap<PathBuf, Option<TreeNode>>,
// File list to help optimizing layout of data blobs.
// Files from this list may be put at the head of data blob for better prefetch performance,
// The index of matched prefetch pattern is stored in `usize`,
// which will help to sort the prefetch files in the final layout.
// It only stores regular files.
files_prefetch: Vec<(TreeNode, usize)>,
// It stores all non-prefetch files that are not stored in `files_prefetch`,
// including regular files, dirs, symlinks, etc.,
// in the same order as a BFS traversal of the file tree.
files_non_prefetch: Vec<TreeNode>,
}
impl Prefetch {
/// Create a new instance of [Prefetch].
pub fn new(policy: PrefetchPolicy) -> Result<Self> {
let patterns = if policy != PrefetchPolicy::None {
get_patterns().context("failed to get prefetch patterns")?
} else {
IndexMap::new()
};
Ok(Self {
policy,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10000),
files_non_prefetch: Vec::with_capacity(10000),
})
}
/// Insert a node into the prefetch Vec if it matches a prefetch rule, recording the index
/// of the matched prefetch pattern; otherwise insert it into the non-prefetch Vec.
pub fn insert(&mut self, obj: &TreeNode, node: &Node) {
// Newly created root inode of this rafs has zero size
if self.policy == PrefetchPolicy::None
|| self.disabled
|| (node.inode.is_reg() && node.inode.size() == 0)
{
self.files_non_prefetch.push(obj.clone());
return;
}
let mut path = node.target().clone();
let mut exact_match = true;
loop {
if let Some((idx, _, v)) = self.patterns.get_full_mut(&path) {
if exact_match {
*v = Some(obj.clone());
}
if node.is_reg() {
self.files_prefetch.push((obj.clone(), idx));
} else {
self.files_non_prefetch.push(obj.clone());
}
return;
}
// If no exact match, try to match parent dir until root.
if !path.pop() {
self.files_non_prefetch.push(obj.clone());
return;
}
exact_match = false;
}
}
/// Get node Vector of files in the prefetch list and non-prefetch list.
/// The order of prefetch files is the same as the order of prefetch patterns.
/// The order of non-prefetch files is the same as the order of BFS traversal of file tree.
pub fn get_file_nodes(&self) -> (Vec<TreeNode>, Vec<TreeNode>) {
let mut p_files = self.files_prefetch.clone();
p_files.sort_by_key(|k| k.1);
let p_files = p_files.into_iter().map(|(s, _)| s).collect();
(p_files, self.files_non_prefetch.clone())
}
/// Get the number of ``valid`` prefetch rules.
pub fn fs_prefetch_rule_count(&self) -> u32 {
if self.policy == PrefetchPolicy::Fs {
self.patterns.values().filter(|v| v.is_some()).count() as u32
} else {
0
}
}
/// Generate filesystem layer prefetch list for RAFS v5.
pub fn get_v5_prefetch_table(&mut self) -> Option<RafsV5PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV5PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
assert!(node.inode.ino() < u32::MAX as u64);
prefetch_table.add_entry(node.inode.ino() as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Generate filesystem layer prefetch list for RAFS v6.
pub fn get_v6_prefetch_table(&mut self, meta_addr: u64) -> Option<RafsV6PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV6PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
let ino = node.inode.ino();
debug_assert!(ino > 0);
let nid = calculate_nid(node.v6_offset, meta_addr);
// A 32-bit nid can represent a 128GB bootstrap, which is large enough; no need
// to worry about the cast here.
assert!(nid < u32::MAX as u64);
trace!(
"v6 prefetch table: map node index {} to offset {} nid {} path {:?} name {:?}",
ino,
node.v6_offset,
nid,
node.path(),
node.name()
);
prefetch_table.add_entry(nid as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Disable filesystem data prefetch.
pub fn disable(&mut self) {
self.disabled = true;
}
/// Reset to initialization state.
pub fn clear(&mut self) {
self.disabled = false;
self.patterns.clear();
self.files_prefetch.clear();
self.files_non_prefetch.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::node::NodeInfo;
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
use std::cell::RefCell;
#[test]
fn test_generate_pattern() {
let input = vec![
"/a/b".to_string(),
"/a/b/c".to_string(),
"/a/b/d".to_string(),
"/a/b/d/e".to_string(),
"/f".to_string(),
"/h/i".to_string(),
];
let patterns = generate_patterns(input).unwrap();
assert_eq!(patterns.len(), 3);
assert!(patterns.contains_key(&PathBuf::from("/a/b")));
assert!(patterns.contains_key(&PathBuf::from("/f")));
assert!(patterns.contains_key(&PathBuf::from("/h/i")));
assert!(!patterns.contains_key(&PathBuf::from("/")));
assert!(!patterns.contains_key(&PathBuf::from("/a")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/c")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d/e")));
assert!(!patterns.contains_key(&PathBuf::from("/k")));
}
#[test]
fn test_prefetch_policy() {
let policy = PrefetchPolicy::from_str("fs").unwrap();
assert_eq!(policy, PrefetchPolicy::Fs);
let policy = PrefetchPolicy::from_str("blob").unwrap();
assert_eq!(policy, PrefetchPolicy::Blob);
let policy = PrefetchPolicy::from_str("none").unwrap();
assert_eq!(policy, PrefetchPolicy::None);
PrefetchPolicy::from_str("").unwrap_err();
PrefetchPolicy::from_str("invalid").unwrap_err();
}
#[test]
fn test_prefetch() {
let input = vec![
"/a/b".to_string(),
"/f".to_string(),
"/h/i".to_string(),
"/k".to_string(),
];
let patterns = generate_patterns(input).unwrap();
let mut prefetch = Prefetch {
policy: PrefetchPolicy::Fs,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10),
files_non_prefetch: Vec::with_capacity(10),
};
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let info = NodeInfo::default();
let mut info1 = info.clone();
info1.target = PathBuf::from("/f");
let node1 = Node::new(inode.clone(), info1, 1);
let node1 = TreeNode::new(RefCell::from(node1));
prefetch.insert(&node1, &node1.borrow());
let inode2 = inode.clone();
let mut info2 = info.clone();
info2.target = PathBuf::from("/a/b");
let node2 = Node::new(inode2, info2, 1);
let node2 = TreeNode::new(RefCell::from(node2));
prefetch.insert(&node2, &node2.borrow());
let inode3 = inode.clone();
let mut info3 = info.clone();
info3.target = PathBuf::from("/h/i/j");
let node3 = Node::new(inode3, info3, 1);
let node3 = TreeNode::new(RefCell::from(node3));
prefetch.insert(&node3, &node3.borrow());
let inode4 = inode.clone();
let mut info4 = info.clone();
info4.target = PathBuf::from("/z");
let node4 = Node::new(inode4, info4, 1);
let node4 = TreeNode::new(RefCell::from(node4));
prefetch.insert(&node4, &node4.borrow());
let inode5 = inode.clone();
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_size(0);
let mut info5 = info;
info5.target = PathBuf::from("/a/b/d");
let node5 = Node::new(inode5, info5, 1);
let node5 = TreeNode::new(RefCell::from(node5));
prefetch.insert(&node5, &node5.borrow());
// node1, node2
assert_eq!(prefetch.fs_prefetch_rule_count(), 2);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 4);
assert_eq!(non_pre.len(), 1);
let pre_str: Vec<String> = pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(pre_str, vec!["/a/b", "/a/b/d", "/f", "/h/i/j"]);
let non_pre_str: Vec<String> = non_pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(non_pre_str, vec!["/z"]);
prefetch.clear();
assert_eq!(prefetch.fs_prefetch_rule_count(), 0);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 0);
assert_eq!(non_pre.len(), 0);
}
}


@@ -1,533 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! An in-memory tree structure to maintain information for filesystem metadata.
//!
//! Steps to build the first layer for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Traverse the upper tree (FileSystemTree) to dump bootstrap and data blobs.
//!
//! Steps to build the second and following on layers for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Load the lower tree (MetadataTree) from a metadata blob.
//! - Merge the final tree (OverlayTree) by applying the upper tree (FileSystemTree) to the
//! lower tree (MetadataTree).
//! - Traverse the merged tree (OverlayTree) to dump bootstrap and data blobs.
use std::cell::{RefCell, RefMut};
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use anyhow::{bail, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::{bytes_to_os_str, RafsXAttrs};
use nydus_rafs::metadata::{Inode, RafsInodeExt, RafsSuper};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer};
use super::node::{ChunkSource, Node, NodeChunk, NodeInfo};
use super::overlay::{Overlay, WhiteoutType};
use crate::core::overlay::OVERLAYFS_WHITEOUT_OPAQUE;
use crate::{BuildContext, ChunkDict};
/// Type alias for tree internal node.
pub type TreeNode = Rc<RefCell<Node>>;
/// An in-memory tree structure to maintain information and topology of filesystem nodes.
#[derive(Clone)]
pub struct Tree {
/// Filesystem node.
pub node: TreeNode,
/// Cached base name.
name: Vec<u8>,
/// Children tree nodes.
pub children: Vec<Tree>,
}
impl Tree {
/// Create a new instance of `Tree` from a filesystem node.
pub fn new(node: Node) -> Self {
let name = node.name().as_bytes().to_vec();
Tree {
node: Rc::new(RefCell::new(node)),
name,
children: Vec::new(),
}
}
/// Load a `Tree` from a bootstrap file, optionally caching chunk information.
pub fn from_bootstrap<T: ChunkDict>(rs: &RafsSuper, chunk_dict: &mut T) -> Result<Self> {
let tree_builder = MetadataTreeBuilder::new(rs);
let root_ino = rs.superblock.root_ino();
let root_inode = rs.get_extended_inode(root_ino, true)?;
let root_node = MetadataTreeBuilder::parse_node(rs, root_inode, PathBuf::from("/"))?;
let mut tree = Tree::new(root_node);
tree.children = timing_tracer!(
{ tree_builder.load_children(root_ino, Option::<PathBuf>::None, chunk_dict, true,) },
"load_tree_from_bootstrap"
)?;
Ok(tree)
}
/// Get name of the tree node.
pub fn name(&self) -> &[u8] {
&self.name
}
/// Set `Node` associated with the tree node.
pub fn set_node(&mut self, node: Node) {
self.node.replace(node);
}
/// Get mutably borrowed value to access the associated `Node` object.
pub fn borrow_mut_node(&self) -> RefMut<'_, Node> {
self.node.as_ref().borrow_mut()
}
/// Walk all nodes in DFS mode.
pub fn walk_dfs<F1, F2>(&self, pre: &mut F1, post: &mut F2) -> Result<()>
where
F1: FnMut(&Tree) -> Result<()>,
F2: FnMut(&Tree) -> Result<()>,
{
pre(self)?;
for child in &self.children {
child.walk_dfs(pre, post)?;
}
post(self)?;
Ok(())
}
/// Walk all nodes in pre-order DFS mode.
pub fn walk_dfs_pre<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(cb, &mut |_t| Ok(()))
}
/// Walk all nodes in post-order DFS mode.
pub fn walk_dfs_post<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(&mut |_t| Ok(()), cb)
}
/// Walk the tree in BFS mode.
pub fn walk_bfs<F>(&self, handle_self: bool, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
if handle_self {
cb(self)?;
}
let mut dirs = Vec::with_capacity(32);
for child in &self.children {
cb(child)?;
if child.borrow_mut_node().is_dir() {
dirs.push(child);
}
}
for dir in dirs {
dir.walk_bfs(false, cb)?;
}
Ok(())
}
/// Insert a new child node into the tree.
pub fn insert_child(&mut self, child: Tree) {
if let Err(idx) = self
.children
.binary_search_by_key(&&child.name, |n| &n.name)
{
self.children.insert(idx, child);
}
}
/// Get index of child node with specified `name`.
pub fn get_child_idx(&self, name: &[u8]) -> Option<usize> {
self.children.binary_search_by_key(&name, |n| &n.name).ok()
}
/// Get the tree node corresponding to the path.
pub fn get_node(&self, path: &Path) -> Option<&Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
for name in &target_vec[1..] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &tree.children[idx],
None => return None,
}
}
Some(tree)
}
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
/// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
assert_eq!(self.name, "/".as_bytes());
assert_eq!(upper.name, "/".as_bytes());
// Handle the root node.
upper.borrow_mut_node().overlay = Overlay::UpperModification;
self.node = upper.node.clone();
self.merge_children(ctx, &upper)?;
lazy_drop(upper);
Ok(())
}
fn merge_children(&mut self, ctx: &BuildContext, upper: &Tree) -> Result<()> {
// Handle whiteout nodes in the first round, and handle other nodes in the second round.
let mut modified = Vec::with_capacity(upper.children.len());
for u in upper.children.iter() {
let mut u_node = u.borrow_mut_node();
match u_node.whiteout_type(ctx.whiteout_spec) {
Some(WhiteoutType::OciRemoval) => {
if let Some(origin_name) = u_node.origin_name(WhiteoutType::OciRemoval) {
if let Some(idx) = self.get_child_idx(origin_name.as_bytes()) {
self.children.remove(idx);
}
}
}
Some(WhiteoutType::OciOpaque) => {
self.children.clear();
}
Some(WhiteoutType::OverlayFsRemoval) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children.remove(idx);
}
}
Some(WhiteoutType::OverlayFsOpaque) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children[idx].children.clear();
}
u_node.remove_xattr(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE));
modified.push(u);
}
None => modified.push(u),
}
}
let mut dirs = Vec::new();
for u in modified {
let mut u_node = u.borrow_mut_node();
if let Some(idx) = self.get_child_idx(&u.name) {
u_node.overlay = Overlay::UpperModification;
self.children[idx].node = u.node.clone();
} else {
u_node.overlay = Overlay::UpperAddition;
self.insert_child(Tree {
node: u.node.clone(),
name: u.name.clone(),
children: vec![],
});
}
if u_node.is_dir() {
dirs.push(u);
}
}
for dir in dirs {
if let Some(idx) = self.get_child_idx(&dir.name) {
self.children[idx].merge_children(ctx, dir)?;
} else {
bail!("builder: can not find directory in merged tree");
}
}
Ok(())
}
}
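// Minimal sketch of the overlay merge described in the module docs (the
// context and both trees are assumed to already exist; both must be rooted
// at "/"). Whiteouts in `upper` remove or hide matching entries of `lower`,
// then the remaining upper nodes are recorded as additions or modifications.
fn merge_sketch(ctx: &BuildContext, mut lower: Tree, upper: Tree) -> Result<Tree> {
    lower.merge_overaly(ctx, upper)?;
    Ok(lower)
}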
pub struct MetadataTreeBuilder<'a> {
rs: &'a RafsSuper,
}
impl<'a> MetadataTreeBuilder<'a> {
fn new(rs: &'a RafsSuper) -> Self {
Self { rs }
}
/// Build the node tree by loading a bootstrap file
fn load_children<T: ChunkDict, P: AsRef<Path>>(
&self,
ino: Inode,
parent: Option<P>,
chunk_dict: &mut T,
validate_digest: bool,
) -> Result<Vec<Tree>> {
let inode = self.rs.get_extended_inode(ino, validate_digest)?;
if !inode.is_dir() {
return Ok(Vec::new());
}
let parent_path = if let Some(parent) = parent {
parent.as_ref().join(inode.name())
} else {
PathBuf::from("/")
};
let blobs = self.rs.superblock.get_blob_infos();
let child_count = inode.get_child_count();
let mut children = Vec::with_capacity(child_count as usize);
for idx in 0..child_count {
let child = inode.get_child_by_index(idx)?;
let child_path = parent_path.join(child.name());
let child = Self::parse_node(self.rs, child.clone(), child_path)?;
if child.is_reg() {
for chunk in &child.chunks {
let blob_idx = chunk.inner.blob_index();
if let Some(blob) = blobs.get(blob_idx as usize) {
chunk_dict.add_chunk(chunk.inner.clone(), blob.digester());
}
}
}
let child = Tree::new(child);
children.push(child);
}
children.sort_unstable_by(|a, b| a.name.cmp(&b.name));
for child in children.iter_mut() {
let child_node = child.borrow_mut_node();
if child_node.is_dir() {
let child_ino = child_node.inode.ino();
drop(child_node);
child.children =
self.load_children(child_ino, Some(&parent_path), chunk_dict, validate_digest)?;
}
}
Ok(children)
}
/// Convert a `RafsInode` object to an in-memory `Node` object.
pub fn parse_node(rs: &RafsSuper, inode: Arc<dyn RafsInodeExt>, path: PathBuf) -> Result<Node> {
let chunks = if inode.is_reg() {
let chunk_count = inode.get_chunk_count();
let mut chunks = Vec::with_capacity(chunk_count as usize);
for i in 0..chunk_count {
let cki = inode.get_chunk_info(i)?;
chunks.push(NodeChunk {
source: ChunkSource::Parent,
inner: Arc::new(ChunkWrapper::from_chunk_info(cki)),
});
}
chunks
} else {
Vec::new()
};
let symlink = if inode.is_symlink() {
Some(inode.get_symlink()?)
} else {
None
};
let mut xattrs = RafsXAttrs::new();
for name in inode.get_xattrs()? {
let name = bytes_to_os_str(&name);
let value = inode.get_xattr(name)?;
xattrs.add(name.to_os_string(), value.unwrap_or_default())?;
}
// Nodes loaded from bootstrap will only be used as `Overlay::Lower`, so make `dev` invalid
// to avoid breaking the hardlink detection logic.
let src_dev = u64::MAX;
let rdev = inode.rdev() as u64;
let inode = InodeWrapper::from_inode_info(inode.clone());
let source = PathBuf::from("/");
let target = Node::generate_target(&path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: rs.meta.explicit_uidgid(),
src_ino: inode.ino(),
src_dev,
rdev,
path,
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
Ok(Node {
info: Arc::new(info),
index: 0,
layer_idx: 0,
overlay: Overlay::Lower,
inode,
chunks,
v6_offset: 0,
v6_dirents: Vec::new(),
v6_datalayout: 0,
v6_compact_inode: false,
v6_dirents_offset: 0,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::RAFS_DEFAULT_CHUNK_SIZE;
use vmm_sys_util::tempdir::TempDir;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_set_lock_node() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
assert_eq!(tree.name, tmpfile.as_path().file_name().unwrap().as_bytes());
let node1 = tree.borrow_mut_node();
drop(node1);
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
tree.set_node(node);
let node2 = tree.borrow_mut_node();
assert_eq!(node2.name(), tmpfile.as_path().file_name().unwrap());
}
#[test]
fn test_walk_tree() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
let tmpfile2 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree2 = Tree::new(node);
tree.insert_child(tree2);
let tmpfile3 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile3.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree3 = Tree::new(node);
tree.insert_child(tree3);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 3);
let mut count = 0;
tree.walk_bfs(false, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 2);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
bail!("test")
})
.unwrap_err();
assert_eq!(count, 1);
let idx = tree
.get_child_idx(tmpfile2.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
let idx = tree
.get_child_idx(tmpfile3.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
}
}


@@ -1,266 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::convert::TryFrom;
use std::mem::size_of;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::v5::{
RafsV5BlobTable, RafsV5ChunkInfo, RafsV5InodeTable, RafsV5InodeWrapper, RafsV5SuperBlock,
RafsV5XAttrsTable,
};
use nydus_rafs::metadata::{RafsStore, RafsVersion};
use nydus_rafs::RafsIoWrite;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{div_round_up, root_tracer, timing_tracer, try_round_up_4k};
use super::node::Node;
use crate::{Bootstrap, BootstrapContext, BuildContext, Tree};
// Filesystems may have different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable build, instead of reusing the
// value provided by the source filesystem, we use our own algorithm to calculate `i_size`
// for directory entries, giving a stable `i_size`.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we don't
// have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5 directory
// inode.
const RAFS_V5_VIRTUAL_ENTRY_SIZE: u64 = 8;
impl Node {
/// Dump RAFS v5 inode metadata to meta blob.
pub fn dump_bootstrap_v5(
&self,
ctx: &mut BuildContext,
f_bootstrap: &mut dyn RafsIoWrite,
) -> Result<()> {
trace!("[{}]\t{}", self.overlay, self);
if let InodeWrapper::V5(raw_inode) = &self.inode {
// Dump inode info
let name = self.name();
let inode = RafsV5InodeWrapper {
name,
symlink: self.info.symlink.as_deref(),
inode: raw_inode,
};
inode
.store(f_bootstrap)
.context("failed to dump inode to bootstrap")?;
// Dump inode xattr
if !self.info.xattrs.is_empty() {
self.info
.xattrs
.store_v5(f_bootstrap)
.context("failed to dump xattr to bootstrap")?;
ctx.has_xattr = true;
}
// Dump chunk info
if self.is_reg() && self.inode.child_count() as usize != self.chunks.len() {
bail!("invalid chunk count {}: {}", self.chunks.len(), self);
}
for chunk in &self.chunks {
chunk
.inner
.store(f_bootstrap)
.context("failed to dump chunk info to bootstrap")?;
trace!("\t\tchunk: {} compressor {}", chunk, ctx.compressor,);
}
Ok(())
} else {
bail!("dump_bootstrap_v5() encounters non-v5-inode");
}
}
// Filesystems may have different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable build, instead of reusing the
// value provided by the source filesystem, we use our own algorithm to calculate `i_size`
// for directory entries, giving a stable `i_size`.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we
// don't have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5
// directory inode.
pub fn v5_set_dir_size(&mut self, fs_version: RafsVersion, children: &[Tree]) {
if !self.is_dir() || !fs_version.is_v5() {
return;
}
let mut d_size = 0u64;
for child in children.iter() {
d_size += child.borrow_mut_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
}
if d_size == 0 {
self.inode.set_size(4096);
} else {
// Safe to unwrap() because we have u32 for child count.
self.inode.set_size(try_round_up_4k(d_size).unwrap());
}
self.v5_set_inode_blocks();
}
/// Calculate and set `i_blocks` for inode.
///
/// In order to support repeatable build, we can't reuse `i_blocks` from source filesystems,
/// so let's calculate it by ourselves for a stable `i_blocks`.
///
/// Normal filesystem includes the space occupied by Xattr into the directory size,
/// let's follow the normal behavior.
pub fn v5_set_inode_blocks(&mut self) {
// Set inode blocks for RAFS v5 inode, v6 will calculate it at runtime.
if let InodeWrapper::V5(_) = self.inode {
self.inode.set_blocks(div_round_up(
self.inode.size() + self.info.xattrs.aligned_size_v5() as u64,
512,
));
}
}
}
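// Standalone restatement of the pseudo i_size rule above (illustrative
// arithmetic, not the builder's code): every child contributes
// name_size + RAFS_V5_VIRTUAL_ENTRY_SIZE bytes, the sum is rounded up to a
// 4KiB boundary, and 4096 is the floor for empty directories. For example,
// children "bin" (3) and "etc" (3) give d_size = 22, hence i_size = 4096.
fn v5_pseudo_dir_size(child_name_sizes: &[u64]) -> u64 {
    const VIRTUAL_ENTRY_SIZE: u64 = 8; // mirrors RAFS_V5_VIRTUAL_ENTRY_SIZE
    let d_size: u64 = child_name_sizes
        .iter()
        .map(|n| n + VIRTUAL_ENTRY_SIZE)
        .sum();
    if d_size == 0 {
        4096
    } else {
        (d_size + 4095) & !4095 // round up to a multiple of 4096
    }
}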
impl Bootstrap {
/// Calculate inode digest for directory.
fn v5_digest_node(&self, ctx: &mut BuildContext, tree: &Tree) {
let mut node = tree.borrow_mut_node();
// We have set digest for non-directory inode in the previous dump_blob workflow.
if node.is_dir() {
let mut inode_hasher = RafsDigest::hasher(ctx.digester);
for child in tree.children.iter() {
let child = child.borrow_mut_node();
inode_hasher.digest_update(child.inode.digest().as_ref());
}
node.inode.set_digest(inode_hasher.digest_finalize());
}
}
/// Dump the RAFS v5 bootstrap metadata to the meta blob.
pub(crate) fn v5_dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsV5BlobTable,
) -> Result<()> {
// Set inode digest, use reverse iteration order to reduce repeated digest calculations.
self.tree.walk_dfs_post(&mut |t| {
self.v5_digest_node(ctx, t);
Ok(())
})?;
// Set inode table
let super_block_size = size_of::<RafsV5SuperBlock>();
let inode_table_entries = bootstrap_ctx.get_next_ino() as u32 - 1;
let mut inode_table = RafsV5InodeTable::new(inode_table_entries as usize);
let inode_table_size = inode_table.size();
// Set prefetch table
let (prefetch_table_size, prefetch_table_entries) =
if let Some(prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
(prefetch_table.size(), prefetch_table.len() as u32)
} else {
(0, 0u32)
};
// Set blob table, use sha256 string (length 64) as blob id if not specified
let prefetch_table_offset = super_block_size + inode_table_size;
let blob_table_offset = prefetch_table_offset + prefetch_table_size;
let blob_table_size = blob_table.size();
let extended_blob_table_offset = blob_table_offset + blob_table_size;
let extended_blob_table_size = blob_table.extended.size();
let extended_blob_table_entries = blob_table.extended.entries();
// Set super block
let mut super_block = RafsV5SuperBlock::new();
let inodes_count = bootstrap_ctx.inode_map.len() as u64;
super_block.set_inodes_count(inodes_count);
super_block.set_inode_table_offset(super_block_size as u64);
super_block.set_inode_table_entries(inode_table_entries);
super_block.set_blob_table_offset(blob_table_offset as u64);
super_block.set_blob_table_size(blob_table_size as u32);
super_block.set_extended_blob_table_offset(extended_blob_table_offset as u64);
super_block.set_extended_blob_table_entries(u32::try_from(extended_blob_table_entries)?);
super_block.set_prefetch_table_offset(prefetch_table_offset as u64);
super_block.set_prefetch_table_entries(prefetch_table_entries);
super_block.set_compressor(ctx.compressor);
super_block.set_digester(ctx.digester);
super_block.set_chunk_size(ctx.chunk_size);
if ctx.explicit_uidgid {
super_block.set_explicit_uidgid();
}
// Set inodes and chunks
let mut inode_offset = (super_block_size
+ inode_table_size
+ prefetch_table_size
+ blob_table_size
+ extended_blob_table_size) as u32;
let mut has_xattr = false;
self.tree.walk_dfs_pre(&mut |t| {
let node = t.borrow_mut_node();
inode_table.set(node.index, inode_offset)?;
// Add inode size
inode_offset += node.inode.inode_size() as u32;
if node.inode.has_xattr() {
has_xattr = true;
if !node.info.xattrs.is_empty() {
inode_offset += (size_of::<RafsV5XAttrsTable>()
+ node.info.xattrs.aligned_size_v5())
as u32;
}
}
// Add chunks size
if node.is_reg() {
inode_offset += node.inode.child_count() * size_of::<RafsV5ChunkInfo>() as u32;
}
Ok(())
})?;
if has_xattr {
super_block.set_has_xattr();
}
// Dump super block
super_block
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store superblock")?;
// Dump inode table
inode_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store inode table")?;
// Dump prefetch table
if let Some(mut prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
prefetch_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store prefetch table")?;
}
// Dump blob table
blob_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store blob table")?;
// Dump extended blob table
blob_table
.store_extended(bootstrap_ctx.writer.as_mut())
.context("failed to store extended blob table")?;
// Dump inodes and chunks
timing_tracer!(
{
self.tree.walk_dfs_pre(&mut |t| {
t.borrow_mut_node()
.dump_bootstrap_v5(ctx, bootstrap_ctx.writer.as_mut())
.context("failed to dump bootstrap")
})
},
"dump_bootstrap"
)?;
Ok(())
}
}
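For orientation, the region offsets computed in `v5_dump()` above follow a fixed on-disk order. Below is a minimal sketch of the offset arithmetic with hypothetical region sizes (the real sizes come from the structures and tables being dumped):

fn v5_layout_sketch() {
    // On-disk order: superblock | inode table | prefetch table |
    // blob table | extended blob table | inodes and chunks.
    let super_block_size = 8192u64; // hypothetical
    let inode_table_size = 4096u64; // depends on inode count
    let prefetch_table_size = 64u64; // 0 when there is no prefetch table
    let blob_table_size = 256u64; // hypothetical
    let extended_blob_table_size = 512u64; // hypothetical

    let prefetch_table_offset = super_block_size + inode_table_size;
    let blob_table_offset = prefetch_table_offset + prefetch_table_size;
    let extended_blob_table_offset = blob_table_offset + blob_table_size;
    let first_inode_offset = extended_blob_table_offset + extended_blob_table_size;
    assert_eq!(first_inode_offset, 8192 + 4096 + 64 + 256 + 512);
}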

File diff suppressed because it is too large


@@ -1,267 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fs;
use std::fs::DirEntry;
use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
use crate::core::context::{Artifact, NoopArtifactWriter};
use crate::core::prefetch;
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
struct FilesystemTreeBuilder {}
impl FilesystemTreeBuilder {
fn new() -> Self {
Self {}
}
#[allow(clippy::only_used_in_recursion)]
/// Walk directory to build node tree by DFS
fn load_children(
&self,
ctx: &mut BuildContext,
parent: &TreeNode,
layer_idx: u16,
) -> Result<(Vec<Tree>, Vec<Tree>)> {
let mut trees = Vec::new();
let mut external_trees = Vec::new();
let parent = parent.borrow();
if !parent.is_dir() {
return Ok((trees.clone(), external_trees));
}
let children = fs::read_dir(parent.path())
.with_context(|| format!("failed to read dir {:?}", parent.path()))?;
let children = children.collect::<Result<Vec<DirEntry>, std::io::Error>>()?;
event_tracer!("load_from_directory", +children.len());
for child in children {
let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
file_size,
parent.info.explicit_uidgid,
true,
)
.with_context(|| format!("failed to create node {:?}", path))?;
child.layer_idx = layer_idx;
// As per the OCI spec, whiteout files should not be present in the final image
// or filesystem; they only exist in layers.
if layer_idx == 0
&& child.whiteout_type(ctx.whiteout_spec).is_some()
&& !child.is_overlayfs_opaque(ctx.whiteout_spec)
{
continue;
}
let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
let (child_children, external_children) =
self.load_children(ctx, &child.node, layer_idx)?;
child.children = child_children;
external_child.children = external_children;
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
external_child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_child.children);
if ctx.attributes.is_external(&target) {
external_trees.push(external_child);
} else {
// TODO: implement type=ignore for nydus attributes;
// for now, ignore the tree as a workaround.
trees.push(child.clone());
if ctx.attributes.is_prefix_external(target) {
external_trees.push(external_child);
}
};
}
trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
Ok((trees, external_trees))
}
}
#[derive(Default)]
pub struct DirectoryBuilder {}
impl DirectoryBuilder {
pub fn new() -> Self {
Self {}
}
/// Build node tree from a filesystem directory
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
let node = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
ctx.source_path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
0,
ctx.explicit_uidgid,
true,
)?;
let mut tree = Tree::new(node.clone());
let mut external_tree = Tree::new(node);
let tree_builder = FilesystemTreeBuilder::new();
let (tree_children, external_tree_children) = timing_tracer!(
{ tree_builder.load_children(ctx, &tree.node, layer_idx) },
"load_from_directory"
)?;
tree.children = tree_children;
external_tree.children = external_tree_children;
tree.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &tree.children);
external_tree
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
Ok((tree, external_tree))
}
fn one_build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
blob_writer: &mut Box<dyn Artifact>,
tree: Tree,
) -> Result<BuildOutput> {
// Build bootstrap
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build from the main tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build from the external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
}
}
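For reference, a minimal usage sketch of `DirectoryBuilder` under this module's own imports; the build context and managers are assumed to be configured elsewhere (as the test code in this patch does):

fn build_from_directory(
    ctx: &mut BuildContext,
    bootstrap_mgr: &mut BootstrapManager,
    blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
    // Scans ctx.source_path and produces the bootstrap plus data blob(s).
    let mut builder = DirectoryBuilder::new();
    builder.build(ctx, bootstrap_mgr, blob_mgr)
}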


@@ -1,411 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Builder to create RAFS filesystems from directories and tarballs.
#[macro_use]
extern crate log;
use crate::core::context::Artifact;
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::meta::toc;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{compress, digest, root_tracer, timing_tracer};
use sha2::Digest;
use self::core::node::{Node, NodeInfo};
pub use self::chunkdict_generator::ChunkdictBlobInfo;
pub use self::chunkdict_generator::ChunkdictChunkInfo;
pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor;
pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{
ArtifactStorage, ArtifactWriter, BlobCacheGenerator, BlobContext, BlobManager,
BootstrapContext, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
pub use self::core::feature::{Feature, Features};
pub use self::core::node::{ChunkSource, NodeChunk};
pub use self::core::overlay::{Overlay, WhiteoutSpec};
pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger;
pub use self::optimize_prefetch::update_ctx_from_bootstrap;
pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder;
pub mod attributes;
mod chunkdict_generator;
mod compact;
mod core;
mod directory;
mod merge;
mod optimize_prefetch;
mod stargz;
mod tarball;
/// Trait to generate a RAFS filesystem from the source.
pub trait Builder {
fn build(
&mut self,
build_ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput>;
}
fn build_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
blob_mgr: &mut BlobManager,
mut tree: Tree,
) -> Result<Bootstrap> {
// For a multi-layer build, merge the upper layer and the lower layer with overlay whiteouts applied.
if bootstrap_ctx.layered {
let mut parent = Bootstrap::load_parent_bootstrap(ctx, bootstrap_mgr, blob_mgr)?;
timing_tracer!({ parent.merge_overaly(ctx, tree) }, "merge_bootstrap")?;
tree = parent;
}
let mut bootstrap = Bootstrap::new(tree)?;
timing_tracer!({ bootstrap.build(ctx, bootstrap_ctx) }, "build_bootstrap")?;
Ok(bootstrap)
}
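// For context: the OCI-style whiteouts applied during the merge above mark
// deletions made by upper layers. A minimal, name-based sketch of the
// convention (independent of the WhiteoutSpec plumbing used by this crate):
fn is_oci_opaque_marker(name: &str) -> bool {
    // The opaque marker hides all lower-layer content of its directory.
    name == ".wh..wh..opq"
}
fn oci_whiteout_target(name: &str) -> Option<&str> {
    // A file named ".wh.<name>" deletes <name> from lower layers.
    if is_oci_opaque_marker(name) {
        None
    } else {
        name.strip_prefix(".wh.")
    }
}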
fn dump_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
bootstrap: &mut Bootstrap,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Make sure blob id is updated according to blob hash if not specified by user.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.blob_id.is_empty() {
// `Blob::dump()` should have set `blob_ctx.blob_id` to the referenced OCI tarball
// for ref-type conversion.
assert!(!ctx.conversion_type.is_to_ref());
if ctx.blob_inline_meta {
// Set special blob id for blob with inlined meta.
blob_ctx.blob_id = "x".repeat(64);
} else {
blob_ctx.blob_id = format!("{:x}", blob_ctx.blob_hash.clone().finalize());
}
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
}
// Dump bootstrap file
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, bootstrap_ctx, &blob_table)?;
// Dump RAFS meta to data blob if inline meta is enabled.
if ctx.blob_inline_meta {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
// Ensure the blob object is created even if no chunks were generated for the blob.
let blob_ctx = if blob_mgr.external {
&mut blob_mgr.new_blob_ctx(ctx)?
} else {
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
blob_ctx
};
let bootstrap_offset = blob_writer.pos()?;
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
let uncompressed_size = uncompressed_bootstrap.len();
let uncompressed_digest =
RafsDigest::from_buf(&uncompressed_bootstrap, digest::Algorithm::Sha256);
// Output uncompressed data for backward compatibility and compressed data for new format.
let (bootstrap_data, compressor) = if ctx.features.is_enabled(Feature::BlobToc) {
let mut compressor = compress::Algorithm::Zstd;
let (compressed_data, compressed) =
compress::compress(&uncompressed_bootstrap, compressor)
.with_context(|| "failed to compress bootstrap".to_string())?;
blob_ctx.write_data(blob_writer, &compressed_data)?;
if !compressed {
compressor = compress::Algorithm::None;
}
(compressed_data, compressor)
} else {
blob_ctx.write_data(blob_writer, &uncompressed_bootstrap)?;
(uncompressed_bootstrap, compress::Algorithm::None)
};
let compressed_size = bootstrap_data.len();
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BOOTSTRAP,
compressed_size as u64,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BOOTSTRAP,
compressor,
uncompressed_digest,
bootstrap_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
}
}
Ok(())
}
fn dump_toc(
ctx: &mut BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if ctx.features.is_enabled(Feature::BlobToc) {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let data = blob_ctx.entry_list.as_bytes().to_vec();
let toc_size = data.len() as u64;
blob_ctx.write_data(blob_writer, &data)?;
hasher.digest_update(&data);
let header = blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_TOC, toc_size)?;
hasher.digest_update(header.as_bytes());
blob_ctx.blob_toc_digest = hasher.digest_finalize().data;
blob_ctx.blob_toc_size = toc_size as u32 + header.as_bytes().len() as u32;
}
Ok(())
}
fn finalize_blob(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let is_tarfs = ctx.conversion_type == ConversionType::TarToTarfs;
if !is_tarfs {
dump_toc(ctx, blob_ctx, blob_writer)?;
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
if ctx.blob_inline_meta && blob_ctx.blob_id == "x".repeat(64) {
blob_ctx.blob_id = String::new();
}
let hash = blob_ctx.blob_hash.clone().finalize();
let blob_meta_id = if ctx.blob_id.is_empty() {
format!("{:x}", hash)
} else {
assert!(!ctx.conversion_type.is_to_ref() || is_tarfs);
ctx.blob_id.clone()
};
if ctx.conversion_type.is_to_ref() {
if blob_ctx.blob_id.is_empty() {
// Use `sha256(tarball)` as `blob_id`. A tarball without files will fall through
// this path because `Blob::dump()` hasn't generated `blob_ctx.blob_id`.
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
// Tarfs mode only has tar stream and meta blob, there's no data blob.
if !ctx.blob_inline_meta && !is_tarfs {
blob_ctx.blob_meta_digest = hash.into();
blob_ctx.blob_meta_size = blob_writer.pos()?;
}
} else if blob_ctx.blob_id.is_empty() {
// `blob_ctx.blob_id` should be RAFS blob id.
blob_ctx.blob_id = blob_meta_id.clone();
}
// Tarfs mode directly use the tar file as RAFS data blob, so no need to generate the data
// blob file.
if !is_tarfs {
blob_writer.finalize(Some(blob_meta_id))?;
}
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.finalize(&blob_ctx.blob_id)?;
}
}
Ok(())
}
/// Helper for TarballBuilder/StargzBuilder to build the filesystem tree.
pub struct TarBuilder {
pub explicit_uidgid: bool,
pub layer_idx: u16,
pub version: RafsVersion,
next_ino: Inode,
}
impl TarBuilder {
/// Create a new instance of [TarBuilder].
pub fn new(explicit_uidgid: bool, layer_idx: u16, version: RafsVersion) -> Self {
TarBuilder {
explicit_uidgid,
layer_idx,
next_ino: 0,
version,
}
}
/// Allocate an inode number.
pub fn next_ino(&mut self) -> Inode {
self.next_ino += 1;
self.next_ino
}
/// Insert a node into the tree, creating any missing intermediate directories.
pub fn insert_into_tree(&mut self, tree: &mut Tree, node: Node) -> Result<()> {
let target_paths = node.target_vec();
let target_paths_len = target_paths.len();
if target_paths_len == 1 {
// Handle root node modification
assert_eq!(node.path(), Path::new("/"));
tree.set_node(node);
} else {
let mut tmp_tree = tree;
for idx in 1..target_paths.len() {
match tmp_tree.get_child_idx(target_paths[idx].as_bytes()) {
Some(i) => {
if idx == target_paths_len - 1 {
tmp_tree.children[i].set_node(node);
break;
} else {
tmp_tree = &mut tmp_tree.children[i];
}
}
None => {
if idx == target_paths_len - 1 {
tmp_tree.insert_child(Tree::new(node));
break;
} else {
let node = self.create_directory(&target_paths[..=idx])?;
tmp_tree.insert_child(Tree::new(node));
let last_idx = tmp_tree.children.len() - 1;
tmp_tree = &mut tmp_tree.children[last_idx];
}
}
}
}
}
Ok(())
}
/// Create a new node for a directory.
pub fn create_directory(&mut self, target_paths: &[OsString]) -> Result<Node> {
let ino = self.next_ino();
let name = &target_paths[target_paths.len() - 1];
let mut inode = InodeWrapper::new(self.version);
inode.set_ino(ino);
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_nlink(2);
inode.set_name_size(name.len());
inode.set_rdev(u32::MAX);
let source = PathBuf::from("/");
let target_vec = target_paths.to_vec();
let mut target = PathBuf::new();
for name in target_paths.iter() {
target = target.join(name);
}
let info = NodeInfo {
explicit_uidgid: self.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: u64::MAX,
path: target.clone(),
source,
target,
target_vec,
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: false,
};
Ok(Node::new(inode, info, self.layer_idx))
}
/// Check whether the path is an eStargz special file.
pub fn is_stargz_special_files(&self, path: &Path) -> bool {
path == Path::new("/stargz.index.json")
|| path == Path::new("/.prefetch.landmark")
|| path == Path::new("/.no.prefetch.landmark")
}
}
#[cfg(test)]
mod tests {
use vmm_sys_util::tempdir::TempDir;
use super::*;
#[test]
fn test_tar_builder_is_stargz_special_files() {
let builder = TarBuilder::new(true, 0, RafsVersion::V6);
let path = Path::new("/stargz.index.json");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.no.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/no.prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/tar.index.json");
assert!(!builder.is_stargz_special_files(&path));
}
#[test]
fn test_tar_builder_create_directory() {
let tmp_dir = TempDir::new().unwrap();
let target_paths = [OsString::from(tmp_dir.as_path())];
let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
let node = builder.create_directory(&target_paths);
assert!(node.is_ok());
let node = node.unwrap();
println!("Node: {}", node);
assert_eq!(node.file_type(), "dir");
assert_eq!(node.target(), tmp_dir.as_path());
assert_eq!(builder.next_ino, 1);
assert_eq!(builder.next_ino(), 2);
}
}
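A small sketch of how `TarBuilder` assembles a tree with the helpers above; the `/a/b` target path is hypothetical, and `insert_into_tree()` creates the missing intermediate directory `/a`:

fn tar_builder_tree_sketch() -> Result<Tree> {
    let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
    // Root node first; it may later be replaced via Tree::set_node().
    let root = builder.create_directory(&[OsString::from("/")])?;
    let mut tree = Tree::new(root);
    // Inserting /a/b auto-creates the intermediate directory /a.
    let node = builder.create_directory(&[
        OsString::from("/"),
        OsString::from("a"),
        OsString::from("b"),
    ])?;
    builder.insert_into_tree(&mut tree, node)?;
    Ok(tree)
}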


@@ -1,440 +0,0 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{anyhow, bail, ensure, Context, Result};
use hex::FromHex;
use nydus_api::ConfigV2;
use nydus_rafs::metadata::{RafsSuper, RafsVersion};
use nydus_storage::device::{BlobFeatures, BlobInfo};
use nydus_utils::crypt;
use super::{
ArtifactStorage, BlobContext, BlobManager, Bootstrap, BootstrapContext, BuildContext,
BuildOutput, ChunkSource, ConversionType, Overlay, Tree,
};
/// Struct to generate the merged RAFS bootstrap for an image from per layer RAFS bootstraps.
///
/// A container image contains one or more layers, a RAFS bootstrap is built for each layer.
/// Those per layer bootstraps could be mounted by overlayfs to form the container rootfs.
/// To improve performance by avoiding overlayfs, an image-level bootstrap is generated by
/// merging the per layer bootstraps with the overlayfs rules applied.
pub struct Merger {}
impl Merger {
fn get_string_from_list(
original_ids: &Option<Vec<String>>,
idx: usize,
) -> Result<Option<String>> {
Ok(if let Some(id) = &original_ids {
let id_string = id
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(id_string.clone())
} else {
None
})
}
fn get_digest_from_list(digests: &Option<Vec<String>>, idx: usize) -> Result<Option<[u8; 32]>> {
Ok(if let Some(digests) = &digests {
let digest = digests
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(<[u8; 32]>::from_hex(digest)?)
} else {
None
})
}
fn get_size_from_list(sizes: &Option<Vec<u64>>, idx: usize) -> Result<Option<u64>> {
Ok(if let Some(sizes) = &sizes {
let size = sizes
.get(idx)
.ok_or_else(|| anyhow!("unmatched size index {}", idx))?;
Some(*size)
} else {
None
})
}
/// Overlay multiple RAFS filesystems into a merged RAFS filesystem.
///
/// # Arguments
/// - sources: contains one or more per layer bootstraps in order of lower to higher.
/// - chunk_dict: contains the chunk dictionary used to build the per layer bootstraps, or None.
#[allow(clippy::too_many_arguments)]
pub fn merge(
ctx: &mut BuildContext,
parent_bootstrap_path: Option<String>,
sources: Vec<PathBuf>,
blob_digests: Option<Vec<String>>,
original_blob_ids: Option<Vec<String>>,
blob_sizes: Option<Vec<u64>>,
blob_toc_digests: Option<Vec<String>>,
blob_toc_sizes: Option<Vec<u64>>,
target: ArtifactStorage,
chunk_dict: Option<PathBuf>,
config_v2: Arc<ConfigV2>,
) -> Result<BuildOutput> {
if sources.is_empty() {
bail!("source bootstrap list is empty , at least one bootstrap is required");
}
if let Some(digests) = blob_digests.as_ref() {
ensure!(
digests.len() == sources.len(),
"number of blob digest entries {} doesn't match number of sources {}",
digests.len(),
sources.len(),
);
}
if let Some(original_ids) = original_blob_ids.as_ref() {
ensure!(
original_ids.len() == sources.len(),
"number of original blob id entries {} doesn't match number of sources {}",
original_ids.len(),
sources.len(),
);
}
if let Some(sizes) = blob_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of blob size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
if let Some(toc_digests) = blob_toc_digests.as_ref() {
ensure!(
toc_digests.len() == sources.len(),
"number of toc digest entries {} doesn't match number of sources {}",
toc_digests.len(),
sources.len(),
);
}
if let Some(sizes) = blob_toc_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of toc size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester, false);
let mut blob_idx_map = HashMap::new();
let mut parent_layers = 0;
// Load parent bootstrap
if let Some(parent_bootstrap_path) = &parent_bootstrap_path {
let (rs, _) =
RafsSuper::load_from_file(parent_bootstrap_path, config_v2.clone(), false)
.context(format!("load parent bootstrap {:?}", parent_bootstrap_path))?;
let blobs = rs.superblock.get_blob_infos();
for blob in &blobs {
let blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
blob_idx_map.insert(blob_ctx.blob_id.clone(), blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
parent_layers = blobs.len();
tree = Some(Tree::from_bootstrap(&rs, &mut ())?);
}
// Get the blobs that come from the chunk dictionary.
let mut chunk_dict_blobs = HashSet::new();
let mut config = None;
if let Some(chunk_dict_path) = &chunk_dict {
let (rs, _) = RafsSuper::load_from_file(chunk_dict_path, config_v2.clone(), false)
.context(format!("load chunk dict bootstrap {:?}", chunk_dict_path))?;
config = Some(rs.meta.get_config());
for blob in rs.superblock.get_blob_infos() {
chunk_dict_blobs.insert(blob.blob_id().to_string());
}
}
let mut fs_version = RafsVersion::V6;
let mut chunk_size = None;
for (layer_idx, bootstrap_path) in sources.iter().enumerate() {
let (rs, _) = RafsSuper::load_from_file(bootstrap_path, config_v2.clone(), false)
.context(format!("load bootstrap {:?}", bootstrap_path))?;
config
.get_or_insert_with(|| rs.meta.get_config())
.check_compatibility(&rs.meta)?;
fs_version = RafsVersion::try_from(rs.meta.version)
.context("failed to get RAFS version number")?;
ctx.compressor = rs.meta.get_compressor();
ctx.digester = rs.meta.get_digester();
// If any RAFS filesystem is encrypted, the merged bootstrap will be marked as encrypted.
match rs.meta.get_cipher() {
crypt::Algorithm::None => (),
crypt::Algorithm::Aes128Xts => ctx.cipher = crypt::Algorithm::Aes128Xts,
_ => bail!("invalid per layer bootstrap, only supports aes-128-xts"),
}
ctx.explicit_uidgid = rs.meta.explicit_uidgid();
if config.as_ref().unwrap().is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToTarfs;
ctx.blob_features |= BlobFeatures::TARFS;
}
let mut parent_blob_added = false;
let blobs = &rs.superblock.get_blob_infos();
for blob in blobs {
let mut blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
if let Some(chunk_size) = chunk_size {
ensure!(
chunk_size == blob_ctx.chunk_size,
"can not merge bootstraps with inconsistent chunk size, current bootstrap {:?} with chunk size {:x}, expected {:x}",
bootstrap_path,
blob_ctx.chunk_size,
chunk_size,
);
} else {
chunk_size = Some(blob_ctx.chunk_size);
}
if !chunk_dict_blobs.contains(&blob.blob_id()) {
// It is assumed that the per-layer `nydus-image create` and the `nydus-image merge` commands
// use the same chunk dict bootstrap. So a per-layer bootstrap may include multiple blobs, but
// at most one new blob; the other blobs should come from the chunk dict image.
if parent_blob_added {
bail!("invalid per layer bootstrap, having multiple associated data blobs");
}
parent_blob_added = true;
if ctx.configuration.internal.blob_accessible()
|| ctx.conversion_type == ConversionType::TarToTarfs
{
// `blob.blob_id()` should have been fixed when loading the bootstrap.
blob_ctx.blob_id = blob.blob_id();
} else {
// The blob id (blob sha256 hash) in the parent bootstrap is invalid for the nydusd
// runtime, so change it to the hash of the whole tar blob.
if let Some(original_id) =
Self::get_string_from_list(&original_blob_ids, layer_idx)?
{
blob_ctx.blob_id = original_id;
} else {
blob_ctx.blob_id =
BlobInfo::get_blob_id_from_meta_path(bootstrap_path)?;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_digests, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_digest = digest;
} else {
blob_ctx.blob_id = hex::encode(digest);
}
}
if let Some(size) = Self::get_size_from_list(&blob_sizes, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_size = size;
} else {
blob_ctx.compressed_blob_size = size;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_toc_digests, layer_idx)?
{
blob_ctx.blob_toc_digest = digest;
}
if let Some(size) = Self::get_size_from_list(&blob_toc_sizes, layer_idx)? {
blob_ctx.blob_toc_size = size as u32;
}
}
if let Entry::Vacant(e) = blob_idx_map.entry(blob.blob_id()) {
e.insert(blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
}
let upper = Tree::from_bootstrap(&rs, &mut ())?;
upper.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = blobs[origin_blob_index].as_ref();
if let Some(blob_index) = blob_idx_map.get(&blob_ctx.blob_id()) {
// Set the blob index of chunk to real index in blob table of final bootstrap.
chunk.set_blob_index(*blob_index as u32);
}
}
// Set node's layer index to distinguish same inode number (from bootstrap)
// between different layers.
let idx = u16::try_from(layer_idx).context(format!(
"too many layers {}, limited to {}",
layer_idx,
u16::MAX
))?;
if parent_layers + idx as usize > u16::MAX as usize {
bail!("too many layers {}, limited to {}", layer_idx, u16::MAX);
}
node.layer_idx = idx + parent_layers as u16;
node.overlay = Overlay::UpperAddition;
Ok(())
})?;
if let Some(tree) = &mut tree {
tree.merge_overaly(ctx, upper)?;
} else {
tree = Some(upper);
}
}
if ctx.conversion_type == ConversionType::TarToTarfs {
if parent_layers > 0 {
bail!("merging RAFS in TARFS mode conflicts with `--parent-bootstrap`");
}
if !chunk_dict_blobs.is_empty() {
bail!("merging RAFS in TARFS mode conflicts with `--chunk-dict`");
}
}
// Safe to unwrap because there is at least one source bootstrap.
let tree = tree.unwrap();
ctx.fs_version = fs_version;
if let Some(chunk_size) = chunk_size {
ctx.chunk_size = chunk_size;
}
// After merging all trees, re-calculate the blob indices of referenced
// blobs: the upper tree might have deleted files or directories via
// opaques/whiteouts, leaving some blobs unreferenced (this pass is
// illustrated in isolation after the tests below).
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
let origin_blobs = blob_mgr.get_blobs();
tree.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = origin_blobs[origin_blob_index].clone();
let origin_blob_id = blob_ctx.blob_id();
let new_blob_index = if let Some(new_blob_index) = used_blobs.get(&origin_blob_id) {
*new_blob_index
} else {
let new_blob_index = used_blob_mgr.len();
used_blobs.insert(origin_blob_id, new_blob_index);
used_blob_mgr.add_blob(blob_ctx);
new_blob_index
};
chunk.set_blob_index(new_blob_index as u32);
}
Ok(())
})?;
let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = used_blob_mgr.to_blob_table(ctx)?;
let mut bootstrap_storage = Some(target.clone());
bootstrap
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
.context(format!("dump bootstrap to {:?}", target.display()))?;
BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use nydus_utils::digest;
use vmm_sys_util::tempfile::TempFile;
use super::*;
#[test]
fn test_merger_get_string_from_list() {
let res = Merger::get_string_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "string2".to_owned()];
let original_ids = Some(original_ids);
let res = Merger::get_string_from_list(&original_ids, 0);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some("string1".to_owned()));
assert!(Merger::get_string_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_digest_from_list() {
let res = Merger::get_digest_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "12ab".repeat(16)];
let original_ids = Some(original_ids);
let res = Merger::get_digest_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(
res.unwrap(),
Some([
18u8, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171,
18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171
])
);
assert!(Merger::get_digest_from_list(&original_ids, 0).is_err());
assert!(Merger::get_digest_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_size_from_list() {
let res = Merger::get_size_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec![1u64, 2, 3, 4];
let original_ids = Some(original_ids);
let res = Merger::get_size_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some(2u64));
assert!(Merger::get_size_from_list(&original_ids, 4).is_err());
}
#[test]
fn test_merger_merge() {
let mut ctx = BuildContext::default();
ctx.configuration.internal.set_blob_accessible(false);
ctx.digester = digest::Algorithm::Sha256;
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path1 = PathBuf::from(root_dir);
source_path1.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let mut source_path2 = PathBuf::from(root_dir);
source_path2.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let tmp_file = TempFile::new().unwrap();
let target = ArtifactStorage::SingleFile(tmp_file.as_path().to_path_buf());
let blob_toc_digests = Some(vec![
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855".to_owned(),
"4cf0c409788fc1c149afbf4c81276b92427ae41e46412334ca495991b8526650".to_owned(),
]);
let build_output = Merger::merge(
&mut ctx,
None,
vec![source_path1, source_path2],
Some(vec!["a70f".repeat(16), "9bd3".repeat(16)]),
Some(vec!["blob_id".to_owned(), "blob_id2".to_owned()]),
Some(vec![16u64, 32u64]),
blob_toc_digests,
Some(vec![64u64, 128]),
target,
None,
Arc::new(ConfigV2::new("config_v2")),
);
assert!(build_output.is_ok());
let build_output = build_output.unwrap();
println!("BuildOutput: {}", build_output);
assert_eq!(build_output.blob_size, Some(16));
}
}
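The blob-index remapping at the end of `Merger::merge()` can be illustrated in isolation. A minimal sketch where chunks are reduced to the blob ids they reference (a hypothetical input shape):

use std::collections::HashMap;

// Keep only blobs still referenced by at least one chunk and rewrite chunk
// blob indices against the compacted table, mirroring the used_blobs /
// used_blob_mgr pass above.
fn remap_blob_indices(chunk_blob_ids: &[String]) -> (Vec<String>, Vec<u32>) {
    let mut used: HashMap<String, u32> = HashMap::new();
    let mut table = Vec::new();
    let mut new_indices = Vec::new();
    for id in chunk_blob_ids {
        let idx = *used.entry(id.clone()).or_insert_with(|| {
            table.push(id.clone());
            (table.len() - 1) as u32
        });
        new_indices.push(idx);
    }
    (table, new_indices)
}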


@@ -1,302 +0,0 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;
pub struct OptimizePrefetch {}
struct PrefetchBlobState {
blob_info: BlobInfo,
blob_ctx: BlobContext,
blob_writer: Box<dyn Artifact>,
}
impl PrefetchBlobState {
fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
let mut blob_info = BlobInfo::new(
blob_layer_num,
String::from("prefetch-blob"),
0,
0,
ctx.chunk_size,
u32::MAX,
ctx.blob_features,
);
blob_info.set_compressor(ctx.compressor);
blob_info.set_separated_with_prefetch_files_feature(true);
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
blob_ctx.blob_meta_info_enabled = true;
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
blobs_dir_path.to_path_buf(),
String::new(),
)))
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
Ok(Self {
blob_info,
blob_ctx,
blob_writer,
})
}
}
impl OptimizePrefetch {
/// Generate a new bootstrap for prefetch.
pub fn generate_prefetch(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
blobs_dir_path: PathBuf,
prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
// Create a new blob for the prefetch layer
let blob_layer_num = match blob_table {
RafsBlobTable::V5(table) => table.get_all().len(),
RafsBlobTable::V6(table) => table.get_all().len(),
};
let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
let mut batch = BatchContextGenerator::new(0)?;
for node in &prefetch_nodes {
Self::process_prefetch_node(
tree,
&node,
&mut blob_state,
&mut batch,
blob_table,
&blobs_dir_path,
)?;
}
let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
debug!("prefetch blob id: {}", ctx.blob_id);
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
fn build_dump_bootstrap(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
) -> Result<()> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree.clone())?;
// Build bootstrap
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table_withprefetch = match blob_table {
RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
};
bootstrap.dump(
ctx,
&mut bootstrap_mgr.bootstrap_storage,
&mut bootstrap_ctx,
&blob_table_withprefetch,
)?;
Ok(())
}
fn dump_blob(
ctx: &mut BuildContext,
blob_table: &mut RafsBlobTable,
blob_state: &mut PrefetchBlobState,
) -> Result<BlobManager> {
match blob_table {
RafsBlobTable::V5(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
RafsBlobTable::V6(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
}
let mut blob_mgr = BlobManager::new(ctx.digester, false);
blob_mgr.add_blob(blob_state.blob_ctx.clone());
blob_mgr.set_current_blob_index(0);
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
};
ctx.blob_id = String::from("");
blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
ctx.blob_id = blob_mgr
.get_current_blob()
.ok_or(anyhow!("failed to get current blob"))?
.1
.blob_id
.clone();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
// Verify and update prefetch blob
assert!(
entries
.iter()
.filter(|blob| blob.blob_id() == "prefetch-blob")
.count()
== 1,
"Expected exactly one prefetch-blob"
);
// Rewrite prefetch blob id
match blob_table {
RafsBlobTable::V5(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
RafsBlobTable::V6(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
}
Ok(blob_mgr)
}
fn process_prefetch_node(
tree: &mut Tree,
node: &TreeNode,
prefetch_state: &mut PrefetchBlobState,
batch: &mut BatchContextGenerator,
blob_table: &RafsBlobTable,
blobs_dir_path: &Path,
) -> Result<()> {
let tree_node = tree
.get_node_mut(&node.borrow().path())
.ok_or(anyhow!("failed to get node"))?
.node
.as_ref();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
let blob_id = tree_node
.borrow()
.chunks
.first()
.and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
.map(|entry| entry.blob_id())
.ok_or(anyhow!("failed to get blob id"))?;
let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);
tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;
let mut child = tree_node.borrow_mut();
let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
let blob_ctx = &mut prefetch_state.blob_ctx;
let blob_info = &mut prefetch_state.blob_info;
let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;
for chunk in chunks {
let inner = Arc::make_mut(&mut chunk.inner);
let mut buf = vec![0u8; inner.compressed_size() as usize];
blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
blob_file.read_exact(&mut buf)?;
prefetch_state.blob_writer.write_all(&buf)?;
let info = batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
blob_ctx.current_uncompressed_offset,
inner.uncompressed_size(),
encrypted,
)?;
inner.set_blob_index(blob_info.blob_index());
if blob_ctx.chunk_count == u32::MAX {
blob_ctx.chunk_count = 0;
}
inner.set_index(blob_ctx.chunk_count);
blob_ctx.chunk_count += 1;
inner.set_compressed_offset(blob_ctx.current_compressed_offset);
inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);
let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
.ok_or_else(|| anyhow!("invalid size"))?;
blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
blob_ctx.uncompressed_blob_size += aligned_d_size;
blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
blob_ctx.current_uncompressed_offset += aligned_d_size;
blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
blob_ctx.blob_hash.update(&buf);
blob_info.set_meta_ci_compressed_size(
(blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
blob_info.set_meta_ci_uncompressed_size(
(blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
}
Ok(())
}
}
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
entries
.iter_mut()
.filter(|blob| blob.blob_id() == blob_id)
.for_each(|blob| {
let mut info = (**blob).clone();
info.set_blob_id(new_blob_id.clone());
*blob = Arc::new(info);
});
}
pub fn update_ctx_from_bootstrap(
ctx: &mut BuildContext,
config: Arc<ConfigV2>,
bootstrap_path: &Path,
) -> Result<RafsSuper> {
let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;
ctx.blob_features = sb
.superblock
.get_blob_infos()
.first()
.ok_or_else(|| anyhow!("No blob info found in superblock"))?
.features();
let config = sb.meta.get_config();
if config.is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToRafs;
}
ctx.fs_version =
RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
ctx.compressor = config.compressor;
Ok(sb)
}
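A hedged usage sketch of the two entry points above; the bootstrap path, blob table, blobs directory, and prefetch node list are assumed to be supplied by the caller:

fn optimize_bootstrap_sketch(
    ctx: &mut BuildContext,
    bootstrap_mgr: &mut BootstrapManager,
    config: Arc<ConfigV2>,
    bootstrap_path: &Path,
    blob_table: &mut RafsBlobTable,
    blobs_dir: PathBuf,
    prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
    // Align the build context (fs version, compressor, blob features) with
    // the existing bootstrap, then rebuild it with a dedicated prefetch blob.
    let sb = update_ctx_from_bootstrap(ctx, config, bootstrap_path)?;
    let mut tree = Tree::from_bootstrap(&sb, &mut ())?;
    OptimizePrefetch::generate_prefetch(
        &mut tree,
        ctx,
        bootstrap_mgr,
        blob_table,
        blobs_dir,
        prefetch_nodes,
    )
}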

File diff suppressed because it is too large


@@ -1,744 +0,0 @@
// Copyright 2022 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate RAFS filesystem from a tarball.
//!
//! It supports generating a RAFS filesystem from a tar/targz/stargz file, with or without data blob.
//!
//! The tarball data is arranged as a sequence of tar headers with associated file data interleaved:
//! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
//!
//! To support reading tarball data from a FIFO, we can only go over the tarball stream once,
//! so the workflow is (see the sketch after this comment):
//! - for each tar header from the stream
//! -- generate a RAFS filesystem node from the tar header
//! -- optionally dump file data associated with the tar header into the RAFS data blob
//! - arrange all generated RAFS nodes into a RAFS filesystem tree
//! - dump the RAFS filesystem tree into the RAFS metadata blob
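// A minimal sketch of the single-pass constraint described above, using only
// the `tar` crate; node construction is reduced to a stub. Reading each
// entry's data in stream order is what allows consuming a FIFO.
fn walk_tar_once<R: std::io::Read>(reader: R) -> anyhow::Result<()> {
    let mut tar = tar::Archive::new(reader);
    tar.set_ignore_zeros(true);
    for entry in tar.entries()? {
        let mut entry = entry?;
        // A real builder would create a RAFS node from entry.header() here.
        let _path = entry.path()?.into_owned();
        // Drain the file data exactly once, in stream order.
        std::io::copy(&mut entry, &mut std::io::sink())?;
    }
    Ok(())
}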
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufReader, Read, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::sync::Mutex;
use anyhow::{anyhow, bail, Context, Result};
use tar::{Archive, Entry, EntryType, Header};
use nydus_api::enosys;
use nydus_rafs::metadata::inode::{InodeWrapper, RafsInodeFlags, RafsV6Inode};
use nydus_rafs::metadata::layout::v5::RafsV5Inode;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::ZranContextGenerator;
use nydus_storage::RAFS_MAX_CHUNKS_PER_BLOB;
use nydus_utils::compact::makedev;
use nydus_utils::compress::zlib_random::{ZranReader, ZRAN_READER_BUF_SIZE};
use nydus_utils::compress::ZlibDecoder;
use nydus_utils::digest::RafsDigest;
use nydus_utils::{div_round_up, lazy_drop, root_tracer, timing_tracer, BufReaderInfo, ByteSize};
use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
use super::core::node::{Node, NodeInfo};
use super::core::tree::Tree;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, TarBuilder};
enum CompressionType {
None,
Gzip,
}
enum TarReader {
File(File),
BufReader(BufReader<File>),
BufReaderInfo(BufReaderInfo<File>),
BufReaderInfoSeekable(BufReaderInfo<File>),
TarGzFile(Box<ZlibDecoder<File>>),
TarGzBufReader(Box<ZlibDecoder<BufReader<File>>>),
ZranReader(ZranReader<File>),
}
impl Read for TarReader {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
match self {
TarReader::File(f) => f.read(buf),
TarReader::BufReader(f) => f.read(buf),
TarReader::BufReaderInfo(b) => b.read(buf),
TarReader::BufReaderInfoSeekable(b) => b.read(buf),
TarReader::TarGzFile(f) => f.read(buf),
TarReader::TarGzBufReader(b) => b.read(buf),
TarReader::ZranReader(f) => f.read(buf),
}
}
}
impl TarReader {
fn seekable(&self) -> bool {
matches!(
self,
TarReader::File(_) | TarReader::BufReaderInfoSeekable(_)
)
}
}
impl Seek for TarReader {
fn seek(&mut self, pos: SeekFrom) -> std::io::Result<u64> {
match self {
TarReader::File(f) => f.seek(pos),
TarReader::BufReaderInfoSeekable(b) => b.seek(pos),
_ => Err(enosys!("seek() not supported!")),
}
}
}
struct TarballTreeBuilder<'a> {
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
buf: Vec<u8>,
builder: TarBuilder,
}
impl<'a> TarballTreeBuilder<'a> {
/// Create a new instance of `TarballBuilder`.
pub fn new(
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
layer_idx: u16,
) -> Self {
let builder = TarBuilder::new(ctx.explicit_uidgid, layer_idx, ctx.fs_version);
Self {
ty,
ctx,
blob_mgr,
buf: Vec::new(),
blob_writer,
builder,
}
}
fn build_tree(&mut self) -> Result<Tree> {
let file = OpenOptions::new()
.read(true)
.open(self.ctx.source_path.clone())
.context("tarball: can not open source file for conversion")?;
let mut is_file = match file.metadata() {
Ok(md) => md.file_type().is_file(),
Err(_) => false,
};
let reader = match self.ty {
ConversionType::EStargzToRef
| ConversionType::TargzToRef
| ConversionType::TarToRef => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
let generator = ZranContextGenerator::from_buf_reader(buf_reader)?;
let reader = generator.reader();
self.ctx.blob_zran_generator = Some(Mutex::new(generator));
self.ctx.blob_features.insert(BlobFeatures::ZRAN);
TarReader::ZranReader(reader)
}
(CompressionType::None, buf_reader) => {
self.ty = ConversionType::TarToRef;
let reader = BufReaderInfo::from_buf_reader(buf_reader);
self.ctx.blob_tar_reader = Some(reader.clone());
TarReader::BufReaderInfo(reader)
}
},
ConversionType::EStargzToRafs
| ConversionType::TargzToRafs
| ConversionType::TarToRafs => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::TarGzFile(Box::new(ZlibDecoder::new(file)))
} else {
TarReader::TarGzBufReader(Box::new(ZlibDecoder::new(buf_reader)))
}
}
(CompressionType::None, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::File(file)
} else {
TarReader::BufReader(buf_reader)
}
}
},
ConversionType::TarToTarfs => {
let mut reader = BufReaderInfo::from_buf_reader(BufReader::new(file));
self.ctx.blob_tar_reader = Some(reader.clone());
if !self.ctx.blob_id.is_empty() {
reader.enable_digest_calculation(false);
} else {
// Disable seek when we need to calculate the hash value.
is_file = false;
}
// Only enable seek when hash computing is disabled.
if is_file {
TarReader::BufReaderInfoSeekable(reader)
} else {
TarReader::BufReaderInfo(reader)
}
}
_ => return Err(anyhow!("tarball: unsupported image conversion type")),
};
let is_seekable = reader.seekable();
let mut tar = Archive::new(reader);
tar.set_ignore_zeros(true);
tar.set_preserve_mtime(true);
tar.set_preserve_permissions(true);
tar.set_unpack_xattrs(true);
// Prepare scratch buffer for dumping file data.
if self.buf.len() < self.ctx.chunk_size as usize {
self.buf = vec![0u8; self.ctx.chunk_size as usize];
}
// Generate the root node in advance; it may be overwritten by entries from the tar stream.
let root = self.builder.create_directory(&[OsString::from("/")])?;
let mut tree = Tree::new(root);
// Generate a RAFS node for each tar entry, optionally adding missing parents.
let entries = if is_seekable {
tar.entries_with_seek()
.context("tarball: failed to read entries from tar")?
} else {
tar.entries()
.context("tarball: failed to read entries from tar")?
};
for entry in entries {
let mut entry = entry.context("tarball: failed to read entry from tar")?;
let path = entry
.path()
.context("tarball: failed to to get path from tar entry")?;
let path = PathBuf::from("/").join(path);
let path = path.components().as_path();
if !self.builder.is_stargz_special_files(path) {
self.parse_entry(&mut tree, &mut entry, path)?;
}
}
// Update directory size for RAFS V5 after generating the tree.
if self.ctx.fs_version.is_v5() {
Self::set_v5_dir_size(&mut tree);
}
Ok(tree)
}
fn parse_entry<R: Read>(
&mut self,
tree: &mut Tree,
entry: &mut Entry<R>,
path: &Path,
) -> Result<()> {
let header = entry.header();
let entry_type = header.entry_type();
if entry_type.is_gnu_longname() {
return Err(anyhow!("tarball: unsupported gnu_longname from tar header"));
} else if entry_type.is_gnu_longlink() {
return Err(anyhow!("tarball: unsupported gnu_longlink from tar header"));
} else if entry_type.is_pax_local_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_local_extensions from tar header"
));
} else if entry_type.is_pax_global_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_global_extensions from tar header"
));
} else if entry_type.is_contiguous() {
return Err(anyhow!(
"tarball: unsupported contiguous entry type from tar header"
));
} else if entry_type.is_gnu_sparse() {
return Err(anyhow!(
"tarball: unsupported gnu sparse file extension from tar header"
));
}
let mut file_size = entry.size();
let name = Self::get_file_name(path)?;
let mode = Self::get_mode(header)?;
let (uid, gid) = Self::get_uid_gid(self.ctx, header)?;
let mtime = header.mtime().unwrap_or_default();
let mut flags = match self.ctx.fs_version {
RafsVersion::V5 => RafsInodeFlags::default(),
RafsVersion::V6 => RafsInodeFlags::default(),
};
// Parse special files
let rdev = if entry_type.is_block_special()
|| entry_type.is_character_special()
|| entry_type.is_fifo()
{
let major = header
.device_major()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get major device from tar entry"))?;
let minor = header
.device_minor()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get minor device from tar entry"))?;
makedev(major as u64, minor as u64) as u32
} else {
u32::MAX
};
// Parse symlink
let (symlink, symlink_size) = if entry_type.is_symlink() {
let symlink_link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let symlink_size = symlink_link_path.as_os_str().byte_size();
if symlink_size > u16::MAX as usize {
bail!("tarball: symlink target from tar entry is too big");
}
file_size = symlink_size as u64;
flags |= RafsInodeFlags::SYMLINK;
(
Some(symlink_link_path.as_os_str().to_owned()),
symlink_size as u16,
)
} else {
(None, 0)
};
let mut child_count = 0;
if entry_type.is_file() {
child_count = div_round_up(file_size, self.ctx.chunk_size as u64);
if child_count > RAFS_MAX_CHUNKS_PER_BLOB as u64 {
bail!("tarball: file size 0x{:x} is too big", file_size);
}
}
// Handle hardlink ino
let mut hardlink_target = None;
let ino = if entry_type.is_hard_link() {
let link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let link_path = PathBuf::from("/").join(link_path);
let link_path = link_path.components().as_path();
let targets = Node::generate_target_vec(link_path);
assert!(!targets.is_empty());
let mut tmp_tree: &Tree = tree;
for name in &targets[1..] {
match tmp_tree.get_child_idx(name.as_bytes()) {
Some(idx) => tmp_tree = &tmp_tree.children[idx],
None => {
bail!(
"tarball: unknown target {} for hardlink {}",
link_path.display(),
path.display()
);
}
}
}
let mut tmp_node = tmp_tree.borrow_mut_node();
if !tmp_node.is_reg() {
bail!(
"tarball: target {} for hardlink {} is not a regular file",
link_path.display(),
path.display()
);
}
hardlink_target = Some(tmp_tree);
flags |= RafsInodeFlags::HARDLINK;
tmp_node.inode.set_has_hardlink(true);
tmp_node.inode.ino()
} else {
self.builder.next_ino()
};
// Parse xattrs
let mut xattrs = RafsXAttrs::new();
if let Some(exts) = entry.pax_extensions()? {
for p in exts {
match p {
Ok(pax) => {
let prefix = b"SCHILY.xattr.";
let key = pax.key_bytes();
if key.starts_with(prefix) {
let x_key = OsStr::from_bytes(&key[prefix.len()..]);
xattrs.add(x_key.to_os_string(), pax.value_bytes().to_vec())?;
}
}
Err(e) => {
return Err(anyhow!(
"tarball: failed to parse PaxExtension from tar header, {}",
e
))
}
}
}
}
let mut inode = match self.ctx.fs_version {
RafsVersion::V5 => InodeWrapper::V5(RafsV5Inode {
i_digest: RafsDigest::default(),
i_parent: 0,
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_index: 0,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
i_reserved: [0; 8],
}),
RafsVersion::V6 => InodeWrapper::V6(RafsV6Inode {
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
}),
};
inode.set_has_xattr(!xattrs.is_empty());
let source = PathBuf::from("/");
let target = Node::generate_target(path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: self.ctx.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: rdev as u64,
path: path.to_path_buf(),
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
let mut node = Node::new(inode, info, self.builder.layer_idx);
// Special handling of hardlink.
// A tar hardlink header has zero file size and no associated file data, so copy
// the values from the associated regular file.
if let Some(t) = hardlink_target {
let n = t.borrow_mut_node();
if n.inode.is_v5() {
node.inode.set_digest(n.inode.digest().to_owned());
}
node.inode.set_size(n.inode.size());
node.inode.set_child_count(n.inode.child_count());
node.chunks = n.chunks.clone();
node.set_xattr(n.info.xattrs.clone());
} else {
node.dump_node_data_with_reader(
self.ctx,
self.blob_mgr,
self.blob_writer,
Some(entry),
&mut self.buf,
)?;
}
// Update inode.i_blocks for RAFS v5.
if self.ctx.fs_version == RafsVersion::V5 && !entry_type.is_dir() {
node.v5_set_inode_blocks();
}
self.builder.insert_into_tree(tree, node)
}
fn get_uid_gid(ctx: &BuildContext, header: &Header) -> Result<(u32, u32)> {
let uid = if ctx.explicit_uidgid {
header.uid().unwrap_or_default()
} else {
0
};
let gid = if ctx.explicit_uidgid {
header.gid().unwrap_or_default()
} else {
0
};
if uid > u32::MAX as u64 || gid > u32::MAX as u64 {
bail!(
"tarball: uid {:x} or gid {:x} from tar entry is out of range",
uid,
gid
);
}
Ok((uid as u32, gid as u32))
}
fn get_mode(header: &Header) -> Result<u32> {
let mode = header
.mode()
.context("tarball: failed to get permission/mode from tar entry")?;
let ty = match header.entry_type() {
EntryType::Regular | EntryType::Link => libc::S_IFREG,
EntryType::Directory => libc::S_IFDIR,
EntryType::Symlink => libc::S_IFLNK,
EntryType::Block => libc::S_IFBLK,
EntryType::Char => libc::S_IFCHR,
EntryType::Fifo => libc::S_IFIFO,
_ => bail!("tarball: unsupported tar entry type"),
};
Ok((mode & !libc::S_IFMT as u32) | ty as u32)
}
fn get_file_name(path: &Path) -> Result<&OsStr> {
let name = if path == Path::new("/") {
path.as_os_str()
} else {
path.file_name().ok_or_else(|| {
anyhow!(
"tarball: failed to get file name from tar entry with path {}",
path.display()
)
})?
};
if name.len() > u16::MAX as usize {
bail!(
"tarball: file name {} from tar entry is too long",
name.to_str().unwrap_or_default()
);
}
Ok(name)
}
fn set_v5_dir_size(tree: &mut Tree) {
for c in &mut tree.children {
Self::set_v5_dir_size(c);
}
let mut node = tree.borrow_mut_node();
node.v5_set_dir_size(RafsVersion::V5, &tree.children);
}
fn detect_compression_algo(file: File) -> Result<(CompressionType, BufReader<File>)> {
// Use a 64K buffer to keep consistency with zlib-random.
let mut buf_reader = BufReader::with_capacity(ZRAN_READER_BUF_SIZE, file);
let mut buf = [0u8; 3];
buf_reader.read_exact(&mut buf)?;
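// A gzip stream starts with the magic bytes 0x1f 0x8b followed by the compression method (0x08 == deflate).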
if buf[0] == 0x1f && buf[1] == 0x8b && buf[2] == 0x08 {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::Gzip, buf_reader))
} else {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::None, buf_reader))
}
}
}
/// Builder to create RAFS filesystems from tarballs.
pub struct TarballBuilder {
ty: ConversionType,
}
impl TarballBuilder {
/// Create a new instance of [TarballBuilder] to build a RAFS filesystem from a tarball.
pub fn new(conversion_type: ConversionType) -> Self {
Self {
ty: conversion_type,
}
}
}
impl Builder for TarballBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
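// layer_idx is derived from `layered`: 0 when building a base image, 1 when building upon a parent bootstrap.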
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer: Box<dyn Artifact> = match self.ty {
ConversionType::EStargzToRafs
| ConversionType::EStargzToRef
| ConversionType::TargzToRafs
| ConversionType::TargzToRef
| ConversionType::TarToRafs
| ConversionType::TarToTarfs => {
if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
}
}
_ => {
return Err(anyhow!(
"tarball: unsupported image conversion type '{}'",
self.ty
))
}
};
let mut tree_builder =
TarballTreeBuilder::new(self.ty, ctx, blob_mgr, blob_writer.as_mut(), layer_idx);
let tree = timing_tracer!({ tree_builder.build_tree() }, "build_tree")?;
// Build bootstrap
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
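// When `blob_inline_meta` is set, the bootstrap is dumped into the data blob before the blob is
// finalized; otherwise the blob is finalized first and the bootstrap is stored separately.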
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::attributes::Attributes;
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
use nydus_utils::{compress, digest};
#[test]
fn test_build_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
#[test]
fn test_build_encrypted_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
true,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
}

View File

@ -1,28 +0,0 @@
[package]
name = "nydus-clib"
version = "0.1.0"
description = "C wrapper library for Nydus SDK"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[lib]
name = "nydus_clib"
crate-type = ["cdylib", "staticlib"]
[dependencies]
libc = "0.2.137"
log = "0.4.17"
fuse-backend-rs = "^0.12.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage" }
[features]
backend-s3 = ["nydus-storage/backend-s3"]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = ["nydus-storage/backend-localdisk"]

View File

@ -1 +0,0 @@
../LICENSE-APACHE

View File

@ -1,20 +0,0 @@
#include <stdio.h>
#include "../nydus.h"
int main(int argc, char **argv)
{
char *bootstrap = "../../tests/texture/repeatable/sha256-nocompress-repeatable";
char *config = "version = 2\nid = \"my_id\"\n[backend]\ntype = \"localfs\"\n[backend.localfs]\ndir = \"../../tests/texture/repeatable/blobs\"\n[cache]\ntype = \"dummycache\"\n[rafs]";
NydusFsHandle fs_handle;
fs_handle = nydus_open_rafs(bootstrap, config);
if (fs_handle == NYDUS_INVALID_FS_HANDLE) {
printf("failed to open rafs filesystem from ../../tests/texture/repeatable/sha256-nocompress-repeatable\n");
return -1;
}
printf("succeed to open rafs filesystem from ../../tests/texture/repeatable/sha256-nocompress-repeatable\n");
nydus_close_rafs(fs_handle);
return 0;
}

View File

@ -1,70 +0,0 @@
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
/**
* Magic number for Nydus file handle.
*/
#define NYDUS_FILE_HANDLE_MAGIC 17148644263605784967ull
/**
* Value representing an invalid Nydus file handle.
*/
#define NYDUS_INVALID_FILE_HANDLE 0
/**
* Magic number for Nydus filesystem handle.
*/
#define NYDUS_FS_HANDLE_MAGIC 17148643159786606983ull
/**
* Value representing an invalid Nydus filesystem handle.
*/
#define NYDUS_INVALID_FS_HANDLE 0
/**
* Handle representing a Nydus file object.
*/
typedef uintptr_t NydusFileHandle;
/**
* Handle representing a Nydus filesystem object.
*/
typedef uintptr_t NydusFsHandle;
/**
* Open the file with `path` in readonly mode.
*
* The `NydusFileHandle` returned should be freed by calling `nydus_fclose()`.
*/
NydusFileHandle nydus_fopen(NydusFsHandle fs_handle, const char *path);
/**
* Close the file handle returned by `nydus_fopen()`.
*/
void nydus_fclose(NydusFileHandle handle);
/**
* Open a RAFS filesystem and return a handle to the filesystem object.
*
* The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
* it will cause a memory leak.
*/
NydusFsHandle nydus_open_rafs(const char *bootstrap, const char *config);
/**
* Open a RAFS filesystem with default configuration and return a handle to the filesystem object.
*
* The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
* it will cause a memory leak.
*/
NydusFsHandle nydus_open_rafs_default(const char *bootstrap, const char *dir_path);
/**
* Close the RAFS filesystem returned by `nydus_open_rafs()` and friends.
*
* All `NydusFileHandle` objects created from the `NydusFsHandle` should be freed before calling
* `nydus_close_rafs()`, otherwise it may cause a panic.
*/
void nydus_close_rafs(NydusFsHandle handle);

View File

@ -1,90 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Implement file operations for a RAFS filesystem in userspace.
//!
//! Provide the following file operation functions to access files in a RAFS filesystem:
//! - fopen
//! - fclose
//! - fread
//! - fwrite
//! - fseek
//! - ftell
use std::os::raw::c_char;
use std::ptr::null_mut;
use fuse_backend_rs::api::filesystem::{Context, FileSystem};
use crate::{set_errno, FileSystemState, Inode, NydusFsHandle};
/// Magic number for Nydus file handle.
pub const NYDUS_FILE_HANDLE_MAGIC: u64 = 0xedfc_3919_afc3_5187;
/// Value representing an invalid Nydus file handle.
pub const NYDUS_INVALID_FILE_HANDLE: usize = 0;
/// Handle representing a Nydus file object.
pub type NydusFileHandle = usize;
#[repr(C)]
pub(crate) struct FileState {
magic: u64,
ino: Inode,
pos: u64,
fs_handle: NydusFsHandle,
}
/// Open the file with `path` in readonly mode.
///
/// The `NydusFileHandle` returned should be freed by calling `nydus_fclose()`.
///
/// # Safety
/// Caller needs to ensure `fs_handle` and `path` are valid, otherwise it may cause a memory access
/// violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_fopen(
fs_handle: NydusFsHandle,
path: *const c_char,
) -> NydusFileHandle {
if path.is_null() {
set_errno(libc::EINVAL);
return null_mut::<FileState>() as NydusFileHandle;
}
let fs = match FileSystemState::try_from_handle(fs_handle) {
Err(e) => {
set_errno(e);
return null_mut::<FileState>() as NydusFileHandle;
}
Ok(v) => v,
};
// TODO: look up `path` and open the file; for now the returned handle
// simply references the filesystem root inode.
let file = Box::new(FileState {
magic: NYDUS_FILE_HANDLE_MAGIC,
ino: fs.root_ino,
pos: 0,
fs_handle,
});
Box::into_raw(file) as NydusFileHandle
}
/// Close the file handle returned by `nydus_fopen()`.
///
/// # Safety
/// Caller needs to ensure `handle` is valid, otherwise it may cause a memory access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_fclose(handle: NydusFileHandle) {
let mut file = Box::from_raw(handle as *mut FileState);
assert_eq!(file.magic, NYDUS_FILE_HANDLE_MAGIC);
let ctx = Context::default();
let fs = FileSystemState::from_handle(file.fs_handle);
fs.rafs.forget(&ctx, file.ino, 1);
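// Scramble the magic value so that any reuse of this dangling handle fails the magic assertion.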
file.magic -= 0x4fdf_ae34_9d9a_03cd;
}

View File

@ -1,251 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Provide structures and functions to open/close/access a filesystem instance.
use std::ffi::CStr;
use std::os::raw::c_char;
use std::path::Path;
use std::ptr::{null, null_mut};
use std::str::FromStr;
use std::sync::Arc;
use nydus_api::ConfigV2;
use nydus_rafs::fs::Rafs;
use crate::{cstr_to_str, set_errno, Inode};
/// Magic number for Nydus filesystem handle.
pub const NYDUS_FS_HANDLE_MAGIC: u64 = 0xedfc_3818_af03_5187;
/// Value representing an invalid Nydus filesystem handle.
pub const NYDUS_INVALID_FS_HANDLE: usize = 0;
/// Handle representing a Nydus filesystem object.
pub type NydusFsHandle = usize;
#[repr(C)]
pub(crate) struct FileSystemState {
magic: u64,
pub(crate) root_ino: Inode,
pub(crate) rafs: Rafs,
}
impl FileSystemState {
/// Caller needs to ensure the lifetime of the returned reference.
pub(crate) unsafe fn from_handle(hdl: NydusFsHandle) -> &'static mut Self {
let fs = &mut *(hdl as *const FileSystemState as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
fs
}
/// Caller needs to ensure the lifetime of the returned reference.
pub(crate) unsafe fn try_from_handle(hdl: NydusFsHandle) -> Result<&'static mut Self, i32> {
if hdl == null::<FileSystemState>() as usize {
return Err(libc::EINVAL);
}
let fs = &mut *(hdl as *const FileSystemState as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
Ok(fs)
}
}
fn fs_error_einval() -> NydusFsHandle {
set_errno(libc::EINVAL);
null_mut::<FileSystemState>() as NydusFsHandle
}
fn default_localfs_rafs_config(dir: &str) -> String {
format!(
r#"
version = 2
id = "my_id"
[backend]
type = "localfs"
[backend.localfs]
dir = "{}"
[cache]
type = "dummycache"
[rafs]
"#,
dir
)
}
fn do_nydus_open_rafs(bootstrap: &str, config: &str) -> NydusFsHandle {
let cfg = match ConfigV2::from_str(config) {
Ok(v) => v,
Err(e) => {
warn!("failed to parse configuration info: {}", e);
return fs_error_einval();
}
};
let cfg = Arc::new(cfg);
let (mut rafs, reader) = match Rafs::new(&cfg, &cfg.id, Path::new(bootstrap)) {
Err(e) => {
warn!(
"failed to open filesystem from bootstrap {}, {}",
bootstrap, e
);
return fs_error_einval();
}
Ok(v) => v,
};
if let Err(e) = rafs.import(reader, None) {
warn!("failed to import RAFS filesystem, {}", e);
return fs_error_einval();
}
let root_ino = rafs.metadata().root_inode;
let fs = Box::new(FileSystemState {
magic: NYDUS_FS_HANDLE_MAGIC,
root_ino,
rafs,
});
Box::into_raw(fs) as NydusFsHandle
}
/// Open a RAFS filesystem and return a handle to the filesystem object.
///
/// The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
/// it will cause a memory leak.
///
/// # Safety
/// Caller needs to ensure `bootstrap` and `config` are valid, otherwise it may cause a memory access
/// violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_open_rafs(
bootstrap: *const c_char,
config: *const c_char,
) -> NydusFsHandle {
if bootstrap.is_null() || config.is_null() {
return fs_error_einval();
}
let bootstrap = cstr_to_str!(bootstrap, null_mut::<FileSystemState>() as NydusFsHandle);
let config = cstr_to_str!(config, null_mut::<FileSystemState>() as NydusFsHandle);
do_nydus_open_rafs(bootstrap, config)
}
/// Open a RAFS filesystem with default configuration and return a handle to the filesystem object.
///
/// The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
/// it will cause a memory leak.
///
/// # Safety
/// Caller needs to ensure `bootstrap` and `dir_path` are valid, otherwise it may cause a memory
/// access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_open_rafs_default(
bootstrap: *const c_char,
dir_path: *const c_char,
) -> NydusFsHandle {
if bootstrap.is_null() || dir_path.is_null() {
return fs_error_einval();
}
let bootstrap = cstr_to_str!(bootstrap, null_mut::<FileSystemState>() as NydusFsHandle);
let dir_path = cstr_to_str!(dir_path, null_mut::<FileSystemState>() as NydusFsHandle);
let p_tmp;
let mut path = Path::new(bootstrap);
if path.parent().is_none() {
p_tmp = Path::new(dir_path).join(bootstrap);
path = &p_tmp
}
let bootstrap = match path.to_str() {
Some(v) => v,
None => {
warn!("invalid bootstrap path '{}'", bootstrap);
return fs_error_einval();
}
};
let config = default_localfs_rafs_config(dir_path);
do_nydus_open_rafs(bootstrap, &config)
}
/// Close the RAFS filesystem returned by `nydus_open_rafs()` and friends.
///
/// All `NydusFileHandle` objects created from the `NydusFsHandle` should be freed before calling
/// `nydus_close_rafs()`, otherwise it may cause a panic.
///
/// # Safety
/// Caller needs to ensure `handle` is valid, otherwise it may cause a memory access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_close_rafs(handle: NydusFsHandle) {
let mut fs = Box::from_raw(handle as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
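// Scramble the magic value so that any reuse of this dangling handle fails the magic assertion.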
fs.magic -= 0x4fdf_03cd_ae34_9d9a;
fs.rafs.destroy().unwrap();
}
#[cfg(test)]
mod tests {
use super::*;
use std::ffi::CString;
use std::io::Error;
use std::path::PathBuf;
use std::ptr::null;
pub(crate) fn open_file_system() -> NydusFsHandle {
let ret = unsafe { nydus_open_rafs(null(), null()) };
assert_eq!(ret, NYDUS_INVALID_FS_HANDLE);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::EINVAL)
);
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let bootstrap = PathBuf::from(root_dir)
.join("../tests/texture/repeatable/sha256-nocompress-repeatable");
let bootstrap = bootstrap.to_str().unwrap();
let bootstrap = CString::new(bootstrap).unwrap();
let blob_dir = PathBuf::from(root_dir).join("../tests/texture/repeatable/blobs");
let config = format!(
r#"
version = 2
id = "my_id"
[backend]
type = "localfs"
[backend.localfs]
dir = "{}"
[cache]
type = "dummycache"
[rafs]
"#,
blob_dir.display()
);
let config = CString::new(config).unwrap();
let fs = unsafe {
nydus_open_rafs(
bootstrap.as_ptr() as *const c_char,
config.as_ptr() as *const c_char,
)
};
assert_ne!(fs, NYDUS_INVALID_FS_HANDLE);
fs
}
#[test]
fn test_open_rafs() {
let fs = open_file_system();
unsafe { nydus_close_rafs(fs) };
}
#[test]
fn test_open_rafs_default() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let bootstrap = PathBuf::from(root_dir)
.join("../tests/texture/repeatable/sha256-nocompress-repeatable");
let bootstrap = bootstrap.to_str().unwrap();
let bootstrap = CString::new(bootstrap).unwrap();
let blob_dir = PathBuf::from(root_dir).join("../tests/texture/repeatable/blobs");
let blob_dir = blob_dir.to_str().unwrap();
let fs = unsafe {
nydus_open_rafs_default(bootstrap.as_ptr(), blob_dir.as_ptr() as *const c_char)
};
unsafe { nydus_close_rafs(fs) };
}
}

View File

@ -1,80 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! SDK C wrappers to access `nydus-rafs` and `nydus-storage` functionalities.
//!
//! # Generate Header File
//! Please use cbindgen to generate `nydus.h` header file from rust source code by:
//! ```
//! cargo install cbindgen
//! cbindgen -l c -v -o include/nydus.h
//! ```
//!
//! # Run C Test
//! ```
//! gcc -o nydus -L ../../target/debug/ -lnydus_clib nydus_rafs.c
//! ```
#[macro_use]
extern crate log;
extern crate core;
pub use file::*;
pub use fs::*;
mod file;
mod fs;
/// Type for RAFS filesystem inode number.
pub type Inode = u64;
/// Helper to set libc::errno
#[cfg(target_os = "linux")]
fn set_errno(errno: i32) {
unsafe { *libc::__errno_location() = errno };
}
/// Helper to set libc::errno
#[cfg(target_os = "macos")]
fn set_errno(errno: i32) {
unsafe { *libc::__error() = errno };
}
/// Macro to convert C `char *` into rust `&str`.
#[macro_export]
macro_rules! cstr_to_str {
($var: ident, $ret: expr) => {{
let s = CStr::from_ptr($var);
match s.to_str() {
Ok(v) => v,
Err(_e) => {
set_errno(libc::EINVAL);
return $ret;
}
}
}};
}
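// Illustrative (hypothetical) call site: the macro early-returns the supplied fallback
// value after setting errno to EINVAL when the C string is not valid UTF-8, which keeps
// FFI entry points short (null checks omitted for brevity):
//
//     unsafe extern "C" fn path_len(path: *const c_char) -> usize {
//         let path = cstr_to_str!(path, 0);
//         path.len()
//     }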
#[cfg(test)]
mod tests {
use super::*;
use std::io::Error;
#[test]
fn test_set_errno() {
assert_eq!(Error::raw_os_error(&Error::last_os_error()), Some(0));
set_errno(libc::EINVAL);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::EINVAL)
);
set_errno(libc::ENOSYS);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::ENOSYS)
);
set_errno(0);
assert_eq!(Error::raw_os_error(&Error::last_os_error()), Some(0));
}
}

1
contrib/ctr-remote/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
bin/

View File

@ -8,7 +8,7 @@ linters:
- goimports
- revive
- ineffassign
- govet
- vet
- unused
- misspell
disable:
@ -16,3 +16,6 @@ linters:
run:
deadline: 4m
skip-dirs:
- misc

View File

@ -0,0 +1,27 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif
.PHONY: all build release test clean
all: build
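# CGO is disabled so the resulting binary is statically linked and does not depend on the host libc.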
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/ctr-remote ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/ctr-remote ./cmd/main.go
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*

View File

@ -0,0 +1,65 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"os"
"github.com/containerd/containerd/cmd/ctr/app"
"github.com/containerd/containerd/pkg/seed"
"github.com/dragonflyoss/image-service/contrib/ctr-remote/commands"
"github.com/urfave/cli"
)
func init() {
seed.WithTimeAndRand()
}
func main() {
customCommands := []cli.Command{commands.RpullCommand}
app := app.New()
app.Description = "NOTE: Enhanced for nydus-snapshotter\n" + app.Description
for i := range app.Commands {
if app.Commands[i].Name == "images" {
sc := map[string]cli.Command{}
for _, subcmd := range customCommands {
sc[subcmd.Name] = subcmd
}
// First, replace duplicated subcommands
for j := range app.Commands[i].Subcommands {
for name, subcmd := range sc {
if name == app.Commands[i].Subcommands[j].Name {
app.Commands[i].Subcommands[j] = subcmd
delete(sc, name)
}
}
}
// Next, append all new subcommands
for _, subcmd := range sc {
app.Commands[i].Subcommands = append(app.Commands[i].Subcommands, subcmd)
}
break
}
}
if err := app.Run(os.Args); err != nil {
fmt.Fprintf(os.Stderr, "ctr-remote: %v\n", err)
os.Exit(1)
}
}

View File

@ -0,0 +1,103 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package commands
import (
"context"
"fmt"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cmd/ctr/commands"
"github.com/containerd/containerd/cmd/ctr/commands/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/log"
"github.com/containerd/nydus-snapshotter/pkg/label"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
const (
remoteSnapshotterName = "nydus"
)
var RpullCommand = cli.Command{
Name: "rpull",
Usage: "pull an image from a registry leveraging nydus-snapshotter",
ArgsUsage: "[flags] <ref>",
Description: `Fetch and prepare an image for use in containerd leveraging nydus-snapshotter.
After pulling an image, it should be ready to use the same reference in a run command.`,
Flags: append(commands.RegistryFlags, commands.LabelFlag),
Action: func(context *cli.Context) error {
var (
ref = context.Args().First()
config = &rPullConfig{}
)
if ref == "" {
return fmt.Errorf("please provide an image reference to pull")
}
client, ctx, cancel, err := commands.NewClient(context)
if err != nil {
return err
}
defer cancel()
ctx, done, err := client.WithLease(ctx)
if err != nil {
return err
}
defer done(ctx)
fc, err := content.NewFetchConfig(ctx, context)
if err != nil {
return err
}
config.FetchConfig = fc
return pull(ctx, client, ref, config)
},
}
type rPullConfig struct {
*content.FetchConfig
}
func pull(ctx context.Context, client *containerd.Client, ref string, config *rPullConfig) error {
pCtx := ctx
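// Print a "fetching" progress line for every descriptor except Docker schema 1 manifests.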
h := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if desc.MediaType != images.MediaTypeDockerSchema1Manifest {
fmt.Printf("fetching %v... %v\n", desc.Digest.String()[:15], desc.MediaType)
}
return nil, nil
})
log.G(pCtx).WithField("image", ref).Debug("fetching")
configLabels := commands.LabelArgs(config.Labels)
if _, err := client.Pull(pCtx, ref, []containerd.RemoteOpt{
containerd.WithPullLabels(configLabels),
containerd.WithResolver(config.Resolver),
containerd.WithImageHandler(h),
containerd.WithSchema1Conversion,
containerd.WithPullUnpack,
containerd.WithPullSnapshotter(remoteSnapshotterName),
containerd.WithImageHandlerWrapper(label.AppendLabelsHandlerWrapper(ref)),
}...); err != nil {
return err
}
return nil
}

63
contrib/ctr-remote/go.mod Normal file
View File

@ -0,0 +1,63 @@
module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.18
require (
github.com/containerd/containerd v1.6.6
github.com/containerd/nydus-snapshotter v0.3.0-alpha.1
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799
github.com/urfave/cli v1.22.5
)
require (
github.com/Microsoft/go-winio v0.5.1 // indirect
github.com/Microsoft/hcsshim v0.9.3 // indirect
github.com/cilium/ebpf v0.7.0 // indirect
github.com/containerd/cgroups v1.0.3 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.2.2 // indirect
github.com/containerd/fifo v1.0.0 // indirect
github.com/containerd/go-cni v1.1.6 // indirect
github.com/containerd/go-runc v1.0.0 // indirect
github.com/containerd/ttrpc v1.1.0 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/containernetworking/cni v1.1.1 // indirect
github.com/containernetworking/plugins v1.1.1 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/godbus/dbus/v5 v5.0.6 // indirect
github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/uuid v1.2.0 // indirect
github.com/klauspost/compress v1.15.1 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.5.0 // indirect
github.com/moby/sys/signal v0.6.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/runc v1.1.2 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/opencontainers/selinux v1.10.1 // indirect
github.com/pelletier/go-toml v1.9.3 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/russross/blackfriday/v2 v2.0.1 // indirect
github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad // indirect
golang.org/x/text v0.3.7 // indirect
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
)
replace (
github.com/opencontainers/image-spec => github.com/opencontainers/image-spec v1.0.2-0.20211117181255-693428a734f5
github.com/opencontainers/runc => github.com/opencontainers/runc v1.1.2
)

1113
contrib/ctr-remote/go.sum Normal file

File diff suppressed because it is too large

View File

@ -0,0 +1 @@
/bin

View File

@ -0,0 +1,21 @@
# https://golangci-lint.run/usage/configuration#config-file
linters:
enable:
- staticcheck
- unconvert
- gofmt
- goimports
- revive
- ineffassign
- vet
- unused
- misspell
disable:
- errcheck
run:
deadline: 4m
skip-dirs:
- misc

View File

@ -0,0 +1,13 @@
FROM golang:1.18
ARG GOPROXY="https://goproxy.cn,direct"
RUN mkdir -p /app
WORKDIR /app
COPY . ./
RUN CGO_ENABLED=0 GOOS=linux go build -v .
FROM alpine:3.13.6
RUN mkdir -p /plugin; mkdir -p /nydus
ARG NYDUSD_PATH=./nydusd
COPY --from=0 /app/nydus_graphdriver /plugin/nydus_graphdriver
COPY ${NYDUSD_PATH} /nydus
ENTRYPOINT [ "/plugin/nydus_graphdriver" ]

View File

@ -0,0 +1,27 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif
.PHONY: all build release test clean
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/nydus_graphdriver .
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus_graphdriver .
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*

View File

@ -1,3 +1,67 @@
# Docker Nydus Graph Driver
Moved to [docker-nydus-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver).
Docker supports remote graph drivers as plugins. With the nydus graph driver, you can start a container from a previously converted nydus image. The initial intent of the graph driver is to give users a quick way to experience the speed of starting a container from a nydus image, so it is **not ready for production usage**. If docker support matters in your use case, a PR telling us your story is welcome; we might enhance this in the future.
Chinese: [Starting Containers with Docker](../../docs/chinese_docker_graph_driver_guide.md)
## Architecture
---
![Architecture](../../docs/images/docker_graphdriver_arch.png)
## Procedures
### 1 Configure Nydus
Put your nydus configuration at the path `/var/lib/nydus/config.json`; this is also where the nydus remote backend is specified.
### 2 Install Graph Driver Plugin
#### Install from DockerHub
```
$ docker plugin install gechangwei/docker-nydus-graphdriver:0.2.0
```
### 3 Enable the Graph Driver
Before the nydus graph driver can be used to start containers, the plugin must be enabled.
```
$ sudo docker plugin enable gechangwei/docker-nydus-graphdriver:0.2.0
```
### 4 Switch to Docker Graph Driver
By default, docker manages all images with the built-in `overlay` graph driver. It can be switched to another driver, such as the nydus graph driver, by specifying the new driver in the daemon configuration file.
```
{
"experimental": true,
"storage-driver": "gechangwei/docker-nydus-graphdriver:0.2.0"
}
```
### 5 Restart Docker Service
```
$ sudo systemctl restart docker
```
## Verification
Execute `docker info` to verify that the steps above completed and that the nydus graph driver works normally.
![Docker Info](../../docs/images/docker_info_storage_driver.png)
## Start Container
Now just `run` a container or `pull` an image as you are used to.
## Limitation
1. Docker version >= 20.10.2 is required. Lower versions probably work, but they have not been tested yet.
2. When converting images through `nydusify`, the backend must be specified as `oss`.
3. The nydus graph driver is not compatible with classic OCI images, so you have to switch back to the built-in graph driver to use those images.

View File

@ -0,0 +1,43 @@
{
"description": "nydus image service plugin for Docker",
"documentation": "https://docs.docker.com/engine/extend/plugins/",
"entrypoint": [
"/plugin/nydus_graphdriver"
],
"network": {
"type": "host"
},
"interface": {
"types": [
"docker.graphdriver/1.0"
],
"socket": "plugin.sock"
},
"linux": {
"capabilities": [
"CAP_SYS_ADMIN",
"CAP_SYS_RESOURCE"
],
"Devices": [
{
"Path": "/dev/fuse"
}
]
},
"PropagatedMount": "/home",
"Mounts": [
{
"Name": "NYDUS_CONFIG",
"Source": "/var/lib/nydus/config.json",
"Destination": "/nydus/config.json",
"Type": "none",
"Options": [
"bind",
"ro"
],
"Settable": [
"source"
]
}
]
}

View File

@ -0,0 +1,44 @@
module github.com/dragonflyoss/image-service/contrib/nydus_graphdriver
go 1.18
require (
github.com/docker/docker v20.10.3-0.20211206061157-934f955e3d62+incompatible
github.com/docker/go-plugins-helpers v0.0.0-20211224144127-6eecb7beb651
github.com/moby/sys/mountinfo v0.5.0
github.com/opencontainers/selinux v1.10.1
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.8.1
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad
)
require (
github.com/Microsoft/go-winio v0.5.1 // indirect
github.com/containerd/containerd v1.6.6 // indirect
github.com/containerd/continuity v0.2.2 // indirect
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/klauspost/compress v1.11.13 // indirect
github.com/moby/sys/mount v0.3.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/opencontainers/runc v1.1.2 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/vbatts/tar-split v0.11.1 // indirect
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f // indirect
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
)
replace (
github.com/containerd/go-runc => github.com/containerd/go-runc v1.0.0
github.com/docker/distribution => github.com/docker/distribution v2.8.1+incompatible
github.com/opencontainers/image-spec => github.com/opencontainers/image-spec v1.0.2
github.com/opencontainers/runc => github.com/opencontainers/runc v1.1.2
)

View File

@ -0,0 +1,330 @@
bazil.org/fuse v0.0.0-20200407214033-5883e5a4b512/go.mod h1:FbcW6z/2VytnFDhZfumh8Ss8zxHE6qpMP5sHTRe0EaM=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Microsoft/go-winio v0.5.1 h1:aPJp2QD7OOrhO5tQXqQoGSJc+DjDtWTGLOmNyAm6FgY=
github.com/Microsoft/go-winio v0.5.1/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/hcsshim v0.9.3 h1:k371PzBuRrz2b+ebGuI2nVgVhgsVX60jMfSw80NECxo=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/containerd v1.6.6 h1:xJNPhbrmz8xAMDNoVjHy9YHtWwEQNS+CDkcIRh7t8Y0=
github.com/containerd/containerd v1.6.6/go.mod h1:ZoP1geJldzCVY3Tonoz7b1IXk8rIX0Nltt5QE4OMNk0=
github.com/containerd/continuity v0.2.2 h1:QSqfxcn8c+12slxwu00AtzXrsami0MJb/MQs9lOLHLA=
github.com/containerd/continuity v0.2.2/go.mod h1:pWygW9u7LtS1o4N/Tn0FoCFDIXZ7rxcMX7HX1Dmibvk=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.3-0.20211206061157-934f955e3d62+incompatible h1:zOc/xrISG6HmrZoMs10Jrzeqbm4Zfop2CmeDoBRynfI=
github.com/docker/docker v20.10.3-0.20211206061157-934f955e3d62+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-plugins-helpers v0.0.0-20211224144127-6eecb7beb651 h1:YcvzLmdrP/b8kLAGJ8GT7bdncgCAiWxJZIlt84D+RJg=
github.com/docker/go-plugins-helpers v0.0.0-20211224144127-6eecb7beb651/go.mod h1:LFyLie6XcDbyKGeVK6bHe+9aJTYCxWLBg5IrJZOaXKA=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.11.13 h1:eSvu8Tmq6j2psUJqJrLcWH6K3w5Dwc+qipbaA6eVEN4=
github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/sys/mount v0.3.0 h1:bXZYMmq7DBQPwHRxH/MG+u9+XF90ZOwoXpHTOznMGp0=
github.com/moby/sys/mount v0.3.0/go.mod h1:U2Z3ur2rXPFrFmy4q6WMwWrBOAQGYtYTRVM8BIvzbwk=
github.com/moby/sys/mountinfo v0.5.0 h1:2Ks8/r6lopsxWi9m58nlwjaeSzUX9iiL1vj5qB/9ObI=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/symlink v0.2.0 h1:tk1rOM+Ljp0nFmfOIBtlV3rTDlWOwFRhjEeAhZB0nZc=
github.com/moby/sys/symlink v0.2.0/go.mod h1:7uZVF2dqJjG/NsClqul95CqKOBRQyYSNnJ6BMgR/gFs=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v1.1.2 h1:2VSZwLx5k/BfsBxMMipG/LYUnmqOD/BPkIVgQUcTlLw=
github.com/opencontainers/runc v1.1.2/go.mod h1:Tj1hFw6eFWp/o33uxGf5yF2BX5yz2Z6iptFpuvbbKqc=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 h1:3snG66yBm59tKhhSPQrQ/0bCrv1LQbKt40LnUPiUxdc=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opencontainers/selinux v1.10.1 h1:09LIPVRP3uuZGQvgR+SgMSNBd1Eb3vlRbGqQpoHsF8w=
github.com/opencontainers/selinux v1.10.1/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/seccomp/libseccomp-golang v0.9.2-0.20210429002308-3879420cc921/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vbatts/tar-split v0.11.1 h1:0Odu65rhcZ3JZaPHxl7tCI3V/C/Q9Zf82UFravl02dE=
github.com/vbatts/tar-split v0.11.1/go.mod h1:LEuURwDEiWjRjwu46yU3KVGuUdVv/dcnpcEPSzR8z6g=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f h1:hEYJvxw1lSnWIl8X9ofsYMklzaDs90JI2az5YMd4fPM=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad h1:ntjMns5wyP/fN65tdBD4g8J5w8n015+iIIs9rtjXkY0=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa h1:I0YcKz0I7OAhddo7ya8kMnvprhcWM045PmkBdMO9zN0=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.43.0 h1:Eeu7bZtDZ2DpRCsLhUlcrLnvYaMK1Gz86a+hMVvELmM=
google.golang.org/grpc v1.43.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

View File

@ -0,0 +1,16 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package main
import (
"log"

"github.com/docker/go-plugins-helpers/graphdriver/shim"
"github.com/dragonflyoss/image-service/contrib/nydus_graphdriver/plugin/nydus"
)
func main() {
handler := shim.NewHandlerFromGraphDriver(nydus.Init)
// Serve the graphdriver plugin API on a unix socket named "plugin";
// gid 0 restricts socket access to root.
if err := handler.ServeUnix("plugin", 0); err != nil {
log.Fatal(err)
}
}

View File

@ -0,0 +1,133 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package nydus
import (
"context"
"encoding/json"
"io/ioutil"
"net"
"net/http"
"os"
"os/exec"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
NydusdConfigPath = "/nydus/config.json"
NydusdBin = "/nydus/nydusd"
NydusdSocket = "/nydus/api.sock"
)
type Nydus struct {
command *exec.Cmd
}
func New() *Nydus {
return &Nydus{}
}
type DaemonInfo struct {
ID string `json:"id"`
State string `json:"state"`
}
type errorMessage struct {
Code string `json:"code"`
Message string `json:"message"`
}
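// getDaemonStatus queries nydusd's status endpoint over its unix socket.
// For reference, the same check can be issued by hand (assuming the default
// socket path above):
//   curl --unix-socket /nydus/api.sock http://unix/api/v1/daemon
// A healthy daemon replies with JSON whose "state" field is "RUNNING".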
func getDaemonStatus(socket string) error {
transport := http.Transport{
MaxIdleConns: 10,
IdleConnTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
dialer := &net.Dialer{
Timeout: 5 * time.Second,
KeepAlive: 5 * time.Second,
}
return dialer.DialContext(ctx, "unix", socket)
},
}
client := http.Client{Transport: &transport, Timeout: 30 * time.Second}
resp, err := client.Get("http://unix/api/v1/daemon")
if err != nil {
return err
}
defer resp.Body.Close()
b, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode >= 400 {
var message errorMessage
if err := json.Unmarshal(b, &message); err != nil {
return errors.Errorf("request failed, status = %d, body = %s", resp.StatusCode, b)
}
return errors.Errorf("request failed, status = %d, code = %s, message = %s", resp.StatusCode, message.Code, message.Message)
}
var info DaemonInfo
if err = json.Unmarshal(b, &info); err != nil {
return err
}
if info.State != "RUNNING" {
return errors.Errorf("nydusd is not ready, current state %s", info.State)
}
return nil
}
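// Mount starts a nydusd instance that serves the given bootstrap at mountpoint.
// A usage sketch (paths are hypothetical):
//   n := New()
//   if err := n.Mount("/layers/foo/image/image.boot", "/layers/foo/nydus"); err != nil {
//       // nydusd failed to start or never reached the RUNNING state
//   }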
func (nydus *Nydus) Mount(bootstrap, mountpoint string) error {
args := []string{
"--apisock", NydusdSocket,
"--log-level", "info",
"--thread-num", "4",
"--bootstrap", bootstrap,
"--config", NudusdConfigPath,
"--mountpoint", mountpoint,
}
cmd := exec.Command(NydusdBin, args...)
logrus.Infof("Start nydusd. %s", cmd.String())
// Redirect logs from nydusd daemon to a proper place.
cmd.Stderr = os.Stderr
cmd.Stdout = os.Stdout
if err := cmd.Start(); err != nil {
return errors.Wrapf(err, "start nydusd")
}
nydus.command = cmd
ready := false
// Poll the daemon status for up to 3 seconds (30 x 100ms); give up and
// return an error if nydusd does not reach the RUNNING state in time.
for i := 0; i < 30; i++ {
err := getDaemonStatus(NydusdSocket)
if err == nil {
ready = true
break
} else {
logrus.Error(err)
time.Sleep(100 * time.Millisecond)
}
}
if !ready {
cmd.Process.Kill()
cmd.Wait()
return errors.New("nydusd did not reach RUNNING state in time")
}
return nil
}

View File

@ -0,0 +1,496 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package nydus
import (
"context"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"strings"
"github.com/pkg/errors"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/containerfs"
"github.com/docker/docker/pkg/directory"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/system"
"github.com/moby/sys/mountinfo"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
)
// With nydus image layers, there won't be many layers that need to be stacked.
const (
diffDirName = "diff"
workDirName = "work"
mergedDirName = "merged"
lowerFile = "lower"
nydusDirName = "nydus"
nydusMetaRelapath = "image/image.boot"
parentFile = "parent"
)
var backingFs = "<unknown>"
func isFileExisted(file string) (bool, error) {
if _, err := os.Stat(file); err == nil {
return true, nil
} else if os.IsNotExist(err) {
return false, nil
} else {
return false, err
}
}
// Driver contains information about the home directory and the list of active
// mounts that are created using this driver.
type Driver struct {
home string
nydus *Nydus
NydusMountpoint string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
ctr *graphdriver.RefCounter
}
func (d *Driver) dir(id string) string {
return path.Join(d.home, id)
}
func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
if err := os.MkdirAll(home, os.ModePerm); err != nil {
return nil, err
}
fsMagic, err := graphdriver.GetFSMagic(home)
if err != nil {
return nil, err
}
if fsName, ok := graphdriver.FsNames[fsMagic]; ok {
backingFs = fsName
}
// Check if we are running over btrfs, aufs, zfs, overlay, or ecryptfs,
// none of which can back an overlay mount.
switch fsMagic {
case graphdriver.FsMagicBtrfs, graphdriver.FsMagicAufs, graphdriver.FsMagicZfs, graphdriver.FsMagicOverlay, graphdriver.FsMagicEcryptfs:
logrus.Errorf("nydus graphdriver is not supported over %s", backingFs)
return nil, graphdriver.ErrIncompatibleFS
}
return &Driver{
home: home,
uidMaps: uidMaps,
gidMaps: gidMaps,
ctr: graphdriver.NewRefCounter(graphdriver.NewFsChecker(graphdriver.FsMagicOverlay))}, nil
}
// Status returns current driver information in a two dimensional string array.
// Output contains "Backing Filesystem" used in this implementation.
func (d *Driver) Status() [][2]string {
return [][2]string{
{"Backing Filesystem", backingFs},
// TODO: Add nydusd working status and version here.
{"Nydusd", "TBD"},
}
}
func (d *Driver) String() string {
return "Nydus graph driver"
}
// GetMetadata returns metadata about the overlay driver such as the
// LowerDir, UpperDir, WorkDir and MergedDir used to store data.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
dir := d.dir(id)
if _, err := os.Stat(dir); err != nil {
return nil, err
}
metadata := map[string]string{
"WorkDir": path.Join(dir, workDirName),
"MergedDir": path.Join(dir, mergedDirName),
"UpperDir": path.Join(dir, diffDirName),
}
lowerDirs, err := d.getLowerDirs(id)
if err != nil {
return nil, err
}
if len(lowerDirs) > 0 {
metadata["LowerDir"] = strings.Join(lowerDirs, ":")
}
return metadata, nil
}
// Cleanup releases any state created by the driver that should be cleaned
// up when the daemon is shut down. For now, we just have to stop the spawned
// nydusd daemon, which unmounts its own mount points before terminating.
func (d *Driver) Cleanup() error {
if d.nydus != nil {
d.nydus.command.Process.Signal(os.Interrupt)
d.nydus.command.Wait()
}
return nil
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {
logrus.Infof("Create read write - id %s parent %s", id, parent)
return d.Create(id, parent, opts)
}
// Create is used to create the upper, lower, and merged directories required for
// overlay fs for a given id.
// The parent filesystem is used to configure these directories for the overlay.
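// The resulting per-layer directory <home>/<id> contains (a sketch):
//   diff/   - layer contents, used as the overlay upper dir
//   merged/ - overlay mount point (created only when a parent exists)
//   work/   - overlay work dir (created only when a parent exists)
//   parent  - id of the direct parent layer
//   lower   - ":"-separated chain of lower layer ids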
func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr error) {
logrus.Infof("Create. id %s, parent %s", id, parent)
dir := d.dir(id)
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
return err
}
root := idtools.Identity{UID: rootUID, GID: rootGID}
if err := idtools.MkdirAllAndChown(path.Dir(dir), 0700, root); err != nil {
return err
}
if err := idtools.MkdirAndChown(dir, 0700, root); err != nil {
return err
}
defer func() {
// Clean up on failure
if retErr != nil {
os.RemoveAll(dir)
}
}()
if err := idtools.MkdirAndChown(path.Join(dir, diffDirName), 0755, root); err != nil {
return err
}
// if no parent directory, done
if parent == "" {
return nil
}
if err := idtools.MkdirAndChown(path.Join(dir, mergedDirName), 0700, root); err != nil {
return err
}
if err := idtools.MkdirAndChown(path.Join(dir, workDirName), 0700, root); err != nil {
return err
}
if err := ioutil.WriteFile(path.Join(dir, parentFile), []byte(parent), 0666); err != nil {
return err
}
if parentLowers, err := d.getLowerDirs(parent); err == nil {
lowers := strings.Join(append(parentLowers, parent), ":")
lowerFilePath := path.Join(d.dir(id), lowerFile)
if len(lowers) > 0 {
if err := ioutil.WriteFile(lowerFilePath, []byte(lowers), 0666); err != nil {
return err
}
}
} else {
return err
}
return nil
}
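// getLowerDirs reads the ":"-separated lower chain recorded in the layer's
// "lower" file, e.g. "id1:id2" (ids here are hypothetical); a missing file
// just means the layer has no parent.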
func (d *Driver) getLowerDirs(id string) ([]string, error) {
var lowersArray []string
lowers, err := ioutil.ReadFile(path.Join(d.dir(id), lowerFile))
if err == nil {
lowersArray = strings.Split(string(lowers), ":")
} else if !os.IsNotExist(err) {
return nil, err
}
return lowersArray, nil
}
// Remove cleans the directories that are created for this id.
func (d *Driver) Remove(id string) error {
logrus.Infof("Remove %s", id)
dir := d.dir(id)
if err := system.EnsureRemoveAll(dir); err != nil && !os.IsNotExist(err) {
return errors.Errorf("Can't remove %s", dir)
}
return nil
}
// Get creates and mounts the required file system for the given id and returns the mount path.
// The `id` is the mount id.
func (d *Driver) Get(id, mountLabel string) (fs containerfs.ContainerFS, retErr error) {
logrus.Infof("Mount layer - id %s, label %s", id, mountLabel)
dir := d.dir(id)
if _, err := os.Stat(dir); err != nil {
return nil, err
}
var lowers []string
lowers, retErr = d.getLowerDirs(id)
if retErr != nil {
return
}
newLowers := make([]string, 0)
for _, l := range lowers {
if l == id {
newLowers = append(newLowers, id)
break
}
// When we encounter a nydus layer, start the nydusd daemon to mount the
// RAFS filesystem, so it can serve as an overlay lower dir later.
if isNydus, err := d.isNydusLayer(l); isNydus {
if mounted, err := d.isNydusMounted(l); !mounted {
bootstrapPath := path.Join(d.dir(l), diffDirName, nydusMetaRelapath)
absMountpoint := path.Join(d.dir(l), nydusDirName)
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
return nil, err
}
root := idtools.Identity{UID: rootUID, GID: rootGID}
if err := idtools.MkdirAllAndChown(absMountpoint, 0700, root); err != nil {
return nil, errors.Wrap(err, "failed in creating nydus mountpoint")
}
nydus := New()
// Keep it, so we can wait for process termination.
d.nydus = nydus
if e := nydus.Mount(bootstrapPath, absMountpoint); e != nil {
return nil, e
}
} else if err != nil {
return nil, err
}
} else if err != nil {
return nil, err
}
// Relative path
nydusRelaMountpoint := path.Join(l, nydusDirName)
if _, err := os.Stat(path.Join(d.home, nydusRelaMountpoint)); err == nil {
newLowers = append(newLowers, nydusRelaMountpoint)
} else {
diffDir := path.Join(l, diffDirName)
// diffDir is relative to the driver home, so stat the absolute path.
if _, err := os.Stat(path.Join(d.home, diffDir)); err == nil {
newLowers = append(newLowers, diffDir)
}
}
}
mergedDir := path.Join(dir, mergedDirName)
if count := d.ctr.Increment(mergedDir); count > 1 {
return containerfs.NewLocalContainerFS(mergedDir), nil
}
defer func() {
if retErr != nil {
if c := d.ctr.Decrement(mergedDir); c <= 0 {
if err := unix.Unmount(mergedDir, 0); err != nil {
logrus.Warnf("unmount error %v: %v", mergedDir, err)
}
if err := unix.Rmdir(mergedDir); err != nil && !os.IsNotExist(err) {
logrus.Warnf("failed to remove %s: %v", id, err)
}
}
}
}()
// Paths below are relative to the driver home, keeping the mount option string short.
if err := os.Chdir(d.home); err != nil {
return nil, err
}
upperDir := path.Join(id, diffDirName)
workDir := path.Join(id, workDirName)
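// The assembled option string ends up looking like (ids shortened, all paths
// relative to the driver home):
//   lowerdir=<l1>/nydus:<l2>/diff,upperdir=<id>/diff,workdir=<id>/work
// Nydus layers contribute their FUSE mountpoint as a lower dir, while
// ordinary layers contribute their diff directory.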
opts := "lowerdir=" + strings.Join(newLowers, ":") + ",upperdir=" + upperDir + ",workdir=" + workDir
mountData := label.FormatMountLabel(opts, mountLabel)
mount := unix.Mount
mountTarget := mergedDir
logrus.Infof("mount options %s, target %s", opts, mountTarget)
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
return nil, err
}
if err := idtools.MkdirAndChown(mergedDir, 0700, idtools.Identity{UID: rootUID, GID: rootGID}); err != nil {
return nil, err
}
pageSize := unix.Getpagesize()
if len(mountData) > pageSize {
return nil, fmt.Errorf("cannot mount layer, mount data (%d bytes) exceeds page size", len(mountData))
}
}
if err := mount("overlay", mountTarget, "overlay", 0, mountData); err != nil {
return nil, fmt.Errorf("error creating overlay mount to %s: %v", mergedDir, err)
}
// chown "workdir/work" to the remapped root UID/GID. Overlay fs inside a
// user namespace requires this to move a directory from lower to upper.
if err := os.Chown(path.Join(workDir, workDirName), rootUID, rootGID); err != nil {
return nil, err
}
return containerfs.NewLocalContainerFS(mergedDir), nil
}
func (d *Driver) isNydusLayer(id string) (bool, error) {
dir := d.dir(id)
bootstrapPath := path.Join(dir, diffDirName, nydusMetaRelapath)
return isFileExisted(bootstrapPath)
}
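// isNydusMounted reports whether the layer's RAFS filesystem is currently
// mounted: the layer must be a nydus layer, its nydus mountpoint directory
// must exist, and mountinfo must show an active mount there.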
func (d *Driver) isNydusMounted(id string) (bool, error) {
if isNydus, err := d.isNydusLayer(id); !isNydus {
return isNydus, err
}
mp := path.Join(d.dir(id), nydusDirName)
if existed, err := isFileExisted(mp); !existed {
return existed, err
}
}
if mounted, err := mountinfo.Mounted(mp); !mounted {
return mounted, err
}
return true, nil
}
// Put unmounts the mount path created for the given id.
func (d *Driver) Put(id string) error {
if mounted, _ := d.isNydusMounted(id); mounted {
if d.nydus != nil {
// Signaling nydusd causes it to unmount itself before terminating,
// so we don't have to invoke umount here.
// Note: this only unmounts the nydusd FUSE mountpoint, not the overlay merged dir.
d.nydus.command.Process.Signal(os.Interrupt)
d.nydus.command.Wait()
}
}
dir := d.dir(id)
mountpoint := path.Join(dir, mergedDirName)
if count := d.ctr.Decrement(mountpoint); count > 0 {
return nil
}
if err := unix.Unmount(mountpoint, unix.MNT_DETACH); err != nil {
return errors.Wrapf(err, "failed to unmount from %s", mountpoint)
}
if err := unix.Rmdir(mountpoint); err != nil && !os.IsNotExist(err) {
return errors.Wrapf(err, "failed in removing %s", mountpoint)
}
return nil
}
// Exists checks to see if the id is already mounted.
func (d *Driver) Exists(id string) bool {
logrus.Info("Execute `Exists()`")
_, err := os.Stat(d.dir(id))
return err == nil
}
// isParent returns whether the passed-in parent is the direct parent of the passed-in layer.
func (d *Driver) isParent(id, parent string) bool {
lowers, err := d.getLowerDirs(id)
if err != nil || len(lowers) == 0 && parent != "" {
return false
}
if parent == "" {
return len(lowers) == 0
}
return parent == lowers[len(lowers)-1]
}
// ApplyDiff applies the new layer into a root
func (d *Driver) ApplyDiff(id, parent string, diff io.Reader) (size int64, err error) {
if !d.isParent(id, parent) {
return 0, errors.Errorf("Parent %s is not true parent of id %s", parent, id)
}
applyDir := path.Join(d.dir(id), diffDirName)
if err := archive.Unpack(diff, applyDir, &archive.TarOptions{
UIDMaps: d.uidMaps,
GIDMaps: d.gidMaps,
WhiteoutFormat: archive.OverlayWhiteoutFormat,
InUserNS: false,
}); err != nil {
return 0, err
}
parentLowers, err := d.getLowerDirs(parent)
if err != nil {
return 0, err
}
newLowers := strings.Join(append(parentLowers, parent), ":")
lowerFilePath := path.Join(d.dir(id), lowerFile)
if len(newLowers) > 0 {
if err := ioutil.WriteFile(lowerFilePath, []byte(newLowers), 0666); err != nil {
return 0, err
}
}
return directory.Size(context.TODO(), applyDir)
}
// DiffSize calculates the changes between the specified id
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
return 0, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}
// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
func (d *Driver) Diff(id, parent string) (io.ReadCloser, error) {
return nil, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
return nil, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}

View File

@ -1,8 +0,0 @@
package main
import "fmt"
// This is a dummy program to work around goreleaser being unable to pre-build the binary.
func main() {
fmt.Println("Hello, World!")
}

File diff suppressed because it is too large

View File

@ -1,19 +1,19 @@
[package]
name = "nydus-backend-proxy"
version = "0.2.0"
version = "0.1.0"
authors = ["The Nydus Developers"]
description = "A simple HTTP server to provide a fake container registry for nydusd"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
license = "Apache-2.0"
[dependencies]
rocket = "0.5.0"
http-range = "0.1.5"
nix = { version = "0.28", features = ["uio"] }
clap = "4.4"
once_cell = "1.19.0"
rocket = "0.5.0-rc"
http-range = "0.1.3"
nix = ">=0.23.0"
clap = "2.33"
once_cell = "1.10.0"
lazy_static = "1.4"
[workspace]

View File

@ -2,22 +2,29 @@
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate lazy_static;
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use std::collections::HashMap;
use std::env;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::{fs, io};
use clap::*;
use clap::{App, Arg};
use http_range::HttpRange;
use lazy_static::lazy_static;
use nix::sys::uio;
use rocket::fs::{FileServer, NamedFile};
use rocket::futures::lock::{Mutex, MutexGuard};
use rocket::http::Status;
use rocket::request::{self, FromRequest, Outcome};
use rocket::response::{self, stream::ReaderStream, Responder};
use rocket::*;
use rocket::{Request, Response};
lazy_static! {
static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend {
@ -158,12 +165,12 @@ impl<'r> Responder<'r, 'static> for RangeStream {
let mut read = 0u64;
let startpos = self.start as i64;
let size = self.len;
let file = self.file.clone();
let raw_fd = self.file.as_raw_fd();
Response::build()
.streamed_body(ReaderStream! {
while read < size {
match uio::pread(file.as_ref(), &mut buf, startpos + read as i64) {
match uio::pread(raw_fd, &mut buf, startpos + read as i64) {
Ok(mut n) => {
n = std::cmp::min(n, (size - read) as usize);
read += n as u64;
@ -261,31 +268,20 @@ async fn fetch(
#[rocket::main]
async fn main() {
let cmd = Command::new("nydus-backend-proxy")
.author(env!("CARGO_PKG_AUTHORS"))
.version(env!("CARGO_PKG_VERSION"))
let cmd = App::new("nydus-backend-proxy")
.author(crate_authors!())
.version(crate_version!())
.about("A simple HTTP server to provide a fake container registry for nydusd.")
.arg(
Arg::new("blobsdir")
.short('b')
Arg::with_name("blobsdir")
.short("b")
.long("blobsdir")
.required(true)
.takes_value(true)
.help("path to directory hosting nydus blob files"),
)
.help_template(
"\
{before-help}{name} {version}
{author-with-newline}{about-with-newline}
{usage-heading} {usage}
{all-args}{after-help}
",
)
.get_matches();
// Safe to unwrap() because `blobsdir` takes a value.
let path = cmd
.get_one::<String>("blobsdir")
.expect("required argument");
let path = cmd.value_of("blobsdir").unwrap();
init_blob_backend(Path::new(path)).await;

View File

@ -8,14 +8,14 @@ linters:
- goimports
- revive
- ineffassign
- govet
- vet
- unused
- misspell
disable:
- errcheck
run:
timeout: 5m
issues:
exclude-dirs:
- misc
deadline: 4m
skip-dirs:
- misc

View File

@ -1,8 +1,8 @@
GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?=
GOARCH ?= amd64
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
@ -13,17 +13,15 @@ endif
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus-overlayfs ./cmd/main.go
test: build
go vet $(PACKAGES)
go test -v -cover ${PACKAGES}
lint:
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*

View File

@ -8,16 +8,12 @@ import (
"syscall"
"github.com/pkg/errors"
cli "github.com/urfave/cli/v2"
"github.com/urfave/cli/v2"
"golang.org/x/sys/unix"
)
const (
// Extra mount option to pass Nydus specific information from snapshotter to runtime through containerd.
extraOptionKey = "extraoption="
// Kata virtual volume infmation passed from snapshotter to runtime through containerd, superset of `extraOptionKey`.
// Please refer to `KataVirtualVolume` in https://github.com/kata-containers/kata-containers/blob/main/src/libs/kata-types/src/mount.rs
kataVolumeOptionKey = "io.katacontainers.volume="
)
var (
@ -48,7 +44,7 @@ func parseArgs(args []string) (*mountArgs, error) {
}
if args[2] == "-o" && len(args[3]) != 0 {
for _, opt := range strings.Split(args[3], ",") {
if strings.HasPrefix(opt, extraOptionKey) || strings.HasPrefix(opt, kataVolumeOptionKey) {
if strings.HasPrefix(opt, extraOptionKey) {
// filter extraoption
continue
}

View File

@ -1,15 +1,15 @@
module github.com/dragonflyoss/nydus/contrib/nydus-overlayfs
module github.com/dragonflyoss/image-service/contrib/nydus-overlayfs
go 1.21
go 1.18
require (
github.com/pkg/errors v0.9.1
github.com/urfave/cli/v2 v2.27.1
golang.org/x/sys v0.15.0
github.com/urfave/cli/v2 v2.3.0
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac
)
require (
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect
github.com/russross/blackfriday/v2 v2.0.1 // indirect
github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect
)

View File

@ -1,10 +1,17 @@
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e h1:+SOyEddqYF09QP7vr7CgJ1eti3pY9Fn3LHO1M1r/0sI=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac h1:oN6lz7iLW/YC7un8pq+9bOLyXrprv2+DKfkJY+2LJJw=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=

Some files were not shown because too many files have changed in this diff