Compare commits

...

62 Commits

Author SHA1 Message Date
zyfjeff e8c324687a add --original-blob-ids args for merge
By default the merge command derives the name of the original
blob from the bootstrap name; add a CLI arg to specify it explicitly.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 16:45:22 +08:00
zyfjeff 7833d84b17 bugfix: do not fill 0 buffer, and skip validate features
1. Resetting the buffer to 0 will cause a race under concurrency.

2. Previously, the second validate_header did not actually take effect. It is
now fixed; however, it turns out the blob info features do not set the
--inline-bootstrap bit to true, so the features check is temporarily
skipped. This essentially needs to be fixed in upstream nydus-image.

Signed-off-by: zhaoshang <zhaoshangsjtu@linux.alibaba.com>
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 16:45:22 +08:00
zyfjeff 37f9af882f Support using /dev/stdin as SOURCE path for image build
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 16:45:22 +08:00
Yan Song 847725c176 docs: add nydusify copy usage
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-08-25 18:04:19 +08:00
Yan Song 5f17cff4fd nydusify: introduce copy subcommand
`nydusify copy` copies an image from a source registry to a target
registry; it also supports specifying a source backend storage.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-25 18:04:19 +08:00
David Baird f9ab2be073 Fix image-create with ACLs. Fixes #1394.
Signed-off-by: David Baird <dhbaird@gmail.com>
2023-08-17 14:16:58 +08:00
Yan Song 3faf95a1c9 storage: adjust token refresh interval automatically
- Make registry mirror log pretty;
- Adjust token refresh interval automatically;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-17 10:30:11 +08:00
Yan Song 193b7a14f2 storage: remove auth_through option for registry mirror
The auth_through option adds a burden for users to configure the mirror
and understand its meaning; since we have optimized the handling
of concurrent token requests, this option can now be removed.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-17 10:30:11 +08:00
Yan Song 04bc601e7e storage: implement simpler first token request
Nydusd's registry backend generates a surge of blob requests without
auth tokens on initial startup. This caused mirror backends (e.g. dragonfly)
to process requests very slowly; this commit fixes the problem.

It waits for the first blob request to complete before making other
blob requests. This ensures the first request caches a valid registry auth token,
so subsequent concurrent blob requests can reuse the cached token.

This change is worthwhile to reduce concurrent token requests, and it also makes
the behavior consistent with containerd, which first requests the image manifest
and caches the token before concurrently requesting blobs.
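
A minimal sketch of the gating idea (hypothetical, not the actual nydus-storage code): `std::sync::Once` already provides the required semantics, since every later caller of `call_once` blocks until the first closure returns.

```rust
use std::sync::Once;
use std::thread;

// The first blob request runs inside `call_once`; all other requests block on
// `call_once` until it returns, then proceed concurrently with the cached token.
static FIRST_REQUEST: Once = Once::new();

fn fetch_blob(id: usize) {
    let mut ran_first = false;
    FIRST_REQUEST.call_once(|| {
        ran_first = true;
        println!("blob {id}: first request, caching registry auth token");
    });
    if !ran_first {
        // Reached only after the first request has completed, so the token
        // cache is already valid and these requests can run in parallel.
        println!("blob {id}: concurrent request with cached token");
    }
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|id| thread::spawn(move || fetch_blob(id)))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```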

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-17 10:30:11 +08:00
Qinqi Qu accf15297e deps: change tar-rs to upstream version
Since upstream tar-rs has merged our fix for reading large uids/gids from
the PAX extension, change tar-rs back to the upstream version.

Also update the xattr dependency to 1.0.1.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-09 17:22:04 +08:00
Xuewei Niu c9792e2dd7 deps: Bump dependent crate versions
This pull request is mainly for updating vm-memory and vmm-sys-util.

The affected crates include:

- vm-memory: from 0.9.0 to 0.10.0
- vmm-sys-util: from 0.10.0 to 0.11.0
- vhost: from 0.5.0 to 0.6.0
- virtio-queue: from 0.6.0 to 0.7.0
- fuse-backend-rs: from 0.10.4 to 0.10.5
- vhost-user-backend: from 0.7.0 to 0.8.0

Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2023-08-04 15:02:06 +08:00
Qinqi Qu b2376dfca7 deps: update tar-rs to handle very large uid/gid in image unpack
Update tar-rs to support reading large uid/gid values from PAX extensions,
fixing very large UIDs/GIDs (>=2097151, the USTAR tar limit) being lost from
PAX-style tars during unpack.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-04 15:01:12 +08:00
Yan Song 53c38c005a nydusify: support --with-referrer option
With this option, we can track all nydus images associated with
an OCI image. For example, in Harbor we can cascade to show nydus
images linked to an OCI image, and deleting the OCI image can also delete
the corresponding nydus images. At runtime, nydus snapshotter can also
automatically upgrade an OCI image run to its nydus image.

Prior to this PR, this feature was enabled by default. However,
it is now known that Docker Hub does not yet support Referrer.

Therefore, add this option and disable the feature by default,
to ensure broad compatibility with various image registries.

Fix https://github.com/dragonflyoss/image-service/issues/1363.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-04 15:00:48 +08:00
dependabot[bot] 1b66204987 dep: upgrade dependencies in /contrib/nydusify
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-04 15:00:48 +08:00
Bin Tang e7624dac7a fs: add test for filling auth
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-28 09:46:13 +08:00
Bin Tang eb102644c4 docs: introduce IMAGE_PULL_AUTH env
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-28 09:46:13 +08:00
Bin Tang d8799e6e40 nydusd: parse image pull auth from env
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-28 09:46:13 +08:00
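
A hypothetical sketch of how the `IMAGE_PULL_AUTH` environment variable from the commits above could feed the `ConfigV2::update_registry_auth_info()` helper shown in the api config diff further down; the wiring here is illustrative, not the actual nydusd code.

```rust
use std::env;

// Assumption: ConfigV2 is exposed by the nydus-api crate; the helper called
// below is the one added in the api config diff later in this compare view.
use nydus_api::ConfigV2;

fn apply_image_pull_auth(config: &mut ConfigV2) {
    // Read the auth string from the IMAGE_PULL_AUTH environment variable and,
    // when it is set, fill it into the registry backend section of the config.
    let auth = env::var("IMAGE_PULL_AUTH").ok();
    config.update_registry_auth_info(&auth);
}
```
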
xwb1136021767 19d5b12bb0 nydus-image: add unit test for setting default compression algorithm
Signed-off-by: xwb1136021767 <1136021767@qq.com>
2023-07-15 16:46:56 +08:00
Jiang Liu 3181b313db rafs: avoid a debug_assert related to v5 amplify io
In function RafsSuper::amplify_io(), if the next inode `ni` is
zero-sized, the debug assertion in function calculate_bio_chunk_index()
(rafs/src/metadata/layout/v5.rs) will get triggered. So zero-sized
files should be skipped by amplify_io().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-07-14 09:55:10 +08:00
ccx1024cc b6ee7bb34e fix: amplify io is too large to hold in fuse buffer (#1311)
* fix: amplify io is too large to hold in fuse buffer

The FUSE request buffer is fixed at `FUSE_KERN_BUF_SIZE * pagesize() + FUSE_HEADER_SIZE`. When amplify io is larger than that, FuseDevWriter runs out of buffer space. As a result, an invalid data error is returned.

Reproduction:
    run nydusd with 3MB amplify_io
    error from random io:
        reply error header OutHeader { len: 16, error: -5, unique: 108 }, error Custom { kind: InvalidData, error: "data out of range, available 1052656 requested 1250066" }

Details:
    size of fuse buffer = 1052656 + 16 (size of inner header) = 256(page number) * 4096(page size) + 4096(fuse header)
    let amplify_io = min(user_specified, fuseWriter.available_bytes())

Resolution:
    This PR is not the best implementation, but it is independent of modifications to [fuse-backend-rs]("https://github.com/cloud-hypervisor/fuse-backend-rs").
    In the future, evaluation of amplify_io will be replaced with [ZeroCopyWriter.available_bytes()]("https://github.com/cloud-hypervisor/fuse-backend-rs/pull/135").
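
A minimal sketch of the clamping described above (hypothetical names; the real change lives in nydusd's FUSE request path):

```rust
/// Bound the user-configured amplification size by what the FUSE reply buffer
/// can actually hold, so a reply never exceeds the writer's capacity.
fn effective_amplify_io(user_specified: u64, writer_available: u64) -> u64 {
    user_specified.min(writer_available)
}

fn main() {
    // Numbers from the reproduction above: 256 pages * 4096 + 4096-byte FUSE
    // header, minus the 16-byte inner header = 1_052_656 bytes available.
    let available: u64 = 256 * 4096 + 4096 - 16;
    // A 3 MiB amplify_io gets clamped down to the available buffer size.
    assert_eq!(effective_amplify_io(3 * 1024 * 1024, available), available);
}
```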

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

* feat: e2e for amplify io larger than fuse buffer

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

---------

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
Co-authored-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-13 10:23:25 +08:00
泰友 7e39a5d8f1 fix: large files broke prefetch
Files larger than 4G lead to a prefetch panic, because the max blob io
range is smaller than 4G. This PR changes the max blob io size from u32 to
u64.
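
A one-line illustration of the limit (nothing nydus-specific): a u32 byte count tops out just below 4 GiB, so a blob io range covering a larger file only fits once the field is widened to u64.

```rust
fn main() {
    const GIB: u64 = 1024 * 1024 * 1024;
    let file_size = 5 * GIB; // a 5 GiB file
    // The old u32 field cannot hold this range, while the widened u64 field can.
    assert!(file_size > u32::MAX as u64);
    println!("{} bytes need a u64 blob io range", file_size);
}
```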

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-13 10:23:25 +08:00
泰友 14de0912af feat: add more types of file to smoke
Including:
    * regular file with Chinese name
    * regular file with long name
    * symbolic link to a deleted file
    * large regular file of 13MB
    * regular file with holes at both head and tail
    * empty regular file

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-13 10:23:25 +08:00
Yiqun Leng 6b61aade61 switch to a new nydus image for ci test
The network is not stable when pulling the old image, which may result in
ci test failure, so use the new image instead.

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-13 10:19:52 +08:00
YanSong 4707593d3a action: fix checkout on pull_request_target
The `pull_request_target` trigger will checkout the master branch
codes by default, but we need to use the new PR codes on smoke test.

See: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-11 15:08:33 +08:00
泰友 b9ceb71657 dep: openssl from 0.10.48 to 0.10.55
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 15:08:33 +08:00
泰友 a613f4876f fix: deprecated docker field leads to failure of nydusify check
`NydusImage.Config.Config.ArgsEscaped` is present only for legacy compatibility
with Docker and should not be used by new image builders. Nydusify (1.6 and
above) ignores it, which is an expected behavior.

This PR skips the comparison of this field in nydusify check, which previously led to failure.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 15:08:33 +08:00
泰友 9e266281e4 fix: merge io from same blob panic
When merging io from the same blob with different ids, an assertion breaks.
Images without blob deduplication suffer from it.

This PR removes the assertion that requires merging within the same blob index.
By design this makes sense, because different blob layers may share the same
blob file. A continuous read from the same blob across different layers is
helpful for performance.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 15:08:33 +08:00
Jiang Liu 0dda5dd1f1 dep: upgrade base64 to v0.21
Upgrade base64 to v0.21, to avoid multiple versions of the base64
crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-11 15:08:33 +08:00
Jiang Liu dd82282391 dep: upgrade openssl to 0.10.55 to fix cve warnings
error[vulnerability]: `openssl` `X509VerifyParamRef::set_host` buffer over-read
    ┌─ /github/workspace/Cargo.lock:122:1
    │
122 │ openssl 0.10.48 registry+https://github.com/rust-lang/crates.io-index
    │ --------------------------------------------------------------------- security vulnerability detected
    │
    = ID: RUSTSEC-2023-0044
    = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0044
    = When this function was passed an empty string, `openssl` would attempt to call `strlen` on it, reading arbitrary memory until it reached a NUL byte.
    = Announcement: https://github.com/sfackler/rust-openssl/issues/1965
    = Solution: Upgrade to >=0.10.55

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-11 15:08:33 +08:00
Yiqun Leng 67a7addb15 fix incidental bugs in ci test
1. sleep for a while after restarting containerd
2. only show detailed logs when the test fails

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-10 16:52:27 +08:00
lihuahua123 a508dddd16 Nydusify: fix some bugs in the subcommand mount of nydusify
- The `nydusify mount` subcommand doesn't require the `--backend-type` and `--backend-config` options when the backend is registry.
    - The way to resolve it is that we can get the `--backend-type` and `--backend-config` options from the docker configuration.
    - Also, we have refactored the code of the checker module in order to reuse the code.

Signed-off-by: lihuahua123 <771725652@qq.com>
2023-06-19 15:53:50 +08:00
Huang Jianan da501f758e builder: set the default compression algorithm for meta ci to lz4
We set the compression algorithm of meta ci to zstd by default, but there
is no option in nydus-image to configure it.

This could cause compatibility problems with nydus versions that do
not support zstd. Let's reset it to lz4 by default.

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
2023-06-12 09:54:46 +08:00
Jiang Liu e33e68b9cb dep: update dependency to fix a CVE warning
error[vulnerability]: Resource exhaustion vulnerability in h2 may lead to Denial of Service (DoS)
   ┌─ /github/workspace/Cargo.lock:68:1
   │
68 │ h2 0.3.13 registry+https://github.com/rust-lang/crates.io-index
   │ --------------------------------------------------------------- security vulnerability detected
   │
   = ID: RUSTSEC-2023-0034
   = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0034
   = If an attacker is able to flood the network with pairs of `HEADERS`/`RST_STREAM` frames, such that the `h2` application is not able to accept them faster than the bytes are received, the pending accept queue can grow in memory usage. Being able to do this consistently can result in excessive memory use, and eventually trigger Out Of Memory.

     This flaw is corrected in [hyperium/h2#668](https://github.com/hyperium/h2/pull/668), which restricts remote reset stream count by default.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-12 09:54:46 +08:00
Huang Jianan cf6a216f02 contrib: support nydus-overlayfs and ctr-remote on different platforms
Otherwise, the binary we compiled cannot run on other platforms such as
arm.

Signed-off-by: Huang Jianan <jnhuang@linux.alibaba.com>
2023-05-16 15:35:46 +08:00
Yan Song 04fb92c5aa action: fix smoke test for branch pattern
To match `master` and `stable/*` branches at least.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-20 18:32:45 +08:00
Yan Song 8c9054264c action: upgrade golangci-lint to v1.51.2
To resolve the panic when running golangci-lint:

```
panic: load embedded ruleguard rules: rules/rules.go:13: can't load fmt
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-17 11:43:46 +08:00
imeoer 154bbbf4c7
Merge pull request #1215 from jiangliu/v2.2-backport
Backports two bugfixes from master into stable/v2.2
2023-04-17 11:02:07 +08:00
Jiang Liu 8482792dab rafs: fix a regression caused by commit 2616fb2c05
Fix a regression caused by commit 2616fb2c05.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-14 18:04:04 +08:00
Jiang Liu 27fd2b4925 rafs: fix a possible bug in v6_dirent_size()
Function Node::v6_dirent_size() may return a wrong result when "." and
".." are not the first and second entries in the sorted dirent array.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-14 18:04:00 +08:00
imeoer 72da69cb3d
Merge pull request #1195 from jiangliu/is_present
nydus: fix a possible panic caused by SubCmdArgs::is_present()
2023-04-10 15:26:40 +08:00
imeoer 460454a635
Merge pull request #1199 from taoohong/mushu/stable/v2.2
service: add README for nydus-service
2023-04-10 10:15:04 +08:00
taohong c0293263ec service: add README for nydus-service
Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-04-07 16:51:37 +08:00
Jiang Liu 5153260d7a nydus: fix a possible panic caused by SubCmdArgs::is_present()
Fix a possible panic caused by SubCmdArgs::is_present().

Fixes: https://github.com/dragonflyoss/image-service/issues/1194

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-05 10:54:41 +08:00
Jiang Liu 41a8e11c80
Merge pull request #1191 from adamqqqplay/v2.2-backport
[backport] contrib: upgrade runc to v1.1.5
2023-03-31 16:32:22 +08:00
Qinqi Qu 5ac2a5b666 contrib: upgrade runc to v1.1.5
Runc v1.1.5 fixes three CVEs; we should upgrade it.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-31 15:01:29 +08:00
Jiang Liu 4bcccd7ccd deny: fix cargo deny warnings related to openssl
Fix cargo deny warnings related to openssl.

https://github.com/dragonflyoss/image-service/actions/runs/4522515576/jobs/7965040490

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-31 15:01:01 +08:00
Jiang Liu d2bbd82149
Merge pull request #1171 from ccx1024cc/morgan/backport
backport fix/feature to stable 2.2
2023-03-24 22:33:24 +08:00
Qinqi Qu 3031f7573a deps: bump tempfile version to 3.4.0
Update tempfile related crates to fix https://github.com/advisories/GHSA-mc8h-8q98-g5hr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-23 18:04:25 +08:00
Yiqun Leng 3c4ceb6118 ci test: fix bug of compiling nydus-snapshotter
Since developers changed "make clear" to "make clean" in the Makefile
in nydus-snapshotter, it also needs to be updated in ci test.
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-03-23 18:04:20 +08:00
泰友 6973d9db3e fix: ci: actions are not triggered for stable/v2.2
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-23 15:27:50 +08:00
Yan Song d885d1a25b nydusify: cleanup work directory when conversion finish
Remove the work directory to clean up the temporary image
blob data after the conversion is finished.

We should only clean up when the work directory did not exist
before, otherwise we may delete user data by mistake.

Fix: https://github.com/dragonflyoss/image-service/issues/1162

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:26:53 +08:00
Yan Song 009443b91e nydusify: fix oci media type handle
Bump nydus snapshotter to v0.7.3 and bring some fixups:

1. If the original image is already an OCI type, we should forcibly set the bootstrap layer to the OCI type.
2. We need to append a history item for the bootstrap layer, to ensure history consistency, see: e5d5810851/manifest/schema1/config_builder.go (L136)

Related PR: https://github.com/containerd/nydus-snapshotter/pull/427, https://github.com/goharbor/acceleration-service/pull/119

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:26:25 +08:00
泰友 62d213e6fa rafs: fix amplify can not be skipped
``` json
{
    "device":{
        "backend":{
            "type":"registry",
            "config":{
                "readahead":false,
                "host":"dockerhub.kubekey.local",
                "repo":"dfns/alpine",
                "auth":"YWRtaw46SGFyYm9VMTIZNDU=",
                "scheme":"https",
                "skip_verify":true,
                "proxy":{
                    "fallback":false
                }
            }
        },
        "cache":{
            "type":"",
            "config":{
                "work_dir":"/var/lib/containerd-nydus/cache",
                "disable_indexed_map":false
            }
        }
    },
    "mode":"direct",
    "digest_validate":false,
    "jostats_files":true,
    "enable_xattr":true,
    "access_pattern":true,
    "latest_read_files":true,
    "batch_size":0,
    "amplify_io":0,
    "fs_prefetch":{
        "enable":false,
        "prefetch_all":false,
        "threads_count":10,
        "merging_size":131072,
        "bandwidth_rate":1048576,
        "batch_size":0,
        "amplify_io":0
    }
}
```
`{.fs_prefetch.merging_size}` is used, instead of `{.amplify_io}`

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-23 15:25:58 +08:00
Yan Song 3fb31b91c2 nydusify: forcibly enable `--oci` option when `--oci-ref` is enabled
We need to forcibly enable the `--oci` option to allow appending the
related annotation for zran images, otherwise an error is thrown:

```
merge nydus layers: invalid label containerd.io/snapshot/nydus-ref=: invalid checksum digest format
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song 37e382c72d nydusify: fix unnecessary golang-lint error
```
golangci-lint run
Error: pkg/converter/provider/ported.go:47:64: SA1019: rCtx.ConvertSchema1 is deprecated: use Schema 2 or OCI images. (staticcheck)
	if desc.MediaType == images.MediaTypeDockerSchema1Manifest && rCtx.ConvertSchema1 {
	                                                              ^
Error: pkg/converter/provider/ported.go:20:2: SA1019: "github.com/containerd/containerd/remotes/docker/schema1" is deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1. (staticcheck)
	"github.com/containerd/containerd/remotes/docker/schema1"
	^
```

Disable the check; it's unnecessary to check the ported code.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song b60e92ae6a nydusify: fix `--oci` option for convert subcommand
The `--oci` option was not working because we made it reversed before;
this patch fixes it and keeps compatibility with the old option
`--docker-v2-format`.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song da8083c550 nydusify: fix pulling all platforms of source image
We should only handle the specific platform for pulling via
`platforms.MatchComparer`, otherwise nydusify will pull
the layer data of all platforms for a source image.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:25:36 +08:00
Yan Song b0f5edbbc7 rafs: do not fix blob id for old bootstrap
In fact, there is no way to tell whether a separate old bootstrap file
was inlined into the blob. For example, for an old merged bootstrap,
we can't set the blob id it references as the filename, otherwise
it will break the blob table when loading rafs.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:20:06 +08:00
Yan Song a2ad16d4d2 smoke: add `--parent-bootstrap` for merge test
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:20:06 +08:00
Yan Song 7e6502711f builder: support `--parent-bootstrap` for merge
This option allows merging multiple bootstraps of upper layers with
the bootstrap of a parent image, so that we can implement the container
commit operation for nydus images.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-23 15:20:03 +08:00
Jiang Liu 115525298f
Merge pull request #1133 from jiangliu/v2.2-fix-get-compressed-size
nydus-image: fix an underflow issue in get_compressed_size()
2023-03-03 11:20:46 +08:00
Jiang Liu 6e0f69b673 nydus-image: fix an underflow issue in get_compressed_size()
Fix an underflow issue in get_compressed_size() by skipping the generation of
useless Tar/Toc headers.

Fixes: https://github.com/dragonflyoss/image-service/issues/1129

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-03 10:22:19 +08:00
83 changed files with 3664 additions and 4573 deletions

View File

@ -34,7 +34,7 @@ jobs:
${{ runner.os }}-golang-
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.47.3
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.51.2
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@master

View File

@ -2,10 +2,10 @@ name: Smoke Test
on:
push:
branches: ["*"]
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["*"]
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
@ -36,7 +36,7 @@ jobs:
${{ runner.os }}-golang-
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.47.3
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
make -e DOCKER=false nydusify-release
make -e DOCKER=false contrib-test
- name: Upload Nydusify
@ -125,7 +125,7 @@ jobs:
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.47.3
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
sudo -E make smoke-only
nydus-unit-test:

177
Cargo.lock generated
View File

@ -99,9 +99,9 @@ dependencies = [
[[package]]
name = "base64"
version = "0.13.0"
version = "0.21.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "904dfeac50f3cdaba28fc6f57fdcddb75f49ed61346676a78c4ffe55877802fd"
checksum = "a4a4ddaa51a5bc52a6948f74c06d20aaaddb71924eab79b8c97a8c556e942d6a"
[[package]]
name = "bitflags"
@ -376,6 +376,27 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "errno"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f639046355ee4f37944e44f60642c6f3a7efa3cf6b78c78a0d989a8ce6c396a1"
dependencies = [
"errno-dragonfly",
"libc",
"winapi",
]
[[package]]
name = "errno-dragonfly"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa68f1b12764fab894d2755d2518754e71b4fd80ecfb822714a1206c2aab39bf"
dependencies = [
"cc",
"libc",
]
[[package]]
name = "fastrand"
version = "1.7.0"
@ -458,9 +479,9 @@ dependencies = [
[[package]]
name = "fuse-backend-rs"
version = "0.10.1"
version = "0.10.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91aa2575daf333bef77ad5026d28d5ebdb04ec8e5d7a4ac9b1bb03976c2d673a"
checksum = "f85357722be4bf3d0b7548bedf7499686c77628c2c61cb99c6519463f7a9e5f0"
dependencies = [
"arc-swap",
"bitflags",
@ -471,10 +492,9 @@ dependencies = [
"log",
"mio",
"nix",
"tokio-uring",
"vhost",
"virtio-queue",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
]
@ -614,9 +634,9 @@ dependencies = [
[[package]]
name = "h2"
version = "0.3.13"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37a82c6d637fc9515a4694bbf1cb2457b79d81ce52b3108bdeea58b07dd34a57"
checksum = "17f8a914c2987b688368b5138aa05321db91f4090cf26118185672ad588bce21"
dependencies = [
"bytes",
"fnv",
@ -806,13 +826,13 @@ dependencies = [
]
[[package]]
name = "io-uring"
version = "0.5.9"
name = "io-lifetimes"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ba34abb5175052fc1a2227a10d2275b7386c9990167de9786c0b88d8b062330"
checksum = "e7d6c6f8c91b4b9ed43484ad1a938e393caf35960fce7f82a040497207bd8e9e"
dependencies = [
"bitflags",
"libc",
"windows-sys",
]
[[package]]
@ -864,9 +884,9 @@ dependencies = [
[[package]]
name = "libc"
version = "0.2.139"
version = "0.2.147"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "201de327520df007757c1f0adce6e827fe8562fbc28bfd9c15571c66ca1f5f79"
checksum = "b4668fb0ea861c1df094127ac5f1da3409a82116a4ba74fca2e58ef927159bb3"
[[package]]
name = "libz-sys"
@ -890,6 +910,12 @@ dependencies = [
"cc",
]
[[package]]
name = "linux-raw-sys"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f051f77a7c8e6957c0696eac88f26b0117e54f52d3fc682ab19397a8812846a4"
[[package]]
name = "lock_api"
version = "0.4.7"
@ -989,9 +1015,9 @@ dependencies = [
[[package]]
name = "native-tls"
version = "0.2.10"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd7e2f3618557f980e0b17e8856252eee3c97fa12c54dff0ca290fb6266ca4a9"
checksum = "07226173c32f2926027b63cce4bcd8076c3552846cbe7925f3aaffeac0a3b92e"
dependencies = [
"lazy_static",
"libc",
@ -1112,7 +1138,7 @@ dependencies = [
"nydus-storage",
"serde",
"serde_json",
"vm-memory",
"vm-memory 0.9.0",
]
[[package]]
@ -1163,7 +1189,7 @@ dependencies = [
"serde",
"serde_json",
"spmc",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
]
@ -1174,6 +1200,7 @@ dependencies = [
"anyhow",
"base64",
"clap",
"flexi_logger",
"fuse-backend-rs",
"hex",
"hyper",
@ -1202,7 +1229,7 @@ dependencies = [
"vhost-user-backend",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
"xattr",
]
@ -1231,7 +1258,7 @@ dependencies = [
"vhost-user-backend",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
]
@ -1268,7 +1295,7 @@ dependencies = [
"time",
"tokio",
"url",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
]
@ -1286,6 +1313,7 @@ dependencies = [
"lz4-sys",
"nix",
"nydus-error",
"openssl",
"serde",
"serde_json",
"sha2",
@ -1312,9 +1340,9 @@ checksum = "18a6dbe30758c9f83eb00cbea4ac95966305f5a7772f3f42ebfc7fc7eddbd8e1"
[[package]]
name = "openssl"
version = "0.10.45"
version = "0.10.55"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b102428fd03bc5edf97f62620f7298614c45cedf287c271e7ed450bbaf83f2e1"
checksum = "345df152bc43501c5eb9e4654ff05f794effb78d4efe3d53abc158baddc0703d"
dependencies = [
"bitflags",
"cfg-if",
@ -1353,11 +1381,10 @@ dependencies = [
[[package]]
name = "openssl-sys"
version = "0.9.80"
version = "0.9.90"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "23bbbf7854cd45b83958ebe919f0e8e516793727652e27fda10a8384cfc790b7"
checksum = "374533b0e45f3a7ced10fcaeccca020e66656bc03dac384f852e4e5a7a8104a6"
dependencies = [
"autocfg",
"cc",
"libc",
"openssl-src",
@ -1512,20 +1539,11 @@ version = "0.6.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3f87b73ce11b1619a3c6332f45341e0047173771e8b8b73f87bfeefb7b56244"
[[package]]
name = "remove_dir_all"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7"
dependencies = [
"winapi",
]
[[package]]
name = "reqwest"
version = "0.11.11"
version = "0.11.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b75aa69a3f06bbcc66ede33af2af253c6f7a86b1ca0033f60c580a27074fbf92"
checksum = "27b71749df584b7f4cac2c426c127a7c785a5106cc98f7a8feb044115f0fa254"
dependencies = [
"base64",
"bytes",
@ -1539,10 +1557,10 @@ dependencies = [
"hyper-tls",
"ipnet",
"js-sys",
"lazy_static",
"log",
"mime",
"native-tls",
"once_cell",
"percent-encoding",
"pin-project-lite",
"serde",
@ -1592,6 +1610,20 @@ version = "0.1.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ef03e0a2b150c7a90d01faf6254c9c48a41e95fb2a8c2ac1c6f0d2b9aefc342"
[[package]]
name = "rustix"
version = "0.36.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4fdebc4b395b7fbb9ab11e462e20ed9051e7b16e42d24042c776eca0ac81b03"
dependencies = [
"bitflags",
"errno",
"io-lifetimes",
"libc",
"linux-raw-sys",
"windows-sys",
]
[[package]]
name = "ryu"
version = "1.0.10"
@ -1607,12 +1639,6 @@ dependencies = [
"windows-sys",
]
[[package]]
name = "scoped-tls"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ea6a9290e3c9cf0f18145ef7ffa62d68ee0bf5fcd651017e586dc7fd5da448c2"
[[package]]
name = "scopeguard"
version = "1.1.0"
@ -1769,9 +1795,9 @@ dependencies = [
[[package]]
name = "tar"
version = "0.4.38"
version = "0.4.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4b55807c0344e1e6c04d7c965f5289c39a8d94ae23ed5c0b57aabac549f871c6"
checksum = "b16afcea1f22891c49a00c751c7b63b2233284064f11a200fc624137c51e2ddb"
dependencies = [
"filetime",
"libc",
@ -1780,16 +1806,15 @@ dependencies = [
[[package]]
name = "tempfile"
version = "3.3.0"
version = "3.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5cdb1ef4eaeeaddc8fbd371e5017057064af0911902ef36b39801f67cc6d79e4"
checksum = "af18f7ae1acd354b992402e9ec5864359d693cd8a79dcbef59f76891701c1e95"
dependencies = [
"cfg-if",
"fastrand",
"libc",
"redox_syscall",
"remove_dir_all",
"winapi",
"rustix",
"windows-sys",
]
[[package]]
@ -1887,20 +1912,6 @@ dependencies = [
"tokio",
]
[[package]]
name = "tokio-uring"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0d5e02bb137e030b3a547c65a3bd2f1836d66a97369fdcc69034002b10e155ef"
dependencies = [
"io-uring",
"libc",
"scoped-tls",
"slab",
"socket2",
"tokio",
]
[[package]]
name = "tokio-util"
version = "0.7.4"
@ -2036,28 +2047,28 @@ checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"
[[package]]
name = "vhost"
version = "0.5.0"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "79243657c76e5c90dcbf60187c842614f6dfc7123972c55bb3bcc446792aca93"
checksum = "c9b791c5b0717a0558888a4cf7240cea836f39a99cb342e12ce633dcaa078072"
dependencies = [
"bitflags",
"libc",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
]
[[package]]
name = "vhost-user-backend"
version = "0.7.0"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a0fc7d5f8e2943cd9f2ecd58be3f2078add863a49573d14dd9d64e1ab26544c"
checksum = "9f237b91db4ac339d639fb43398b52d785fa51e3c7760ac9425148863c1f4303"
dependencies = [
"libc",
"log",
"vhost",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
]
@ -2069,13 +2080,13 @@ checksum = "3ff512178285488516ed85f15b5d0113a7cdb89e9e8a760b269ae4f02b84bd6b"
[[package]]
name = "virtio-queue"
version = "0.6.1"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "435dd49c7b38419729afd43675850c7b5dc4728f2fabd70c7a9079a331e4f8c6"
checksum = "3ba81e2bcc21c0d2fc5e6683e79367e26ad219197423a498df801d79d5ba77bd"
dependencies = [
"log",
"virtio-bindings",
"vm-memory",
"vm-memory 0.10.0",
"vmm-sys-util",
]
@ -2084,6 +2095,16 @@ name = "vm-memory"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "583f213899e8a5eea23d9c507252d4bed5bc88f0ecbe0783262f80034630744b"
dependencies = [
"libc",
"winapi",
]
[[package]]
name = "vm-memory"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "688a70366615b45575a424d9c665561c1b5ab2224d494f706b6a6812911a827c"
dependencies = [
"arc-swap",
"libc",
@ -2092,9 +2113,9 @@ dependencies = [
[[package]]
name = "vmm-sys-util"
version = "0.10.0"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08604d7be03eb26e33b3cee3ed4aef2bf550b305d1cca60e84da5d28d3790b62"
checksum = "dd64fe09d8e880e600c324e7d664760a17f56e9672b7495a86381b49e4f72f46"
dependencies = [
"bitflags",
"libc",
@ -2291,9 +2312,9 @@ dependencies = [
[[package]]
name = "xattr"
version = "0.2.3"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d1526bbe5aaeb5eb06885f4d987bcdfa5e23187055de9b83fe00156a821fabc"
checksum = "f4686009f71ff3e5c4dbcf1a282d0a44db3f021ba69350cd42086b3e5f1c6985"
dependencies = [
"libc",
]

View File

@ -31,9 +31,10 @@ path = "src/lib.rs"
[dependencies]
anyhow = "1"
base64 = "0.13.0"
base64 = "0.21"
clap = { version = "4.0.18", features = ["derive", "cargo"] }
fuse-backend-rs = "0.10.1"
flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.10.4"
hex = "0.4.3"
hyper = "0.14.11"
hyperlocal = "0.8.0"
@ -47,13 +48,13 @@ rlimit = "0.9.0"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.51"
sha2 = "0.10.2"
tar = "0.4.38"
tar = "0.4.40"
tokio = { version = "1.24", features = ["macros"] }
vmm-sys-util = "0.10.0"
xattr = "0.2.3"
vmm-sys-util = "0.11.0"
xattr = "1.0.1"
# Build static linked openssl library
openssl = { version = "0.10.45", features = ["vendored"] }
openssl = { version = "0.10.55", features = ["vendored"] }
# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
#openssl-src = { version = "111.22" }
@ -65,11 +66,11 @@ nydus-service = { version = "0.2.0", path = "service" }
nydus-storage = { version = "0.6.2", path = "storage" }
nydus-utils = { version = "0.4.1", path = "utils" }
vhost = { version = "0.5.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.7.0", optional = true }
vhost = { version = "0.6.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.8.0", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { version = "0.6.0", optional = true }
vm-memory = { version = "0.9.0", features = ["backend-mmap"], optional = true }
virtio-queue = { version = "0.7.0", optional = true }
vm-memory = { version = "0.10.0", features = ["backend-mmap"], optional = true }
[features]
default = [
@ -96,4 +97,4 @@ backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
[workspace]
members = ["api", "app", "blobfs", "clib", "error", "rafs", "storage", "service", "utils"]
members = ["api", "app", "blobfs", "clib", "error", "rafs", "storage", "service", "utils"]

View File

@ -137,6 +137,11 @@ The containerd remote snapshotter plugin [nydus-snapshotter](https://github.com/
In the future, `zstd::chunked` can work in this way as well.
## Reuse Nydus Services
To use the key features of nydus natively in your project without deliberately preparing and invoking `nydusd`, [nydus-service](./service/README.md) helps to reuse the core services of nydus.
## Documentation
Browse the documentation to learn more. Here are some topics you may be interested in:

View File

@ -23,7 +23,7 @@ url = { version = "2.1.1", optional = true }
nydus-error = { version = "0.2", path = "../error" }
[dev-dependencies]
vmm-sys-util = { version = "0.10" }
vmm-sys-util = { version = "0.11" }
[features]
handler = ["dbs-uhttp", "http", "lazy_static", "mio", "url"]

View File

@ -202,6 +202,17 @@ impl ConfigV2 {
false
}
}
/// Fill authorization for registry backend.
pub fn update_registry_auth_info(&mut self, auth: &Option<String>) {
if let Some(auth) = auth {
if let Some(backend) = self.backend.as_mut() {
if let Some(registry) = backend.registry.as_mut() {
registry.auth = Some(auth.to_string());
}
}
}
}
}
impl FromStr for ConfigV2 {
@ -843,12 +854,6 @@ pub struct MirrorConfig {
/// HTTP request headers to be passed to mirror server.
#[serde(default)]
pub headers: HashMap<String, String>,
/// Whether the authorization process is through mirror, default to false.
/// true: authorization through mirror, e.g. Using normal registry as mirror.
/// false: authorization through original registry,
/// e.g. when using Dragonfly server as mirror, authorization through it may affect performance.
#[serde(default)]
pub auth_through: bool,
/// Interval for mirror health checking, in seconds.
#[serde(default = "default_check_interval")]
pub health_check_interval: u64,
@ -862,7 +867,6 @@ impl Default for MirrorConfig {
Self {
host: String::new(),
headers: HashMap::new(),
auth_through: false,
health_check_interval: 5,
failure_limit: 5,
ping_url: String::new(),
@ -1586,7 +1590,6 @@ mod tests {
[[backend.oss.mirrors]]
host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
auth_through = true
health_check_interval = 10
failure_limit = 10
"#;
@ -1620,7 +1623,6 @@ mod tests {
let mirror = &oss.mirrors[0];
assert_eq!(mirror.host, "http://127.0.0.1:65001");
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
assert!(mirror.auth_through);
assert!(mirror.headers.is_empty());
assert_eq!(mirror.health_check_interval, 10);
assert_eq!(mirror.failure_limit, 10);
@ -1652,7 +1654,6 @@ mod tests {
[[backend.registry.mirrors]]
host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
auth_through = true
health_check_interval = 10
failure_limit = 10
"#;
@ -1688,7 +1689,6 @@ mod tests {
let mirror = &registry.mirrors[0];
assert_eq!(mirror.host, "http://127.0.0.1:65001");
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
assert!(mirror.auth_through);
assert!(mirror.headers.is_empty());
assert_eq!(mirror.health_check_interval, 10);
assert_eq!(mirror.failure_limit, 10);
@ -1895,4 +1895,48 @@ mod tests {
assert_eq!(&config.id, "id1");
assert_eq!(config.backend.as_ref().unwrap().backend_type, "localfs");
}
#[test]
fn test_update_registry_auth_info() {
let config = r#"
{
"device": {
"id": "test",
"backend": {
"type": "registry",
"config": {
"readahead": false,
"host": "docker.io",
"repo": "library/nginx",
"scheme": "https",
"proxy": {
"fallback": false
},
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 8
}
}
},
"mode": "direct",
"digest_validate": false,
"enable_xattr": true,
"fs_prefetch": {
"enable": true,
"threads_count": 10,
"merging_size": 131072,
"bandwidth_rate": 10485760
}
}"#;
let mut rafs_config = ConfigV2::from_str(&config).unwrap();
let test_auth = "test_auth".to_string();
rafs_config.update_registry_auth_info(&Some(test_auth.clone()));
let backend = rafs_config.backend.unwrap();
let registry = backend.registry.unwrap();
let auth = registry.auth.unwrap();
assert_eq!(auth, test_auth);
}
}

33
builder/Cargo.toml Normal file
View File

@ -0,0 +1,33 @@
[package]
name = "nydus-builder"
version = "0.1.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
indexmap = "1"
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
vmm-sys-util = "0.10.0"
xattr = "1.0.1"
nydus-api = { version = "0.3", path = "../api" }
nydus-rafs = { version = "0.3", path = "../rafs" }
nydus-storage = { version = "0.6", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.4", path = "../utils" }
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "aarch64-unknown-linux-gnu", "aarch64-apple-darwin"]

948
builder/src/stargz.rs Normal file
View File

@ -0,0 +1,948 @@
// Copyright 2020 Alibaba cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate a RAFS filesystem bootstrap from a stargz layer, reusing the stargz layer as data blob.
use std::collections::HashMap;
use std::ffi::{OsStr, OsString};
use std::fs::File;
use std::io::{Seek, SeekFrom};
use std::ops::Deref;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use anyhow::{anyhow, bail, Context, Error, Result};
use base64::Engine;
use nix::NixPath;
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::{InodeWrapper, RafsInodeFlags, RafsV6Inode};
use nydus_rafs::metadata::layout::v5::RafsV5ChunkInfo;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobChunkFlags;
use nydus_storage::{RAFS_MAX_CHUNKS_PER_BLOB, RAFS_MAX_CHUNK_SIZE};
use nydus_utils::compact::makedev;
use nydus_utils::compress::{self, compute_compressed_gzip_size};
use nydus_utils::digest::{self, DigestData, RafsDigest};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer, try_round_up_4k, ByteSize};
use serde::{Deserialize, Serialize};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::{ChunkSource, Node, NodeChunk, NodeInfo};
use super::{
build_bootstrap, dump_bootstrap, finalize_blob, Bootstrap, Builder, TarBuilder, Tree, TreeNode,
};
#[derive(Deserialize, Serialize, Debug, Clone, Default)]
struct TocEntry {
/// This REQUIRED property contains the name of the tar entry.
///
/// This MUST be the complete path stored in the tar file.
pub name: PathBuf,
/// This REQUIRED property contains the type of tar entry.
///
/// This MUST be either of the following.
/// - dir: directory
/// - reg: regular file
/// - symlink: symbolic link
/// - hardlink: hard link
/// - char: character device
/// - block: block device
/// - fifo: fifo
/// - chunk: a chunk of regular file data As described in the above section,
/// a regular file can be divided into several chunks. TOCEntry MUST be created for each chunk.
/// TOCEntry of the first chunk of that file MUST be typed as reg. TOCEntry of each chunk after
/// 2nd MUST be typed as chunk. chunk TOCEntry MUST set offset, chunkOffset and chunkSize
/// properties.
#[serde(rename = "type")]
pub toc_type: String,
/// This OPTIONAL property contains the uncompressed size of the regular file.
///
/// Non-empty reg file MUST set this property.
#[serde(default)]
pub size: u64,
// This OPTIONAL property contains the modification time of the tar entry.
//
// Empty means zero or unknown. Otherwise, the value is in UTC RFC3339 format.
// // ModTime3339 is the modification time of the tar entry. Empty
// // means zero or unknown. Otherwise it's in UTC RFC3339
// // format. Use the ModTime method to access the time.Time value.
// #[serde(default, alias = "modtime")]
// mod_time_3339: String,
// #[serde(skip)]
// mod_time: Time,
/// This OPTIONAL property contains the link target.
///
/// Symlink and hardlink MUST set this property.
#[serde(default, rename = "linkName")]
pub link_name: PathBuf,
/// This REQUIRED property contains the permission and mode bits.
#[serde(default)]
pub mode: u32,
/// This REQUIRED property contains the user ID of the owner of this file.
#[serde(default)]
pub uid: u32,
/// This REQUIRED property contains the group ID of the owner of this file.
#[serde(default)]
pub gid: u32,
/// This OPTIONAL property contains the username of the owner.
///
/// In the serialized JSON, this field may only be present for
/// the first entry with the same Uid.
#[serde(default, rename = "userName")]
pub uname: String,
/// This OPTIONAL property contains the groupname of the owner.
///
/// In the serialized JSON, this field may only be present for
/// the first entry with the same Gid.
#[serde(default, rename = "groupName")]
pub gname: String,
/// This OPTIONAL property contains the major device number of device files.
///
/// char and block files MUST set this property.
#[serde(default, rename = "devMajor")]
pub dev_major: u64,
/// This OPTIONAL property contains the minor device number of device files.
///
/// char and block files MUST set this property.
#[serde(default, rename = "devMinor")]
pub dev_minor: u64,
/// This OPTIONAL property contains the extended attribute for the tar entry.
#[serde(default)]
pub xattrs: HashMap<String, String>,
/// This OPTIONAL property contains the digest of the regular file contents.
///
/// It has the form "sha256:abcdef01234....".
#[serde(default)]
pub digest: String,
/// This OPTIONAL property contains the offset of the gzip header of the regular file or chunk
/// in the blob.
///
/// TOCEntries of non-empty reg and chunk MUST set this property.
#[serde(default)]
pub offset: u64,
/// This OPTIONAL property contains the offset of this chunk in the decompressed regular file
/// payload. TOCEntries of chunk type MUST set this property.
///
/// ChunkOffset is non-zero if this is a chunk of a large, regular file.
/// If so, the Offset is where the gzip header of ChunkSize bytes at ChunkOffset in Name begin.
///
/// In serialized form, a "chunkSize" JSON field of zero means that the chunk goes to the end
/// of the file. After reading from the stargz TOC, though, the ChunkSize is initialized to
/// a non-zero file for when Type is either "reg" or "chunk".
#[serde(default, rename = "chunkOffset")]
pub chunk_offset: u64,
/// This OPTIONAL property contains the decompressed size of this chunk.
///
/// The last chunk in a reg file or reg file that isn't chunked MUST set this property to zero.
/// Other reg and chunk MUST set this property.
#[serde(default, rename = "chunkSize")]
pub chunk_size: u64,
/// This OPTIONAL property contains a digest of this chunk.
///
/// TOCEntries of non-empty reg and chunk MUST set this property. This MAY be used for verifying
/// the data of the chunk.
#[serde(default, rename = "chunkDigest")]
pub chunk_digest: String,
/// This OPTIONAL property indicates the uncompressed offset of the "reg" or "chunk" entry
/// payload in a stream starts from offset field.
///
/// `innerOffset` enables to put multiple "reg" or "chunk" payloads in one gzip stream starts
/// from offset.
#[serde(default, rename = "innerOffset")]
pub inner_offset: u64,
}
impl TocEntry {
/// Check whether the `TocEntry` is a directory.
pub fn is_dir(&self) -> bool {
self.toc_type.as_str() == "dir"
}
/// Check whether the `TocEntry` is a regular file.
pub fn is_reg(&self) -> bool {
self.toc_type.as_str() == "reg"
}
/// Check whether the `TocEntry` is a symlink.
pub fn is_symlink(&self) -> bool {
self.toc_type.as_str() == "symlink"
}
/// Check whether the `TocEntry` is a hardlink.
pub fn is_hardlink(&self) -> bool {
self.toc_type.as_str() == "hardlink"
}
/// Check whether the `TocEntry` is a file data chunk.
pub fn is_chunk(&self) -> bool {
self.toc_type.as_str() == "chunk"
}
/// Check whether the `TocEntry` is a block device.
pub fn is_blockdev(&self) -> bool {
self.toc_type.as_str() == "block"
}
/// Check whether the `TocEntry` is a char device.
pub fn is_chardev(&self) -> bool {
self.toc_type.as_str() == "char"
}
/// Check whether the `TocEntry` is a FIFO.
pub fn is_fifo(&self) -> bool {
self.toc_type.as_str() == "fifo"
}
/// Check whether the `TocEntry` is a special entry.
pub fn is_special(&self) -> bool {
self.is_blockdev() || self.is_chardev() || self.is_fifo()
}
pub fn is_supported(&self) -> bool {
self.is_dir() || self.is_reg() || self.is_symlink() || self.is_hardlink() || self.is_chunk()
}
/// Check whether the `TocEntry` has associated extended attributes.
pub fn has_xattr(&self) -> bool {
!self.xattrs.is_empty()
}
/// Get access permission and file mode of the `TocEntry`.
pub fn mode(&self) -> u32 {
let mut mode = 0;
if self.is_dir() {
mode |= libc::S_IFDIR;
} else if self.is_reg() || self.is_hardlink() {
mode |= libc::S_IFREG;
} else if self.is_symlink() {
mode |= libc::S_IFLNK;
} else if self.is_blockdev() {
mode |= libc::S_IFBLK;
} else if self.is_chardev() {
mode |= libc::S_IFCHR;
} else if self.is_fifo() {
mode |= libc::S_IFIFO;
}
self.mode & !libc::S_IFMT as u32 | mode as u32
}
/// Get real device id associated with the `TocEntry`.
pub fn rdev(&self) -> u32 {
if self.is_special() {
makedev(self.dev_major, self.dev_minor) as u32
} else {
u32::MAX
}
}
/// Get content size of the entry.
pub fn size(&self) -> u64 {
if self.is_reg() {
self.size
} else {
0
}
}
/// Get file name of the `TocEntry` from the associated path.
///
/// For example: `` to `/`, `/` to `/`, `a/b` to `b`, `a/b/` to `b`
pub fn name(&self) -> Result<&OsStr> {
let name = if self.name == Path::new("/") {
OsStr::new("/")
} else {
self.name
.file_name()
.ok_or_else(|| anyhow!("stargz: invalid entry name {}", self.name.display()))?
};
Ok(name)
}
/// Get absolute path for the `TocEntry`.
///
/// For example: `` to `/`, `a/b` to `/a/b`, `a/b/` to `/a/b`
pub fn path(&self) -> &Path {
&self.name
}
/// Convert link path of hardlink entry to rootfs absolute path
///
/// For example: `a/b` to `/a/b`
pub fn hardlink_link_path(&self) -> &Path {
assert!(self.is_hardlink());
&self.link_name
}
/// Get target of symlink.
pub fn symlink_link_path(&self) -> &Path {
assert!(self.is_symlink());
&self.link_name
}
pub fn block_id(&self) -> Result<RafsDigest> {
if self.chunk_digest.len() != 71 || !self.chunk_digest.starts_with("sha256:") {
bail!("stargz: invalid chunk digest {}", self.chunk_digest);
}
match hex::decode(&self.chunk_digest[7..]) {
Err(_e) => bail!("stargz: invalid chunk digest {}", self.chunk_digest),
Ok(v) => {
let mut data = DigestData::default();
data.copy_from_slice(&v[..32]);
Ok(RafsDigest { data })
}
}
}
fn normalize(&mut self) -> Result<()> {
if self.name.is_empty() {
bail!("stargz: invalid TocEntry with empty name");
}
self.name = PathBuf::from("/").join(&self.name);
if !self.is_supported() && !self.is_special() {
bail!("stargz: invalid type {} for TocEntry", self.toc_type);
}
if (self.is_symlink() || self.is_hardlink()) && self.link_name.is_empty() {
bail!("stargz: empty link target");
}
if self.is_hardlink() {
self.link_name = PathBuf::from("/").join(&self.link_name);
}
if (self.is_reg() || self.is_chunk())
&& (self.digest.is_empty() || self.chunk_digest.is_empty())
{
bail!("stargz: missing digest or chunk digest");
}
if self.is_chunk() && self.chunk_offset == 0 {
bail!("stargz: chunk offset is zero");
}
Ok(())
}
}
#[derive(Deserialize, Debug, Clone, Default)]
struct TocIndex {
pub version: u32,
pub entries: Vec<TocEntry>,
}
impl TocIndex {
fn load(path: &Path, offset: u64) -> Result<TocIndex> {
let mut index_file = File::open(path)
.with_context(|| format!("stargz: failed to open index file {:?}", path))?;
let pos = index_file
.seek(SeekFrom::Start(offset))
.context("stargz: failed to seek to start of TOC")?;
if pos != offset {
bail!("stargz: failed to seek file position to start of TOC");
}
let mut toc_index: TocIndex = serde_json::from_reader(index_file).with_context(|| {
format!(
"stargz: failed to deserialize stargz TOC index file {:?}",
path
)
})?;
if toc_index.version != 1 {
return Err(Error::msg(format!(
"stargz: unsupported index version {}",
toc_index.version
)));
}
for entry in toc_index.entries.iter_mut() {
entry.normalize()?;
}
Ok(toc_index)
}
}
/// Build RAFS filesystems from eStargz images.
pub struct StargzBuilder {
blob_size: u64,
builder: TarBuilder,
file_chunk_map: HashMap<PathBuf, (u64, Vec<NodeChunk>)>,
hardlink_map: HashMap<PathBuf, TreeNode>,
uncompressed_offset: u64,
}
impl StargzBuilder {
/// Create a new instance of [StargzBuilder].
pub fn new(blob_size: u64, ctx: &BuildContext) -> Self {
Self {
blob_size,
builder: TarBuilder::new(ctx.explicit_uidgid, 0, ctx.fs_version),
file_chunk_map: HashMap::new(),
hardlink_map: HashMap::new(),
uncompressed_offset: 0,
}
}
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<Tree> {
let toc_index = TocIndex::load(&ctx.source_path, 0)?;
if toc_index.version != 1 {
bail!("stargz: TOC version {} is unsupported", toc_index.version);
} else if toc_index.entries.is_empty() {
bail!("stargz: TOC array is empty");
}
self.builder.layer_idx = layer_idx;
let root = self.builder.create_directory(&[OsString::from("/")])?;
let mut tree = Tree::new(root);
// Map regular file path to chunks: HashMap<<file_path>, <(file_size, chunks)>>
let mut last_reg_entry: Option<&TocEntry> = None;
for entry in toc_index.entries.iter() {
let path = entry.path();
// TODO: support chardev/blockdev/fifo
if !entry.is_supported() {
warn!(
"stargz: unsupported {} with type {}",
path.display(),
entry.toc_type
);
continue;
} else if self.builder.is_stargz_special_files(path) {
// skip estargz special files.
continue;
}
// Build RAFS chunk info from eStargz regular file or chunk data record.
let uncompress_size = Self::get_content_size(ctx, entry, &mut last_reg_entry)?;
if (entry.is_reg() || entry.is_chunk()) && uncompress_size != 0 {
let block_id = entry
.block_id()
.context("stargz: failed to get chunk digest")?;
// blob_index, index and compressed_size will be fixed later
let chunk_info = ChunkWrapper::V6(RafsV5ChunkInfo {
block_id,
blob_index: 0,
flags: BlobChunkFlags::COMPRESSED,
compressed_size: 0,
uncompressed_size: uncompress_size as u32,
compressed_offset: entry.offset as u64,
uncompressed_offset: self.uncompressed_offset,
file_offset: entry.chunk_offset as u64,
index: 0,
reserved: 0,
});
let chunk = NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk_info),
};
if let Some((size, chunks)) = self.file_chunk_map.get_mut(path) {
chunks.push(chunk);
if entry.is_reg() {
*size = entry.size;
}
} else if entry.is_reg() {
self.file_chunk_map
.insert(path.to_path_buf(), (entry.size, vec![chunk]));
} else {
bail!("stargz: file chunk lacks of corresponding head regular file entry");
}
let aligned_chunk_size = if ctx.aligned_chunk {
// Safe to unwrap because `chunk_size` is much less than u32::MAX.
try_round_up_4k(uncompress_size).unwrap()
} else {
uncompress_size
};
self.uncompressed_offset += aligned_chunk_size;
}
if !entry.is_chunk() && !self.builder.is_stargz_special_files(path) {
self.parse_entry(&mut tree, entry, path)?;
}
}
for (size, ref mut chunks) in self.file_chunk_map.values_mut() {
Self::sort_and_validate_chunks(chunks, *size)?;
}
Ok(tree)
}
/// Get content size of a regular file or file chunk entry.
fn get_content_size<'a>(
ctx: &mut BuildContext,
entry: &'a TocEntry,
last_reg_entry: &mut Option<&'a TocEntry>,
) -> Result<u64> {
if entry.is_reg() {
// Regular file without chunk
if entry.chunk_offset == 0 && entry.chunk_size == 0 {
Ok(entry.size)
} else if entry.chunk_offset % ctx.chunk_size as u64 != 0 {
bail!(
"stargz: chunk offset (0x{:x}) is not aligned to 0x{:x}",
entry.chunk_offset,
ctx.chunk_size
);
} else if entry.chunk_size != ctx.chunk_size as u64 {
bail!("stargz: first chunk size is not 0x{:x}", ctx.chunk_size);
} else {
*last_reg_entry = Some(entry);
Ok(entry.chunk_size)
}
} else if entry.is_chunk() {
if entry.chunk_offset % ctx.chunk_size as u64 != 0 {
bail!(
"stargz: chunk offset (0x{:x}) is not aligned to 0x{:x}",
entry.chunk_offset,
ctx.chunk_size
);
} else if entry.chunk_size == 0 {
// Figure out content size for the last chunk entry of regular file
if let Some(reg_entry) = last_reg_entry {
let size = reg_entry.size - entry.chunk_offset;
if size > ctx.chunk_size as u64 {
bail!(
"stargz: size of last chunk 0x{:x} is bigger than chunk size 0x {:x}",
size,
ctx.chunk_size
);
}
*last_reg_entry = None;
Ok(size)
} else {
bail!("stargz: tailer chunk lacks of corresponding head chunk");
}
} else if entry.chunk_size != ctx.chunk_size as u64 {
bail!(
"stargz: chunk size 0x{:x} is not 0x{:x}",
entry.chunk_size,
ctx.chunk_size
);
} else {
Ok(entry.chunk_size)
}
} else {
Ok(0)
}
}
fn parse_entry(&mut self, tree: &mut Tree, entry: &TocEntry, path: &Path) -> Result<()> {
let name_size = entry.name()?.byte_size() as u16;
let uid = if self.builder.explicit_uidgid {
entry.uid
} else {
0
};
let gid = if self.builder.explicit_uidgid {
entry.gid
} else {
0
};
let mut file_size = entry.size();
let mut flags = RafsInodeFlags::default();
// Parse symlink
let (symlink, symlink_size) = if entry.is_symlink() {
let symlink_link_path = entry.symlink_link_path();
let symlink_size = symlink_link_path.as_os_str().byte_size() as u16;
file_size = symlink_size.into();
flags |= RafsInodeFlags::SYMLINK;
(Some(symlink_link_path.as_os_str().to_owned()), symlink_size)
} else {
(None, 0)
};
// Handle hardlink ino
let ino = if entry.is_hardlink() {
let link_path = entry.hardlink_link_path();
let link_path = link_path.components().as_path();
let targets = Node::generate_target_vec(link_path);
assert!(!targets.is_empty());
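// Walk the in-memory tree along the link target path to locate the node
// that the hardlink points to.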
let mut tmp_tree: &Tree = tree;
for name in &targets[1..] {
match tmp_tree.get_child_idx(name.as_bytes()) {
Some(idx) => tmp_tree = &tmp_tree.children[idx],
None => {
bail!(
"stargz: unknown target {} for hardlink {}",
link_path.display(),
path.display(),
);
}
}
}
let mut tmp_node = tmp_tree.lock_node();
if !tmp_node.is_reg() {
bail!(
"stargz: target {} for hardlink {} is not a regular file",
link_path.display(),
path.display()
);
}
self.hardlink_map
.insert(path.to_path_buf(), tmp_tree.node.clone());
flags |= RafsInodeFlags::HARDLINK;
tmp_node.inode.set_has_hardlink(true);
tmp_node.inode.ino()
} else {
self.builder.next_ino()
};
// Parse xattrs
let mut xattrs = RafsXAttrs::new();
if entry.has_xattr() {
for (name, value) in entry.xattrs.iter() {
flags |= RafsInodeFlags::XATTR;
let value = base64::engine::general_purpose::STANDARD
.decode(value)
.with_context(|| {
format!(
"stargz: failed to parse xattr {:?} for entry {:?}",
path, name
)
})?;
xattrs.add(OsString::from(name), value)?;
}
}
let mut inode = InodeWrapper::V6(RafsV6Inode {
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: entry.mode(),
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_count: 0,
i_name_size: name_size,
i_symlink_size: symlink_size,
i_rdev: entry.rdev(),
// TODO: add mtime from entry.ModTime()
i_mtime: 0,
i_mtime_nsec: 0,
});
inode.set_has_xattr(!xattrs.is_empty());
let source = PathBuf::from("/");
let target = Node::generate_target(&path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: self.builder.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: entry.rdev() as u64,
source,
target,
path: path.to_path_buf(),
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
let node = Node::new(inode, info, self.builder.layer_idx);
self.builder.insert_into_tree(tree, node)
}
fn sort_and_validate_chunks(chunks: &mut [NodeChunk], size: u64) -> Result<()> {
if chunks.len() > RAFS_MAX_CHUNKS_PER_BLOB as usize {
bail!("stargz: file has two many chunks");
}
if chunks.len() > 1 {
chunks.sort_unstable_by_key(|v| v.inner.file_offset());
for idx in 0..chunks.len() - 1 {
let curr = &chunks[idx].inner;
let pos = curr
.file_offset()
.checked_add(curr.uncompressed_size() as u64);
match pos {
Some(pos) => {
if pos != chunks[idx + 1].inner.file_offset() {
bail!("stargz: unexpected holes between data chunks");
}
}
None => {
bail!(
"stargz: invalid chunk offset 0x{:x} or size 0x{:x}",
curr.file_offset(),
curr.uncompressed_size()
)
}
}
}
}
if !chunks.is_empty() {
let last = &chunks[chunks.len() - 1];
if last.inner.file_offset() + last.inner.uncompressed_size() as u64 != size {
bail!("stargz: file size and sum of chunk size doesn't match");
}
} else if size != 0 {
bail!("stargz: file size and sum of chunk size doesn't match");
}
Ok(())
}
fn fix_chunk_info(&mut self, ctx: &mut BuildContext, blob_mgr: &mut BlobManager) -> Result<()> {
/*
let mut header = BlobMetaHeaderOndisk::default();
header.set_4k_aligned(true);
header.set_ci_separate(ctx.blob_meta_features & BLOB_META_FEATURE_SEPARATE != 0);
header.set_chunk_info_v2(ctx.blob_meta_features & BLOB_META_FEATURE_CHUNK_INFO_V2 != 0);
header.set_ci_zran(ctx.blob_meta_features & BLOB_META_FEATURE_ZRAN != 0);
blob_ctx.blob_meta_header = header;
*/
// Ensure that the chunks in the blob meta are sorted by uncompressed_offset and ordered
// by chunk index so that they can be found quickly at runtime with a binary search.
let mut blob_chunks: Vec<&mut NodeChunk> = Vec::with_capacity(10240);
for (_, chunks) in self.file_chunk_map.values_mut() {
for chunk in chunks.iter_mut() {
blob_chunks.push(chunk);
}
}
blob_chunks.sort_unstable_by(|a, b| {
a.inner
.uncompressed_offset()
.cmp(&b.inner.uncompressed_offset())
});
if blob_chunks.is_empty() {
return Ok(());
}
// Compute compressed_size for chunks.
let (blob_index, blob_ctx) = blob_mgr.get_or_create_current_blob(ctx)?;
let chunk_count = blob_chunks.len();
let mut compressed_blob_size = 0u64;
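// The eStargz TOC does not record per-chunk compressed size, so estimate an
// upper bound from the gap to the next chunk's compressed offset and the
// gzip worst-case size of the uncompressed data.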
for idx in 0..chunk_count {
let curr = blob_chunks[idx].inner.compressed_offset();
let next = if idx == chunk_count - 1 {
self.blob_size
} else {
blob_chunks[idx + 1].inner.compressed_offset()
};
if curr >= next {
bail!("stargz: compressed offset is out of order");
} else if next - curr > RAFS_MAX_CHUNK_SIZE {
bail!("stargz: compressed size is too big");
}
let mut chunk = blob_chunks[idx].inner.deref().clone();
let uncomp_size = chunk.uncompressed_size() as usize;
let max_size = (next - curr) as usize;
let max_gzip_size = compute_compressed_gzip_size(uncomp_size, max_size);
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_index(chunk_index);
chunk.set_blob_index(blob_index);
chunk.set_compressed_size(max_gzip_size as u32);
blob_ctx.add_chunk_meta_info(&chunk, None)?;
compressed_blob_size = std::cmp::max(
compressed_blob_size,
chunk.compressed_offset() + chunk.compressed_size() as u64,
);
assert_eq!(Arc::strong_count(&blob_chunks[idx].inner), 1);
blob_chunks[idx].inner = Arc::new(chunk);
}
blob_ctx.uncompressed_blob_size = self.uncompressed_offset;
blob_ctx.compressed_blob_size = compressed_blob_size;
Ok(())
}
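/// Update inode size, child count and chunk list for regular files and
/// hardlinks once the chunk information has been fixed up.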
fn fix_nodes(&mut self, bootstrap: &mut Bootstrap) -> Result<()> {
bootstrap
.tree
.walk_bfs(true, &mut |n| {
let mut node = n.lock_node();
let node_path = node.path();
if let Some((size, ref mut chunks)) = self.file_chunk_map.get_mut(node_path) {
node.inode.set_size(*size);
node.inode.set_child_count(chunks.len() as u32);
node.chunks = chunks.to_vec();
}
Ok(())
})
.context("stargz: failed to update chunk info array for nodes")?;
for (k, v) in self.hardlink_map.iter() {
match bootstrap.tree.get_node(k) {
Some(n) => {
let mut node = n.lock_node();
let target = v.lock().unwrap();
node.inode.set_size(target.inode.size());
node.inode.set_child_count(target.inode.child_count());
node.chunks = target.chunks.clone();
node.set_xattr(target.info.xattrs.clone());
}
None => bail!(
"stargz: failed to get target node for hardlink {}",
k.display()
),
}
}
Ok(())
}
}
impl Builder for StargzBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
if ctx.fs_version != RafsVersion::V6 {
bail!(
"stargz: unsupported filesystem version {:?}",
ctx.fs_version
);
} else if ctx.compressor != compress::Algorithm::GZip {
bail!("stargz: invalid compression algorithm {:?}", ctx.compressor);
} else if ctx.digester != digest::Algorithm::Sha256 {
bail!("stargz: invalid digest algorithm {:?}", ctx.digester);
}
let mut blob_writer = if let Some(blob_stor) = ctx.blob_storage.clone() {
ArtifactWriter::new(blob_stor)?
} else {
return Err(anyhow!("missing configuration for target path"));
};
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
// Build filesystem tree from the stargz TOC.
let tree = timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build bootstrap
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
self.fix_chunk_info(ctx, blob_mgr)?;
self.fix_nodes(&mut bootstrap)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, &bootstrap.tree, blob_mgr, &mut blob_writer) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, &mut blob_writer)?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
} else {
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec};
#[ignore]
#[test]
fn test_build_stargz_toc() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path =
PathBuf::from(root_dir).join("../tests/texture/stargz/estargz_sample.json");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"".to_string(),
true,
0,
compress::Algorithm::GZip,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::EStargzIndexToRef,
source_path,
prefetch,
Some(ArtifactStorage::FileDir(tmp_dir.clone())),
false,
Features::new(),
);
ctx.fs_version = RafsVersion::V6;
let mut bootstrap_mgr =
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
let mut builder = StargzBuilder::new(0x1000000, &ctx);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
}

View File

@ -1,7 +1,7 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io
ifdef GOPROXY

View File

@ -3,23 +3,23 @@ module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.18
require (
github.com/containerd/containerd v1.6.18
github.com/containerd/nydus-snapshotter v0.5.1
github.com/opencontainers/image-spec v1.1.0-rc2
github.com/containerd/containerd v1.6.20
github.com/containerd/nydus-snapshotter v0.6.1
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b
github.com/urfave/cli v1.22.12
)
require (
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.7 // indirect
github.com/cilium/ebpf v0.10.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/containerd/fifo v1.0.0 // indirect
github.com/containerd/go-cni v1.1.8 // indirect
github.com/containerd/fifo v1.1.0 // indirect
github.com/containerd/go-cni v1.1.9 // indirect
github.com/containerd/go-runc v1.0.0 // indirect
github.com/containerd/ttrpc v1.1.0 // indirect
github.com/containerd/ttrpc v1.2.1 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/containernetworking/cni v1.1.2 // indirect
github.com/containernetworking/plugins v1.2.0 // indirect
@ -27,33 +27,36 @@ require (
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gogo/googleapis v1.4.1 // indirect
github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/klauspost/compress v1.15.15 // indirect
github.com/klauspost/compress v1.16.3 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/onsi/ginkgo/v2 v2.4.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/runc v1.1.4 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sirupsen/logrus v1.9.0 // indirect
github.com/stretchr/testify v1.8.2 // indirect
go.opencensus.io v0.24.0 // indirect
golang.org/x/mod v0.8.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.8.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.5.0 // indirect
golang.org/x/text v0.7.0 // indirect
golang.org/x/tools v0.6.0 // indirect
google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc // indirect
google.golang.org/grpc v1.53.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
golang.org/x/sys v0.6.0 // indirect
golang.org/x/text v0.8.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/genproto v0.0.0-20230330200707-38013875ee22 // indirect
google.golang.org/grpc v1.54.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
)

File diff suppressed because it is too large

View File

@ -278,6 +278,27 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "errno"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f639046355ee4f37944e44f60642c6f3a7efa3cf6b78c78a0d989a8ce6c396a1"
dependencies = [
"errno-dragonfly",
"libc",
"winapi",
]
[[package]]
name = "errno-dragonfly"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa68f1b12764fab894d2755d2518754e71b4fd80ecfb822714a1206c2aab39bf"
dependencies = [
"cc",
"libc",
]
[[package]]
name = "fastrand"
version = "1.7.0"
@ -564,6 +585,16 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "io-lifetimes"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1abeb7a0dd0f8181267ff8adc397075586500b81b28a73e8a0208b00fc170fb3"
dependencies = [
"libc",
"windows-sys 0.45.0",
]
[[package]]
name = "itoa"
version = "1.0.2"
@ -578,9 +609,15 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.126"
version = "0.2.139"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "349d5a591cd28b49e1d1037471617a32ddcda5731b99419008085f72d5a53836"
checksum = "201de327520df007757c1f0adce6e827fe8562fbc28bfd9c15571c66ca1f5f79"
[[package]]
name = "linux-raw-sys"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f051f77a7c8e6957c0696eac88f26b0117e54f52d3fc682ab19397a8812846a4"
[[package]]
name = "lock_api"
@ -655,7 +692,7 @@ dependencies = [
"libc",
"log",
"wasi",
"windows-sys",
"windows-sys 0.42.0",
]
[[package]]
@ -753,7 +790,7 @@ dependencies = [
"libc",
"redox_syscall",
"smallvec",
"windows-sys",
"windows-sys 0.42.0",
]
[[package]]
@ -929,15 +966,6 @@ version = "0.6.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49b3de9ec5dc0a3417da371aab17d729997c15010e7fd24ff707773a33bddb64"
[[package]]
name = "remove_dir_all"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7"
dependencies = [
"winapi",
]
[[package]]
name = "rocket"
version = "0.5.0-rc.2"
@ -1019,6 +1047,20 @@ dependencies = [
"uncased",
]
[[package]]
name = "rustix"
version = "0.36.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f43abb88211988493c1abb44a70efa56ff0ce98f233b7b276146f1f3f7ba9644"
dependencies = [
"bitflags",
"errno",
"io-lifetimes",
"libc",
"linux-raw-sys",
"windows-sys 0.45.0",
]
[[package]]
name = "rustversion"
version = "1.0.7"
@ -1174,16 +1216,15 @@ dependencies = [
[[package]]
name = "tempfile"
version = "3.3.0"
version = "3.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5cdb1ef4eaeeaddc8fbd371e5017057064af0911902ef36b39801f67cc6d79e4"
checksum = "af18f7ae1acd354b992402e9ec5864359d693cd8a79dcbef59f76891701c1e95"
dependencies = [
"cfg-if",
"fastrand",
"libc",
"redox_syscall",
"remove_dir_all",
"winapi",
"rustix",
"windows-sys 0.42.0",
]
[[package]]
@ -1238,7 +1279,7 @@ dependencies = [
"signal-hook-registry",
"socket2",
"tokio-macros",
"windows-sys",
"windows-sys 0.42.0",
]
[[package]]
@ -1498,6 +1539,30 @@ dependencies = [
"windows_x86_64_msvc",
]
[[package]]
name = "windows-sys"
version = "0.45.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75283be5efb2831d37ea142365f009c02ec203cd29a3ebecbc093d52315b66d0"
dependencies = [
"windows-targets",
]
[[package]]
name = "windows-targets"
version = "0.42.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e2522491fbfcd58cc84d47aeb2958948c4b8982e9a2d8a2a35bbaed431390e7"
dependencies = [
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.42.1"

View File

@ -1,7 +1,7 @@
GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io
ifdef GOPROXY

View File

@ -9,6 +9,7 @@ package main
import (
"context"
"encoding/json"
"fmt"
"io"
"os"
@ -16,12 +17,15 @@ import (
"strings"
"github.com/containerd/containerd/reference/docker"
"github.com/docker/distribution/reference"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/rule"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/converter"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/copier"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/packer"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
@ -64,27 +68,27 @@ func parseBackendConfig(backendConfigJSON, backendConfigFile string) (string, er
return backendConfigJSON, nil
}
func getBackendConfig(c *cli.Context, required bool) (string, string, error) {
backendType := c.String("backend-type")
func getBackendConfig(c *cli.Context, suffix string, required bool) (string, string, error) {
backendType := c.String(suffix + "backend-type")
if backendType == "" {
if required {
return "", "", errors.Errorf("backend type is empty, please specify option '--backend-type'")
return "", "", errors.Errorf("backend type is empty, please specify option '--%sbackend-type'", suffix)
}
return "", "", nil
}
possibleBackendTypes := []string{"oss", "s3"}
if !isPossibleValue(possibleBackendTypes, backendType) {
return "", "", fmt.Errorf("--backend-type should be one of %v", possibleBackendTypes)
return "", "", fmt.Errorf("--%sbackend-type should be one of %v", suffix, possibleBackendTypes)
}
backendConfig, err := parseBackendConfig(
c.String("backend-config"), c.String("backend-config-file"),
c.String(suffix+"backend-config"), c.String(suffix+"backend-config-file"),
)
if err != nil {
return "", "", err
} else if (backendType == "oss" || backendType == "s3") && strings.TrimSpace(backendConfig) == "" {
return "", "", errors.Errorf("backend configuration is empty, please specify option '--backend-config'")
return "", "", errors.Errorf("backend configuration is empty, please specify option '--%sbackend-config'", suffix)
}
return backendType, backendConfig, nil
@ -335,12 +339,22 @@ func main() {
Usage: "Convert to OCI-referenced nydus zran image",
EnvVars: []string{"OCI_REF"},
},
&cli.BoolFlag{
Name: "with-referrer",
Value: false,
Usage: "Associate a reference to the source image, see https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers",
EnvVars: []string{"WITH_REFERRER"},
},
&cli.BoolFlag{
Name: "oci",
Value: false,
Usage: "Convert Docker media types to OCI media types",
EnvVars: []string{"OCI"},
Aliases: []string{"docker-v2-format"},
},
&cli.BoolFlag{
Name: "docker-v2-format",
Value: false,
Hidden: true,
},
&cli.StringFlag{
Name: "fs-version",
@ -408,7 +422,7 @@ func main() {
return err
}
backendType, backendConfig, err := getBackendConfig(c, false)
backendType, backendConfig, err := getBackendConfig(c, "", false)
if err != nil {
return err
}
@ -446,6 +460,20 @@ func main() {
}
}
docker2OCI := false
if c.Bool("docker-v2-format") {
logrus.Warn("the option `--docker-v2-format` has been deprecated, use `--oci` instead")
docker2OCI = false
} else if c.Bool("oci") {
docker2OCI = true
}
// Forcibly enable the `--oci` option when `--oci-ref` is enabled.
if c.Bool("oci-ref") {
logrus.Warn("forcibly enabling the `--oci` option because `--oci-ref` is enabled")
docker2OCI = true
}
opt := converter.Opt{
WorkDir: c.String("work-dir"),
NydusImagePath: c.String("nydus-image"),
@ -469,13 +497,14 @@ func main() {
PrefetchPatterns: prefetchPatterns,
MergePlatform: c.Bool("merge-platform"),
Docker2OCI: c.Bool("oci"),
Docker2OCI: docker2OCI,
FsVersion: fsVersion,
FsAlignChunk: c.Bool("backend-aligned-chunk") || c.Bool("fs-align-chunk"),
Compressor: c.String("compressor"),
ChunkSize: c.String("chunk-size"),
OCIRef: c.Bool("oci-ref"),
WithReferrer: c.Bool("with-referrer"),
AllPlatforms: c.Bool("all-platforms"),
Platforms: c.String("platform"),
}
@ -566,7 +595,7 @@ func main() {
Action: func(c *cli.Context) error {
setupLogLevel(c)
backendType, backendConfig, err := getBackendConfig(c, false)
backendType, backendConfig, err := getBackendConfig(c, "", false)
if err != nil {
return err
}
@ -617,7 +646,7 @@ func main() {
&cli.StringFlag{
Name: "backend-type",
Value: "",
Required: true,
Required: false,
Usage: "Type of storage backend, possible values: 'oss', 's3'",
EnvVars: []string{"BACKEND_TYPE"},
},
@ -663,12 +692,30 @@ func main() {
Action: func(c *cli.Context) error {
setupLogLevel(c)
backendType, backendConfig, err := getBackendConfig(c, false)
backendType, backendConfig, err := getBackendConfig(c, "", false)
if err != nil {
return err
} else if backendConfig == "" {
// TODO get auth from docker configuration file
return errors.Errorf("backend configuration is empty, please specify option '--backend-config'")
backendType = "registry"
parsed, err := reference.ParseNormalizedNamed(c.String("target"))
if err != nil {
return err
}
backendConfigStruct, err := rule.NewRegistryBackendConfig(parsed)
if err != nil {
return errors.Wrap(err, "parse registry backend configuration")
}
backendConfigStruct.SkipVerify = c.Bool("target-insecure")
bytes, err := json.Marshal(backendConfigStruct)
if err != nil {
return errors.Wrap(err, "marshal registry backend configuration")
}
backendConfig = string(bytes)
}
_, arch, err := provider.ExtractOsArch(c.String("platform"))
@ -821,7 +868,7 @@ func main() {
// if backend-push is specified, we should make sure backend-config-file exists
if c.Bool("backend-push") || c.Bool("compact") {
_backendType, _backendConfig, err := getBackendConfig(c, true)
_backendType, _backendConfig, err := getBackendConfig(c, "", true)
if err != nil {
return err
}
@ -861,6 +908,106 @@ func main() {
return nil
},
},
{
Name: "copy",
Usage: "Copy an image from source to target",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "source",
Required: true,
Usage: "Source image reference",
EnvVars: []string{"SOURCE"},
},
&cli.StringFlag{
Name: "target",
Required: false,
Usage: "Target image reference",
EnvVars: []string{"TARGET"},
},
&cli.BoolFlag{
Name: "source-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS source registry",
EnvVars: []string{"SOURCE_INSECURE"},
},
&cli.BoolFlag{
Name: "target-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS target registry",
EnvVars: []string{"TARGET_INSECURE"},
},
&cli.StringFlag{
Name: "source-backend-type",
Value: "",
Usage: "Type of storage backend, possible values: 'oss', 's3'",
EnvVars: []string{"BACKEND_TYPE"},
},
&cli.StringFlag{
Name: "source-backend-config",
Value: "",
Usage: "Json configuration string for storage backend",
EnvVars: []string{"BACKEND_CONFIG"},
},
&cli.PathFlag{
Name: "source-backend-config-file",
Value: "",
TakesFile: true,
Usage: "Json configuration file for storage backend",
EnvVars: []string{"BACKEND_CONFIG_FILE"},
},
&cli.BoolFlag{
Name: "all-platforms",
Value: false,
Usage: "Copy images for all platforms, conflicts with --platform",
},
&cli.StringFlag{
Name: "platform",
Value: "linux/" + runtime.GOARCH,
Usage: "Copy images for specific platforms, for example: 'linux/amd64,linux/arm64'",
},
&cli.StringFlag{
Name: "work-dir",
Value: "./tmp",
Usage: "Working directory for image copy",
EnvVars: []string{"WORK_DIR"},
},
&cli.StringFlag{
Name: "nydus-image",
Value: "nydus-image",
Usage: "Path to the nydus-image binary, default to search in PATH",
EnvVars: []string{"NYDUS_IMAGE"},
},
},
Action: func(c *cli.Context) error {
setupLogLevel(c)
sourceBackendType, sourceBackendConfig, err := getBackendConfig(c, "source-", false)
if err != nil {
return err
}
opt := copier.Opt{
WorkDir: c.String("work-dir"),
NydusImagePath: c.String("nydus-image"),
Source: c.String("source"),
Target: c.String("target"),
SourceInsecure: c.Bool("source-insecure"),
TargetInsecure: c.Bool("target-insecure"),
SourceBackendType: sourceBackendType,
SourceBackendConfig: sourceBackendConfig,
AllPlatforms: c.Bool("all-platforms"),
Platforms: c.String("platform"),
}
return copier.Copy(context.Background(), opt)
},
},
}
if !utils.IsSupportedArch(runtime.GOARCH) {

View File

@ -3,127 +3,118 @@ module github.com/dragonflyoss/image-service/contrib/nydusify
go 1.18
require (
github.com/aliyun/aliyun-oss-go-sdk v2.1.5+incompatible
github.com/aws/aws-sdk-go-v2 v1.17.2
github.com/aws/aws-sdk-go-v2/config v1.18.4
github.com/aws/aws-sdk-go-v2/credentials v1.13.4
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.43
github.com/aws/aws-sdk-go-v2/service/s3 v1.29.5
github.com/containerd/containerd v1.6.18
github.com/docker/cli v20.10.23+incompatible
github.com/docker/distribution v2.8.1+incompatible
github.com/goharbor/acceleration-service v0.2.0
github.com/aliyun/aliyun-oss-go-sdk v2.2.6+incompatible
github.com/aws/aws-sdk-go-v2 v1.17.6
github.com/aws/aws-sdk-go-v2/config v1.18.16
github.com/aws/aws-sdk-go-v2/credentials v1.13.16
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.56
github.com/aws/aws-sdk-go-v2/service/s3 v1.30.6
github.com/containerd/containerd v1.7.2
github.com/containerd/nydus-snapshotter v0.10.0
github.com/docker/cli v23.0.3+incompatible
github.com/docker/distribution v2.8.2+incompatible
github.com/dustin/go-humanize v1.0.1
github.com/goharbor/acceleration-service v0.2.6
github.com/google/uuid v1.3.0
github.com/hashicorp/go-hclog v1.3.1
github.com/hashicorp/go-plugin v1.4.5
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799
github.com/opencontainers/image-spec v1.1.0-rc3
github.com/pkg/errors v0.9.1
github.com/pkg/xattr v0.4.3
github.com/prometheus/client_golang v1.14.0
github.com/sirupsen/logrus v1.9.0
github.com/stretchr/testify v1.8.1
github.com/urfave/cli/v2 v2.24.2
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4
golang.org/x/sys v0.5.0
github.com/stretchr/testify v1.8.2
github.com/urfave/cli/v2 v2.25.0
golang.org/x/sync v0.1.0
golang.org/x/sys v0.7.0
lukechampine.com/blake3 v1.1.5
)
require (
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d // indirect
github.com/astaxie/beego v1.12.2 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 // indirect
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20221215162035-5330a85ea652 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.8 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.20 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.26 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.20 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.27 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.17 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.30 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.31 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.22 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.11 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.21 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.20 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.20 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.11.26 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.13.9 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.17.6 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.25 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.24 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.24 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.12.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.18.6 // indirect
github.com/aws/smithy-go v1.13.5 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.1.2 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/containerd/cgroups v1.0.4 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/containerd/fifo v1.0.0 // indirect
github.com/containerd/nydus-snapshotter v0.6.1 // indirect
github.com/containerd/stargz-snapshotter v0.13.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.1 // indirect
github.com/containerd/ttrpc v1.1.0 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/continuity v0.4.1 // indirect
github.com/containerd/fifo v1.1.0 // indirect
github.com/containerd/stargz-snapshotter v0.14.3 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/containerd/ttrpc v1.2.2 // indirect
github.com/containerd/typeurl/v2 v2.1.1 // indirect
github.com/containers/ocicrypt v1.1.7 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/docker v20.10.17+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/docker/docker v23.0.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.2 // indirect
github.com/fatih/color v1.14.1 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/gocraft/work v0.5.1 // indirect
github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/goharbor/harbor/src v0.0.0-20211021012518-bc6a7f65a6fa // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/gomodule/redigo v2.0.0+incompatible // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/klauspost/compress v1.15.15 // indirect
github.com/klauspost/compress v1.16.0 // indirect
github.com/klauspost/cpuid v1.3.1 // indirect
github.com/lib/pq v1.10.0 // indirect
github.com/kr/pretty v0.3.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.16 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mitchellh/go-testing-interface v1.14.1 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
github.com/moby/sys/signal v0.6.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/oklog/run v1.1.0 // indirect
github.com/opencontainers/runc v1.1.2 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/opencontainers/selinux v1.10.1 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/robfig/cron v1.0.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.9.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/vbatts/tar-split v0.11.2 // indirect
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
go.opencensus.io v0.23.0 // indirect
go.opentelemetry.io/contrib v0.22.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.22.0 // indirect
go.opentelemetry.io/otel v1.3.0 // indirect
go.opentelemetry.io/otel/exporters/jaeger v1.0.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.3.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.3.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.3.0 // indirect
go.opentelemetry.io/otel/internal/metric v0.22.0 // indirect
go.opentelemetry.io/otel/metric v0.22.0 // indirect
go.opentelemetry.io/otel/sdk v1.3.0 // indirect
go.opentelemetry.io/otel/trace v1.3.0 // indirect
go.opentelemetry.io/proto/otlp v0.11.0 // indirect
golang.org/x/mod v0.8.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/text v0.7.0 // indirect
golang.org/x/time v0.0.0-20220722155302-e5dcc9cfc0b9 // indirect
google.golang.org/genproto v0.0.0-20220927151529-dcaddaf36704 // indirect
google.golang.org/grpc v1.50.1 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
go.etcd.io/bbolt v1.3.7 // indirect
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.14.0 // indirect
go.opentelemetry.io/otel/trace v1.14.0 // indirect
golang.org/x/crypto v0.6.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.8.0 // indirect
golang.org/x/term v0.6.0 // indirect
golang.org/x/text v0.8.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4 // indirect
google.golang.org/grpc v1.53.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/square/go-jose.v2 v2.5.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
replace github.com/opencontainers/runc => github.com/opencontainers/runc v1.1.2

File diff suppressed because it is too large

View File

@ -7,6 +7,7 @@ package backend
import (
"context"
"fmt"
"io"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
@ -25,6 +26,8 @@ type Backend interface {
Finalize(cancel bool) error
Check(blobID string) (bool, error)
Type() Type
Reader(blobID string) (io.ReadCloser, error)
Size(blobID string) (int64, error)
}
// TODO: Directly forward blob data to storage backend

View File

@ -259,6 +259,26 @@ func (b *OSSBackend) Type() Type {
return OssBackend
}
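// Reader returns a stream for reading the blob object from the OSS bucket.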
func (b *OSSBackend) Reader(blobID string) (io.ReadCloser, error) {
blobID = b.objectPrefix + blobID
rc, err := b.bucket.GetObject(blobID)
return rc, err
}
func (b *OSSBackend) Size(blobID string) (int64, error) {
blobID = b.objectPrefix + blobID
headers, err := b.bucket.GetObjectMeta(blobID)
if err != nil {
return 0, errors.Wrap(err, "get object size")
}
sizeStr := headers.Get("Content-Length")
size, err := strconv.ParseInt(sizeStr, 10, 0)
if err != nil {
return 0, errors.Wrap(err, "parse content-length header")
}
return size, nil
}
func (b *OSSBackend) remoteID(blobID string) string {
return fmt.Sprintf("oss://%s/%s%s", b.bucket.BucketName, b.objectPrefix, blobID)
}

View File

@ -2,6 +2,7 @@ package backend
import (
"context"
"io"
"os"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/remote"
@ -46,6 +47,14 @@ func (r *Registry) Type() Type {
return RegistryBackend
}
func (r *Registry) Reader(blobID string) (io.ReadCloser, error) {
panic("not implemented")
}
func (r *Registry) Size(blobID string) (int64, error) {
panic("not implemented")
}
func newRegistryBackend(rawConfig []byte, remote *remote.Remote) (Backend, error) {
return &Registry{remote: remote}, nil
}

View File

@ -8,6 +8,7 @@ import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"os"
@ -159,6 +160,27 @@ func (b *S3Backend) blobObjectKey(blobID string) string {
return b.objectPrefix + blobID
}
func (b *S3Backend) Reader(blobID string) (io.ReadCloser, error) {
objectKey := b.blobObjectKey(blobID)
output, err := b.client.GetObject(context.TODO(), &s3.GetObjectInput{
Bucket: &b.bucketName,
Key: &objectKey,
})
return output.Body, err
}
func (b *S3Backend) Size(blobID string) (int64, error) {
objectKey := b.blobObjectKey(blobID)
output, err := b.client.GetObjectAttributes(context.TODO(), &s3.GetObjectAttributesInput{
Bucket: &b.bucketName,
Key: &objectKey,
})
if err != nil {
return 0, errors.Wrap(err, "get object size")
}
return output.ObjectSize, nil
}
func (b *S3Backend) remoteID(blobObjectKey string) string {
remoteURL, _ := url.Parse(b.endpointWithScheme)
remoteURL.Path = path.Join(remoteURL.Path, b.bucketName, blobObjectKey)

View File

@ -221,6 +221,28 @@ func (rule *FilesystemRule) mountSourceImage() (*tool.Image, error) {
return image, nil
}
func NewRegistryBackendConfig(parsed reference.Named) (RegistryBackendConfig, error) {
backendConfig := RegistryBackendConfig{
Scheme: "https",
Host: reference.Domain(parsed),
Repo: reference.Path(parsed),
}
config := dockerconfig.LoadDefaultConfigFile(os.Stderr)
authConfig, err := config.GetAuthConfig(backendConfig.Host)
if err != nil {
return backendConfig, errors.Wrap(err, "get docker registry auth config")
}
var auth string
if authConfig.Username != "" && authConfig.Password != "" {
auth = base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", authConfig.Username, authConfig.Password)))
}
backendConfig.Auth = auth
return backendConfig, nil
}
func (rule *FilesystemRule) mountNydusImage() (*tool.Nydusd, error) {
logrus.Infof("Mounting Nydus image to %s", rule.NydusdConfig.MountPath)
@ -237,32 +259,23 @@ func (rule *FilesystemRule) mountNydusImage() (*tool.Nydusd, error) {
return nil, err
}
host := reference.Domain(parsed)
repo := reference.Path(parsed)
if rule.NydusdConfig.BackendType == "" {
rule.NydusdConfig.BackendType = "registry"
if rule.NydusdConfig.BackendConfig == "" {
config := dockerconfig.LoadDefaultConfigFile(os.Stderr)
authConfig, err := config.GetAuthConfig(host)
backendConfig, err := NewRegistryBackendConfig(parsed)
if err != nil {
return nil, errors.Wrap(err, "get docker registry auth config")
return nil, errors.Wrap(err, "failed to parse backend configuration")
}
var auth string
if authConfig.Username != "" && authConfig.Password != "" {
auth = base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", authConfig.Username, authConfig.Password)))
}
skipVerify := false
if rule.TargetInsecure {
skipVerify = true
}
scheme := "https"
if rule.PlainHTTP {
scheme = "http"
backendConfig.SkipVerify = true
}
if rule.PlainHTTP {
backendConfig.Scheme = "http"
}
backendConfig := RegistryBackendConfig{scheme, host, repo, auth, skipVerify}
bytes, err := json.Marshal(backendConfig)
if err != nil {
return nil, errors.Wrap(err, "parse registry backend config")

View File

@ -80,6 +80,20 @@ func (rule *ManifestRule) Validate() error {
// Check Nydus image config with OCI image
if rule.SourceParsed.OCIImage != nil {
//nolint:staticcheck
// Ignore static check SA1019 here: we have to assign the deprecated field.
//
// Skip the ArgsEscaped check.
//
// This field is present only for legacy compatibility with Docker and
// should not be used by new image builders. Nydusify (1.6 and above)
// ignores it, which is expected behavior, so also ignore it in this check.
//
// Addition: [ArgsEscaped in spec](https://github.com/opencontainers/image-spec/pull/892)
rule.TargetParsed.NydusImage.Config.Config.ArgsEscaped = rule.SourceParsed.OCIImage.Config.Config.ArgsEscaped
ociConfig, err := json.Marshal(rule.SourceParsed.OCIImage.Config.Config)
if err != nil {
return errors.New("marshal oci image config")

View File

@ -0,0 +1,38 @@
package rule
import (
"testing"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
"github.com/stretchr/testify/assert"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
func TestManifestRuleValidate_IgnoreDeprecatedField(t *testing.T) {
source := &parser.Parsed{
NydusImage: &parser.Image{
Config: v1.Image{
Config: v1.ImageConfig{
ArgsEscaped: true, // deprecated field
},
},
},
}
target := &parser.Parsed{
NydusImage: &parser.Image{
Config: v1.Image{
Config: v1.ImageConfig{
ArgsEscaped: false,
},
},
},
}
rule := ManifestRule{
SourceParsed: source,
TargetParsed: target,
}
assert.Nil(t, rule.Validate())
}

View File

@ -4,7 +4,9 @@
package converter
import "strconv"
import (
"strconv"
)
func getConfig(opt Opt) map[string]string {
cfg := map[string]string{}
@ -17,9 +19,10 @@ func getConfig(opt Opt) map[string]string {
cfg["backend_force_push"] = strconv.FormatBool(opt.BackendForcePush)
cfg["chunk_dict_ref"] = opt.ChunkDictRef
cfg["docker2oci"] = strconv.FormatBool(!opt.Docker2OCI)
cfg["docker2oci"] = strconv.FormatBool(opt.Docker2OCI)
cfg["merge_manifest"] = strconv.FormatBool(opt.MergePlatform)
cfg["oci_ref"] = strconv.FormatBool(opt.OCIRef)
cfg["with_referrer"] = strconv.FormatBool(opt.WithReferrer)
cfg["prefetch_patterns"] = opt.PrefetchPatterns
cfg["compressor"] = opt.Compressor

View File

@ -6,10 +6,12 @@ package converter
import (
"context"
"os"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/converter/provider"
"github.com/goharbor/acceleration-service/pkg/converter"
"github.com/goharbor/acceleration-service/pkg/platformutil"
"github.com/pkg/errors"
)
type Opt struct {
@ -42,22 +44,40 @@ type Opt struct {
ChunkSize string
PrefetchPatterns string
OCIRef bool
WithReferrer bool
AllPlatforms bool
Platforms string
}
func Convert(ctx context.Context, opt Opt) error {
pvd, err := provider.New(opt.WorkDir, hosts(opt))
if err != nil {
return err
}
platformMC, err := platformutil.ParsePlatforms(opt.AllPlatforms, opt.Platforms)
if err != nil {
return err
}
if _, err := os.Stat(opt.WorkDir); err != nil {
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return errors.Wrap(err, "prepare work directory")
}
// Only clean up when the work directory did not exist before,
// otherwise we may delete user data by mistake.
defer os.RemoveAll(opt.WorkDir)
} else {
return errors.Wrap(err, "stat work directory")
}
}
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
pvd, err := provider.New(tmpDir, hosts(opt), platformMC)
if err != nil {
return err
}
defer os.RemoveAll(tmpDir)
cvt, err := converter.New(
converter.WithProvider(pvd),
converter.WithDriver("nydus", getConfig(opt)),
@ -67,5 +87,6 @@ func Convert(ctx context.Context, opt Opt) error {
return err
}
return cvt.Convert(ctx, opt.Source, opt.Target)
_, err = cvt.Convert(ctx, opt.Source, opt.Target)
return err
}

View File

@ -17,6 +17,8 @@ import (
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/remotes"
"github.com/containerd/containerd/remotes/docker"
// nolint:staticcheck
"github.com/containerd/containerd/remotes/docker/schema1"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
@ -44,6 +46,7 @@ func fetch(ctx context.Context, store content.Store, rCtx *containerd.RemoteCont
limiter *semaphore.Weighted
)
// nolint:staticcheck
if desc.MediaType == images.MediaTypeDockerSchema1Manifest && rCtx.ConvertSchema1 {
schema1Converter := schema1.NewConverter(store, fetcher)

View File

@ -12,29 +12,34 @@ import (
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/remotes"
"github.com/goharbor/acceleration-service/pkg/remote"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
var LayerConcurrentLimit = 5
type Provider struct {
mutex sync.Mutex
usePlainHTTP bool
images map[string]*ocispec.Descriptor
store content.Store
hosts remote.HostFunc
platformMC platforms.MatchComparer
}
func New(root string, hosts remote.HostFunc) (*Provider, error) {
func New(root string, hosts remote.HostFunc, platformMC platforms.MatchComparer) (*Provider, error) {
store, err := local.NewLabeledStore(root, newMemoryLabelStore())
if err != nil {
return nil, err
}
return &Provider{
images: make(map[string]*ocispec.Descriptor),
store: store,
hosts: hosts,
images: make(map[string]*ocispec.Descriptor),
store: store,
hosts: hosts,
platformMC: platformMC,
}, nil
}
@ -56,7 +61,9 @@ func (pvd *Provider) Pull(ctx context.Context, ref string) error {
return err
}
rc := &containerd.RemoteContext{
Resolver: resolver,
Resolver: resolver,
PlatformMatcher: pvd.platformMC,
MaxConcurrentDownloads: LayerConcurrentLimit,
}
img, err := fetch(ctx, pvd.store, rc, ref, 0)
@ -77,7 +84,9 @@ func (pvd *Provider) Push(ctx context.Context, desc ocispec.Descriptor, ref stri
return err
}
rc := &containerd.RemoteContext{
Resolver: resolver,
Resolver: resolver,
PlatformMatcher: pvd.platformMC,
MaxConcurrentUploadedLayers: LayerConcurrentLimit,
}
return push(ctx, pvd.store, rc, desc, ref)
@ -95,3 +104,7 @@ func (pvd *Provider) Image(ctx context.Context, ref string) (*ocispec.Descriptor
func (pvd *Provider) ContentStore() content.Store {
return pvd.store
}
func (pvd *Provider) SetContentStore(store content.Store) {
pvd.store = store
}

View File

@ -0,0 +1,389 @@
// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package copier
import (
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/containerd/containerd/content"
containerdErrdefs "github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/reference/docker"
"github.com/containerd/nydus-snapshotter/pkg/converter"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/backend"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/tool"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/converter/provider"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
nydusifyUtils "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
"github.com/dustin/go-humanize"
"github.com/goharbor/acceleration-service/pkg/errdefs"
"github.com/goharbor/acceleration-service/pkg/platformutil"
"github.com/goharbor/acceleration-service/pkg/remote"
"github.com/goharbor/acceleration-service/pkg/utils"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
type Opt struct {
WorkDir string
NydusImagePath string
Source string
Target string
SourceInsecure bool
TargetInsecure bool
SourceBackendType string
SourceBackendConfig string
TargetBackendType string
TargetBackendConfig string
AllPlatforms bool
Platforms string
}
type output struct {
Blobs []string
}
func hosts(opt Opt) remote.HostFunc {
maps := map[string]bool{
opt.Source: opt.SourceInsecure,
opt.Target: opt.TargetInsecure,
}
return func(ref string) (remote.CredentialFunc, bool, error) {
return remote.NewDockerConfigCredFunc(), maps[ref], nil
}
}
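// getPushWriter opens a content writer that pushes the descriptor to the
// target registry; a nil writer (without error) means the blob already
// exists on the target.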
func getPushWriter(ctx context.Context, pvd *provider.Provider, desc ocispec.Descriptor, opt Opt) (content.Writer, error) {
resolver, err := pvd.Resolver(opt.Target)
if err != nil {
return nil, errors.Wrap(err, "get resolver")
}
ref := opt.Target
if !strings.Contains(ref, "@") {
ref = ref + "@" + desc.Digest.String()
}
pusher, err := resolver.Pusher(ctx, ref)
if err != nil {
return nil, errors.Wrap(err, "create pusher")
}
writer, err := pusher.Push(ctx, desc)
if err != nil {
if containerdErrdefs.IsAlreadyExists(err) {
return nil, nil
}
return nil, err
}
return writer, nil
}
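// pushBlobFromBackend reads the nydus blobs referenced by the source manifest
// from the storage backend and pushes them to the target registry, then
// rewrites the manifest and image config to reference the pushed blobs.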
func pushBlobFromBackend(
ctx context.Context, pvd *provider.Provider, backend backend.Backend, src ocispec.Descriptor, opt Opt,
) ([]ocispec.Descriptor, *ocispec.Descriptor, error) {
if src.MediaType != ocispec.MediaTypeImageManifest && src.MediaType != images.MediaTypeDockerSchema2Manifest {
return nil, nil, fmt.Errorf("unsupported media type %s", src.MediaType)
}
manifest := ocispec.Manifest{}
if _, err := utils.ReadJSON(ctx, pvd.ContentStore(), &manifest, src); err != nil {
return nil, nil, errors.Wrap(err, "read manifest from store")
}
bootstrapDesc := parser.FindNydusBootstrapDesc(&manifest)
if bootstrapDesc == nil {
return nil, nil, nil
}
ra, err := pvd.ContentStore().ReaderAt(ctx, *bootstrapDesc)
if err != nil {
return nil, nil, errors.Wrap(err, "prepare reading bootstrap")
}
bootstrapPath := filepath.Join(opt.WorkDir, "bootstrap.tgz")
if err := nydusifyUtils.UnpackFile(io.NewSectionReader(ra, 0, ra.Size()), nydusifyUtils.BootstrapFileNameInLayer, bootstrapPath); err != nil {
return nil, nil, errors.Wrap(err, "unpack bootstrap layer")
}
outputPath := filepath.Join(opt.WorkDir, "output.json")
builder := tool.NewBuilder(opt.NydusImagePath)
if err := builder.Check(tool.BuilderOption{
BootstrapPath: bootstrapPath,
DebugOutputPath: outputPath,
}); err != nil {
return nil, nil, errors.Wrap(err, "check bootstrap")
}
var out output
bytes, err := os.ReadFile(outputPath)
if err != nil {
return nil, nil, errors.Wrap(err, "read output file")
}
if err := json.Unmarshal(bytes, &out); err != nil {
return nil, nil, errors.Wrap(err, "unmarshal output json")
}
// Deduplicate the blobs to avoid uploading them repeatedly.
blobIDs := []string{}
blobIDMap := map[string]bool{}
for _, blobID := range out.Blobs {
if blobIDMap[blobID] {
continue
}
blobIDs = append(blobIDs, blobID)
blobIDMap[blobID] = true
}
sem := semaphore.NewWeighted(int64(provider.LayerConcurrentLimit))
eg, ctx := errgroup.WithContext(ctx)
blobDescs := make([]ocispec.Descriptor, len(blobIDs))
for idx := range blobIDs {
func(idx int) {
eg.Go(func() error {
sem.Acquire(ctx, 1)
defer sem.Release(1)
blobID := blobIDs[idx]
blobDigest := digest.Digest("sha256:" + blobID)
blobSize, err := backend.Size(blobID)
if err != nil {
return errors.Wrap(err, "get blob size")
}
blobSizeStr := humanize.Bytes(uint64(blobSize))
logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushing blob from backend")
rc, err := backend.Reader(blobID)
if err != nil {
return errors.Wrap(err, "get blob reader")
}
defer rc.Close()
blobDescs[idx] = ocispec.Descriptor{
Digest: blobDigest,
Size: blobSize,
MediaType: converter.MediaTypeNydusBlob,
Annotations: map[string]string{
converter.LayerAnnotationNydusBlob: "true",
},
}
writer, err := getPushWriter(ctx, pvd, blobDescs[idx], opt)
if err != nil {
if errdefs.NeedsRetryWithHTTP(err) {
pvd.UsePlainHTTP()
writer, err = getPushWriter(ctx, pvd, blobDescs[idx], opt)
}
if err != nil {
return errors.Wrap(err, "get push writer")
}
}
if writer != nil {
defer writer.Close()
return content.Copy(ctx, writer, rc, blobSize, blobDigest)
}
logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushed blob from backend")
return nil
})
}(idx)
}
if err := eg.Wait(); err != nil {
return nil, nil, errors.Wrap(err, "push blobs")
}
// Update manifest layers
for idx := range manifest.Layers {
if manifest.Layers[idx].Annotations != nil {
// The annotation key is deprecated, but it still exists in some
// old nydus images, let's clean it up.
delete(manifest.Layers[idx].Annotations, "containerd.io/snapshot/nydus-blob-ids")
}
}
manifest.Layers = append(blobDescs, manifest.Layers...)
// Update image config
blobDigests := []digest.Digest{}
for idx := range blobDescs {
blobDigests = append(blobDigests, blobDescs[idx].Digest)
}
config := ocispec.Image{}
if _, err := utils.ReadJSON(ctx, pvd.ContentStore(), &config, manifest.Config); err != nil {
return nil, nil, errors.Wrap(err, "read config json")
}
config.RootFS.DiffIDs = append(blobDigests, config.RootFS.DiffIDs...)
configDesc, err := utils.WriteJSON(ctx, pvd.ContentStore(), config, manifest.Config, opt.Target, nil)
if err != nil {
return nil, nil, errors.Wrap(err, "write config json")
}
manifest.Config = *configDesc
target, err := utils.WriteJSON(ctx, pvd.ContentStore(), &manifest, src, opt.Target, nil)
if err != nil {
return nil, nil, errors.Wrap(err, "write manifest json")
}
return blobDescs, target, nil
}
func getPlatform(platform *ocispec.Platform) string {
if platform == nil {
return platforms.DefaultString()
}
return platforms.Format(*platform)
}
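// Copy copies an image from the source registry to the target registry. When
// a source backend is configured, nydus blobs stored in the backend are also
// pushed to the target registry.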
func Copy(ctx context.Context, opt Opt) error {
platformMC, err := platformutil.ParsePlatforms(opt.AllPlatforms, opt.Platforms)
if err != nil {
return err
}
var bkd backend.Backend
if opt.SourceBackendType != "" {
bkd, err = backend.NewBackend(opt.SourceBackendType, []byte(opt.SourceBackendConfig), nil)
if err != nil {
return errors.Wrapf(err, "new backend")
}
}
if _, err := os.Stat(opt.WorkDir); err != nil {
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return errors.Wrap(err, "prepare work directory")
}
// Only clean up when the work directory did not exist before,
// otherwise we may delete user data by mistake.
defer os.RemoveAll(opt.WorkDir)
} else {
return errors.Wrap(err, "stat work directory")
}
}
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
pvd, err := provider.New(tmpDir, hosts(opt), platformMC)
if err != nil {
return err
}
defer os.RemoveAll(tmpDir)
sourceNamed, err := docker.ParseDockerRef(opt.Source)
if err != nil {
return errors.Wrap(err, "parse source reference")
}
targetNamed, err := docker.ParseDockerRef(opt.Target)
if err != nil {
return errors.Wrap(err, "parse target reference")
}
source := sourceNamed.String()
target := targetNamed.String()
logrus.Infof("pulling source image %s", source)
if err := pvd.Pull(ctx, source); err != nil {
if errdefs.NeedsRetryWithHTTP(err) {
pvd.UsePlainHTTP()
if err := pvd.Pull(ctx, source); err != nil {
return errors.Wrap(err, "try to pull image")
}
} else {
return errors.Wrap(err, "pull source image")
}
}
logrus.Infof("pulled source image %s", source)
sourceImage, err := pvd.Image(ctx, source)
if err != nil {
return errors.Wrap(err, "find image from store")
}
sourceDescs, err := utils.GetManifests(ctx, pvd.ContentStore(), *sourceImage, platformMC)
if err != nil {
return errors.Wrap(err, "get image manifests")
}
targetDescs := make([]ocispec.Descriptor, len(sourceDescs))
sem := semaphore.NewWeighted(1)
eg := errgroup.Group{}
for idx := range sourceDescs {
func(idx int) {
eg.Go(func() error {
sem.Acquire(ctx, 1)
defer sem.Release(1)
sourceDesc := sourceDescs[idx]
targetDesc := &sourceDesc
if bkd != nil {
descs, _targetDesc, err := pushBlobFromBackend(ctx, pvd, bkd, sourceDesc, opt)
if err != nil {
return errors.Wrap(err, "get resolver")
}
if _targetDesc == nil {
logrus.WithField("platform", getPlatform(sourceDesc.Platform)).Warnf("%s is not a nydus image", source)
} else {
targetDesc = _targetDesc
store := newStore(pvd.ContentStore(), descs)
pvd.SetContentStore(store)
}
}
targetDescs[idx] = *targetDesc
logrus.WithField("platform", getPlatform(sourceDesc.Platform)).Infof("pushing target manifest %s", targetDesc.Digest)
if err := pvd.Push(ctx, *targetDesc, target); err != nil {
if errdefs.NeedsRetryWithHTTP(err) {
pvd.UsePlainHTTP()
if err := pvd.Push(ctx, *targetDesc, target); err != nil {
return errors.Wrap(err, "try to push image manifest")
}
} else {
return errors.Wrap(err, "push target image manifest")
}
}
logrus.WithField("platform", getPlatform(sourceDesc.Platform)).Infof("pushed target manifest %s", targetDesc.Digest)
return nil
})
}(idx)
}
if err := eg.Wait(); err != nil {
return errors.Wrap(err, "push image manifests")
}
if sourceImage.MediaType == ocispec.MediaTypeImageIndex ||
sourceImage.MediaType == images.MediaTypeDockerSchema2ManifestList {
targetIndex := ocispec.Index{}
if _, err := utils.ReadJSON(ctx, pvd.ContentStore(), &targetIndex, *sourceImage); err != nil {
return errors.Wrap(err, "read source manifest list")
}
targetIndex.Manifests = targetDescs
targetImage, err := utils.WriteJSON(ctx, pvd.ContentStore(), targetIndex, *sourceImage, target, nil)
if err != nil {
return errors.Wrap(err, "write target manifest list")
}
if err := pvd.Push(ctx, *targetImage, target); err != nil {
if errdefs.NeedsRetryWithHTTP(err) {
pvd.UsePlainHTTP()
if err := pvd.Push(ctx, *targetImage, target); err != nil {
return errors.Wrap(err, "try to push image")
}
} else {
return errors.Wrap(err, "push target image")
}
}
logrus.Infof("pushed image %s", target)
}
return nil
}

View File

@ -0,0 +1,45 @@
// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package copier
import (
"context"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/errdefs"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
type store struct {
content.Store
remotes []ocispec.Descriptor
}
func newStore(base content.Store, remotes []ocispec.Descriptor) *store {
return &store{
Store: base,
remotes: remotes,
}
}
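// Info queries the local content store first; if the blob is not found there,
// it falls back to the remote descriptors so that the digest and size can still
// be resolved without the blob data being present locally.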
func (s *store) Info(ctx context.Context, dgst digest.Digest) (content.Info, error) {
info, err := s.Store.Info(ctx, dgst)
if err != nil {
if !errdefs.IsNotFound(err) {
return content.Info{}, err
}
for _, desc := range s.remotes {
if desc.Digest == dgst {
return content.Info{
Digest: desc.Digest,
Size: desc.Size,
}, nil
}
}
return content.Info{}, err
}
return info, nil
}

View File

@ -2,6 +2,7 @@ package packer
import (
"context"
"io"
"os"
"path/filepath"
"testing"
@ -35,6 +36,14 @@ func (m *mockBackend) Type() backend.Type {
return backend.OssBackend
}
func (m *mockBackend) Reader(blobID string) (io.ReadCloser, error) {
panic("not implemented")
}
func (m *mockBackend) Size(blobID string) (int64, error) {
panic("not implemented")
}
func Test_parseBackendConfig(t *testing.T) {
cfg, err := ParseBackendConfig("oss", filepath.Join("testdata", "backend-config.json"))
assert.Nil(t, err)

View File

@ -52,10 +52,12 @@ type withCredentialFunc = func(string) (string, string, error)
func withRemote(ref string, insecure bool, credFunc withCredentialFunc) (*remote.Remote, error) {
resolverFunc := func(retryWithHTTP bool) remotes.Resolver {
registryHosts := docker.ConfigureDefaultRegistries(
docker.WithAuthorizer(docker.NewAuthorizer(
newDefaultClient(insecure),
credFunc,
)),
docker.WithAuthorizer(
docker.NewDockerAuthorizer(
docker.WithAuthClient(newDefaultClient(insecure)),
docker.WithAuthCreds(credFunc),
),
),
docker.WithClient(newDefaultClient(insecure)),
docker.WithPlainHTTP(func(host string) (bool, error) {
return retryWithHTTP, nil

View File

@ -61,23 +61,17 @@ func New(opt Opt) (*FsViewer, error) {
}
mode := "cached"
digestValidate := true
if opt.FsVersion == "6" {
mode = "direct"
digestValidate = false
}
nydusdConfig := tool.NydusdConfig{
NydusdPath: opt.NydusdPath,
BackendType: opt.BackendType,
BackendConfig: opt.BackendConfig,
BootstrapPath: filepath.Join(opt.WorkDir, "nydus_bootstrap"),
ConfigPath: filepath.Join(opt.WorkDir, "fs/nydusd_config.json"),
BlobCacheDir: filepath.Join(opt.WorkDir, "fs/nydus_blobs"),
MountPath: opt.MountPath,
APISockPath: filepath.Join(opt.WorkDir, "fs/nydus_api.sock"),
Mode: mode,
DigestValidate: digestValidate,
NydusdPath: opt.NydusdPath,
BackendType: opt.BackendType,
BackendConfig: opt.BackendConfig,
BootstrapPath: filepath.Join(opt.WorkDir, "nydus_bootstrap"),
ConfigPath: filepath.Join(opt.WorkDir, "fs/nydusd_config.json"),
BlobCacheDir: filepath.Join(opt.WorkDir, "fs/nydus_blobs"),
MountPath: opt.MountPath,
APISockPath: filepath.Join(opt.WorkDir, "fs/nydus_api.sock"),
Mode: mode,
}
fsViewer := &FsViewer{
@ -157,6 +151,18 @@ func (fsViewer *FsViewer) MountImage() error {
// It includes two steps: pull the bootstrap of the image, and mount the
// image under the specified path.
func (fsViewer *FsViewer) View(ctx context.Context) error {
if err := fsViewer.view(ctx); err != nil {
if utils.RetryWithHTTP(err) {
fsViewer.Parser.Remote.MaybeWithHTTP(err)
return fsViewer.view(ctx)
}
return err
}
return nil
}
func (fsViewer *FsViewer) view(ctx context.Context) error {
// Pull bootstrap
targetParsed, err := fsViewer.Parser.Parse(ctx)
if err != nil {
@ -167,6 +173,18 @@ func (fsViewer *FsViewer) View(ctx context.Context) error {
return errors.Wrap(err, "failed to pull Nydus image bootstrap")
}
// Adjust nydusd parameters (DigestValidate) according to the RAFS format
nydusManifest := parser.FindNydusBootstrapDesc(&targetParsed.NydusImage.Manifest)
if nydusManifest != nil {
v := utils.GetNydusFsVersionOrDefault(nydusManifest.Annotations, utils.V5)
if v == utils.V5 {
// Digest validation is not currently supported for v6,
// but v5 supports it. To make the check more thorough,
// digest validation needs to be turned on for v5.
fsViewer.NydusdConfig.DigestValidate = true
}
}
err = fsViewer.MountImage()
if err != nil {
return err

View File

@ -74,7 +74,6 @@ allow = [
"BSD-3-Clause",
"BSD-2-Clause",
"CC0-1.0",
"ISC",
"Unicode-DFS-2016",
]
# List of explictly disallowed licenses
@ -195,6 +194,4 @@ unknown-git = "warn"
# if not specified. If it is specified but empty, no registries are allowed.
allow-registry = ["https://github.com/rust-lang/crates.io-index"]
# List of URLs for allowed Git repositories
allow-git = [
"https://github.com/cloud-hypervisor/micro-http.git"
]
#allow-git = [ ]

View File

@ -34,6 +34,8 @@ sudo nydusd \
--log-level info
```
For the registry backend, we can set the authorization with the environment variable `IMAGE_PULL_AUTH` to avoid loading `auth` from the nydusd configuration file.
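For example, a minimal sketch (the paths and credentials below are placeholders, and the other flags follow the fuse example above):

```shell
# IMAGE_PULL_AUTH carries the same base64-encoded "username:password" string
# that would otherwise be configured as `auth` in the nydusd configuration file.
IMAGE_PULL_AUTH=$(echo -n 'username:password' | base64) sudo -E nydusd \
  --config /path/to/nydusd-config.json \
  --bootstrap /path/to/bootstrap \
  --mountpoint /path/to/mnt \
  --log-level info
```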
### Run With Virtio-FS
If no `/path/to/bootstrap` is available, please refer to [nydus-image.md](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) for more details.
@ -227,7 +229,8 @@ Document located at: https://github.com/adamqqqplay/nydus-localdisk/blob/master/
},
...
}
```
```
Note: The value of `device.backend.config.auth` will be overwritten if nydusd is run with the environment variable `IMAGE_PULL_AUTH`.
##### Enable P2P Proxy for Storage Backend
@ -283,9 +286,6 @@ Currently, the mirror mode is only tested in the registry backend, and in theory
{
// Mirror server URL (including scheme), e.g. Dragonfly dfdaemon server URL
"host": "http://dragonfly1.io:65001",
// true: Send the authorization request to the mirror e.g. another docker registry.
// false: Authorization request won't be relayed by the mirror e.g. Dragonfly.
"auth_through": false,
// Headers for mirror server
"headers": {
// For Dragonfly dfdaemon server URL, we need to specify "X-Dragonfly-Registry" (including scheme).

View File

@ -170,6 +170,16 @@ nydusify check \
--backend-config-file /path/to/backend-config.json
```
## Copy image between registry repositories
``` shell
nydusify copy \
--source myregistry/repo:tag-nydus \
--target myregistry/repo:tag-nydus-copy
```
It supports copying OCI v1 or Nydus images; use the `--all-platforms` / `--platform` options to copy the images for specific platforms, as shown below.
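For example, a hedged sketch (the registry and tag names are placeholders):

``` shell
# Copy only the manifests for the given platforms:
nydusify copy \
  --source myregistry/repo:tag-nydus \
  --target myregistry/repo:tag-nydus-copy \
  --platform linux/amd64,linux/arm64

# Or copy every platform available in the source image:
nydusify copy \
  --source myregistry/repo:tag-nydus \
  --target myregistry/repo:tag-nydus-copy \
  --all-platforms
```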
## More Nydusify Options
See `nydusify convert/check --help`

View File

@ -57,8 +57,6 @@ host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
# HTTP request headers to be passed to mirror server.
# headers =
# Whether the authorization process is through mirror, default to false.
auth_through = true
# Interval for mirror health checking, in seconds.
health_check_interval = 5
# Maximum number of failures before marking a mirror as unusable.
@ -108,8 +106,6 @@ host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
# HTTP request headers to be passed to mirror server.
# headers =
# Whether the authorization process is through mirror, default to false.
auth_through = true
# Interval for mirror health checking, in seconds.
health_check_interval = 5
# Maximum number of failures before marking a mirror as unusable.

View File

@ -55,8 +55,6 @@ host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
# HTTP request headers to be passed to mirror server.
# headers =
# Whether the authorization process is through mirror, default to false.
auth_through = true
# Interval for mirror health checking, in seconds.
health_check_interval = 5
# Maximum number of failures before marking a mirror as unusable.
@ -106,8 +104,6 @@ host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
# HTTP request headers to be passed to mirror server.
# headers =
# Whether the authorization process is through mirror, default to false.
auth_through = true
# Interval for mirror health checking, in seconds.
health_check_interval = 5
# Maximum number of failures before marking a mirror as unusable.

View File

@ -43,3 +43,4 @@ kong
solr
sentry
zookeeper
ghcr.io/dragonflyoss/image-service/pax-uid-test

View File

@ -11,7 +11,7 @@ edition = "2018"
[dependencies]
anyhow = "1.0.35"
arc-swap = "1.5"
base64 = { version = "0.13.0", optional = true }
base64 = { version = "0.21", optional = true }
bitflags = "1.2.1"
blake3 = "1.0"
futures = "0.3"
@ -23,8 +23,8 @@ nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
spmc = "0.3.0"
vm-memory = "0.9"
fuse-backend-rs = "0.10"
vm-memory = "0.10"
fuse-backend-rs = "0.10.5"
nydus-api = { version = "0.2", path = "../api" }
nydus-error = { version = "0.2", path = "../error" }
@ -32,7 +32,7 @@ nydus-storage = { version = "0.6", path = "../storage", features = ["backend-loc
nydus-utils = { version = "0.4", path = "../utils" }
[dev-dependencies]
vmm-sys-util = "0.10"
vmm-sys-util = "0.11"
assert_matches = "1.5.0"
[features]

View File

@ -102,7 +102,7 @@ impl Rafs {
initialized: false,
digest_validate: rafs_cfg.validate,
fs_prefetch: rafs_cfg.prefetch.enable,
amplify_io: rafs_cfg.prefetch.batch_size as u32,
amplify_io: rafs_cfg.batch_size as u32,
prefetch_all: rafs_cfg.prefetch.prefetch_all,
xattr_enabled: rafs_cfg.enable_xattr,
@ -617,29 +617,30 @@ impl FileSystem for Rafs {
let real_size = cmp::min(size as u64, inode_size - offset);
let mut result = 0;
let mut descs = inode.alloc_bio_vecs(&self.device, offset, real_size as usize, true)?;
assert!(!descs.is_empty() && !descs[0].is_empty());
let mut io_vecs = inode.alloc_bio_vecs(&self.device, offset, real_size as usize, true)?;
assert!(!io_vecs.is_empty() && !io_vecs[0].is_empty());
// Try to amplify user I/O for RAFS v5 to improve performance.
if self.sb.meta.is_v5() && size < self.amplify_io {
let all_chunks_ready = self.device.all_chunks_ready(&descs);
let amplify_io = cmp::min(self.amplify_io as usize, w.available_bytes()) as u32;
if self.sb.meta.is_v5() && size < amplify_io {
let all_chunks_ready = self.device.all_chunks_ready(&io_vecs);
if !all_chunks_ready {
let chunk_mask = self.metadata().chunk_size as u64 - 1;
let next_chunk_base = (offset + (size as u64) + chunk_mask) & !chunk_mask;
let window_base = cmp::min(next_chunk_base, inode_size);
let actual_size = window_base - (offset & !chunk_mask);
if actual_size < self.amplify_io as u64 {
let window_size = self.amplify_io as u64 - actual_size;
let orig_cnt = descs.iter().fold(0, |s, d| s + d.len());
if actual_size < amplify_io as u64 {
let window_size = amplify_io as u64 - actual_size;
let orig_cnt = io_vecs.iter().fold(0, |s, d| s + d.len());
self.sb.amplify_io(
&self.device,
self.amplify_io,
&mut descs,
amplify_io,
&mut io_vecs,
&inode,
window_base,
window_size,
)?;
let new_cnt = descs.iter().fold(0, |s, d| s + d.len());
let new_cnt = io_vecs.iter().fold(0, |s, d| s + d.len());
trace!(
"amplify RAFS v5 read from {} to {} chunks",
orig_cnt,
@ -650,15 +651,15 @@ impl FileSystem for Rafs {
}
let start = self.ios.latency_start();
for desc in descs.iter_mut() {
assert!(!desc.is_empty());
assert_ne!(desc.size(), 0);
for io_vec in io_vecs.iter_mut() {
assert!(!io_vec.is_empty());
assert_ne!(io_vec.size(), 0);
// Avoid copying `io_vec`
let r = self.device.read_to(w, desc)?;
let r = self.device.read_to(w, io_vec)?;
result += r;
recorder.mark_success(r);
if r as u32 != desc.size() {
if r as u64 != io_vec.size() {
break;
}
}

View File

@ -410,7 +410,7 @@ impl OndiskInodeWrapper {
Ordering::Greater => (EROFS_BLOCK_SIZE - base) as usize,
Ordering::Equal => {
if self.size() % EROFS_BLOCK_SIZE == 0 {
EROFS_BLOCK_SIZE as usize
(EROFS_BLOCK_SIZE - base) as usize
} else {
(self.size() % EROFS_BLOCK_SIZE - base) as usize
}

View File

@ -260,7 +260,7 @@ impl RafsXAttrs {
return Err(einval!("xattr key/value is too big"));
}
for p in RAFS_XATTR_PREFIXES {
if buf.len() > p.as_bytes().len() && &buf[..p.as_bytes().len()] == p.as_bytes() {
if buf.len() >= p.as_bytes().len() && &buf[..p.as_bytes().len()] == p.as_bytes() {
self.pairs.insert(name, value);
return Ok(());
}

View File

@ -1854,7 +1854,7 @@ impl RafsXAttrs {
for (key, value) in self.pairs.iter() {
let (index, prefix_len) = Self::match_prefix(key)
.map_err(|_| einval!(format!("invalid xattr key {:?}", key)))?;
if key.len() <= prefix_len {
if key.len() < prefix_len {
return Err(einval!(format!("invalid xattr key {:?}", key)));
}
if value.len() > u16::MAX as usize {
@ -2177,10 +2177,29 @@ mod tests {
let mut reader: Box<dyn RafsIoRead> = Box::new(r);
let mut xattrs = RafsXAttrs::new();
xattrs.add(OsString::from("user.nydus"), vec![1u8]).unwrap();
// These xattrs are in "e_name_index" order for easier reading:
xattrs
.add(OsString::from("security.rafs"), vec![2u8, 3u8])
.unwrap();
xattrs
.add(
OsString::from("system.posix_acl_access"),
vec![4u8, 5u8, 6u8],
)
.unwrap();
xattrs
.add(
OsString::from("system.posix_acl_default"),
vec![7u8, 8u8, 9u8, 10u8],
)
.unwrap();
xattrs
.add(
OsString::from("trusted.abc"),
vec![11u8, 12u8, 13u8, 14u8, 15u8],
)
.unwrap();
xattrs.add(OsString::from("user.nydus"), vec![1u8]).unwrap();
xattrs.store_v6(&mut writer).unwrap();
writer.flush().unwrap();
@ -2191,35 +2210,59 @@ mod tests {
assert_eq!(header.h_shared_count, 0u8);
let target1 = RafsV6XattrEntry {
e_name_len: 4u8,
e_name_index: 6u8,
e_value_size: u16::to_le(2u16),
};
let target2 = RafsV6XattrEntry {
e_name_len: 5u8,
e_name_index: 1u8,
e_name_len: 5u8, // "nydus"
e_name_index: 1u8, // EROFS_XATTR_INDEX_USER
e_value_size: u16::to_le(1u16),
};
let mut entry1 = RafsV6XattrEntry::new();
reader.read_exact(entry1.as_mut()).unwrap();
assert!((entry1 == target1 || entry1 == target2));
let target2 = RafsV6XattrEntry {
e_name_len: 0u8, // ""
e_name_index: 2u8, // EROFS_XATTR_INDEX_POSIX_ACL_ACCESS
e_value_size: u16::to_le(3u16),
};
size += size_of::<RafsV6XattrEntry>()
+ entry1.name_len() as usize
+ entry1.value_size() as usize;
let target3 = RafsV6XattrEntry {
e_name_len: 0u8, // ""
e_name_index: 3u8, // EROFS_XATTR_INDEX_POSIX_ACL_DEFAULT
e_value_size: u16::to_le(4u16),
};
reader
.seek_to_offset(round_up(size as u64, size_of::<RafsV6XattrEntry>() as u64))
.unwrap();
let target4 = RafsV6XattrEntry {
e_name_len: 3u8, // "abc"
e_name_index: 4u8, // EROFS_XATTR_INDEX_TRUSTED
e_value_size: u16::to_le(5u16),
};
let mut entry2 = RafsV6XattrEntry::new();
reader.read_exact(entry2.as_mut()).unwrap();
if entry1 == target1 {
assert!(entry2 == target2);
} else {
assert!(entry2 == target1);
let target5 = RafsV6XattrEntry {
e_name_len: 4u8, // "rafs"
e_name_index: 6u8, // EROFS_XATTR_INDEX_SECURITY
e_value_size: u16::to_le(2u16),
};
let targets = vec![target1, target2, target3, target4, target5];
let mut entries: Vec<RafsV6XattrEntry> = Vec::new();
entries.reserve(targets.len());
for _i in 0..targets.len() {
let mut entry = RafsV6XattrEntry::new();
reader.read_exact(entry.as_mut()).unwrap();
size += round_up(
(size_of::<RafsV6XattrEntry>()
+ entry.e_name_len as usize
+ entry.e_value_size as usize) as u64,
size_of::<RafsV6XattrEntry>() as u64,
) as usize;
reader.seek_to_offset(size as u64).unwrap();
entries.push(entry);
}
for (i, target) in targets.iter().enumerate() {
let j = entries
.iter()
.position(|entry| entry == target)
.unwrap_or_else(|| panic!("Test failed for: target{}", i + 1));
// Note: swap_remove() is faster than remove() when order doesn't matter:
entries.swap_remove(j);
}
}
}

View File

@ -172,12 +172,12 @@ impl RafsSuper {
if let Ok(ni) = self.get_inode(next_ino, false) {
if ni.is_reg() {
let next_size = ni.size();
let next_size = if next_size < window_size {
let next_size = if next_size == 0 {
continue;
} else if next_size < window_size {
next_size
} else if window_size >= self.meta.chunk_size as u64 {
window_size / self.meta.chunk_size as u64 * self.meta.chunk_size as u64
} else if next_size == 0 {
continue;
} else {
break;
};

View File

@ -699,21 +699,11 @@ impl RafsSuper {
// Old converters extract bootstraps from data blobs with inlined bootstrap and
// use the blob digest as the bootstrap file name. The last blob in the blob table
// from the bootstrap has a wrong blob id, so we need to fix it.
let mut fixed = false;
let blobs = rs.superblock.get_blob_infos();
for blob in blobs.iter() {
// Fix blob id for new images with old converters.
if blob.has_feature(BlobFeatures::INLINED_FS_META) {
blob.set_blob_id_from_meta_path(path.as_ref())?;
fixed = true;
}
}
if !fixed && !blob_accessible && !blobs.is_empty() {
// Fix blob id for old images with old converters.
let last = blobs.len() - 1;
let blob = &blobs[last];
if !blob.has_feature(BlobFeatures::CAP_TAR_TOC) {
rs.set_blob_id_from_meta_path(path.as_ref())?;
}
}
}

View File

@ -10,7 +10,7 @@ edition = "2018"
resolver = "2"
[dependencies]
fuse-backend-rs = "0.10.1"
fuse-backend-rs = "0.10.5"
libc = "0.2"
log = "0.4.8"
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
@ -28,14 +28,14 @@ nydus-rafs = { version = "0.2.2", path = "../rafs" }
nydus-storage = { version = "0.6.2", path = "../storage" }
nydus-utils = { version = "0.4.1", path = "../utils" }
vhost = { version = "0.5.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.7.0", optional = true }
vhost = { version = "0.6.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.8.0", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { version = "0.6.0", optional = true }
vm-memory = { version = "0.9.0", features = ["backend-mmap"], optional = true }
virtio-queue = { version = "0.7.0", optional = true }
vm-memory = { version = "0.10.0", features = ["backend-mmap"], optional = true }
[dev-dependencies]
vmm-sys-util = "0.10.0"
vmm-sys-util = "0.11.0"
[features]
default = ["fuse-backend-rs/fusedev"]

service/README.md Normal file
View File

@ -0,0 +1,157 @@
# nydus-service
The `nydus-service` crate helps to reuse the core services of nydus, allowing you to integrate nydus services into your project elegantly and easily. It provides:
* fuse service
* virtio-fs service
* fscache service
* blobcache service
It also supplies the nydus daemon and the daemon controller to help manage these services.
## Why you need it
As you may know, `nydusd` runs as a daemon to expose a [FUSE](https://www.kernel.org/doc/html/latest/filesystems/fuse.html) mountpoint, a [Virtio-FS](https://virtio-fs.gitlab.io/) mountpoint or an [EROFS](https://docs.kernel.org/filesystems/erofs.html) mountpoint inside the guest for containers to access, and it provides key features including:
- Container images are downloaded on demand
- Chunk level data deduplication
- Flatten image metadata and data to remove all intermediate layers
- Only usable image data is saved when building a container image
- Only usable image data is downloaded when running a container
- End-to-end image data integrity
- Compatible with the OCI artifacts spec and distribution spec
- Integrated with existing CNCF project Dragonfly to support image distribution in large clusters
- Different container image storage backends are supported
If you want to use these features natively in your project without preparing and invoking `nydusd` deliberately, `nydus-service` was born for exactly this.
## How to use
For example, reuse the fuse service with `nydus-service` in three steps.
**prepare the config**:
```json
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "",
"skip_verify": true,
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 4,
"auth": "YOUR_LOGIN_AUTH="
}
},
"cache": {
"type": "blobcache",
"config": {
"work_dir": "cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": false,
"enable_xattr": true,
"fs_prefetch": {
"enable": true,
"threads_count": 4
}
}
```
**create a daemon**:
```Rust
lazy_static! {
    static ref DAEMON_CONTROLLER: DaemonController = DaemonController::default();
}
let cmd = FsBackendMountCmd {
fs_type: FsBackendType::Rafs,
// Bootstrap path
source: bootstrap,
// Backend config
config,
// Virtual mountpoint
mountpoint: "/".to_string(),
// Prefetch files
prefetch_files: None,
};
let daemon = {
create_fuse_daemon(
// Mountpoint for the FUSE filesystem, target for `mount.fuse`
mountpoint,
// Vfs associated with the filesystem service object
vfs,
// Supervisor
None,
// Service instance identifier
id,
// Number of working threads to serve fuse requests
fuse_threads,
// daemon controller's waker
waker,
// Path to the Nydus daemon administration API socket
Some("api_sock"),
// Start Nydus daemon in upgrade mode
upgrade,
// Mounts FUSE filesystem in rw mode
!writable,
// FUSE server failover policy
failover_policy,
// Request structure to mount a backend filesystem instance
Some(cmd),
BTI.to_owned(),
)
.map(|d| {
info!("Fuse daemon started!");
d
})
.map_err(|e| {
error!("Failed in starting daemon: {}", e);
e
})?
};
DAEMON_CONTROLLER.set_daemon(daemon);
```
**start daemon controller**:
```rust
thread::spawn(move || {
let daemon = DAEMON_CONTROLLER.get_daemon();
if let Some(fs) = daemon.get_default_fs_service() {
DAEMON_CONTROLLER.set_fs_service(fs);
}
// Run the main event loop
if DAEMON_CONTROLLER.is_active() {
DAEMON_CONTROLLER.run_loop();
}
// Gracefully shut down the system.
info!("nydusd quits");
DAEMON_CONTROLLER.shutdown();
});
```
Then, you can make the most of nydus services in your project.
## Support
**Platforms**:
- x86_64
- aarch64
**Operating Systems**:
- Linux
## License
This code is licensed under [Apache-2.0](LICENSE-APACHE) or [BSD-3-Clause](LICENSE-BSD-3-Clause).

View File

@ -19,7 +19,7 @@ lint:
# NYDUS_NYDUSIFY=/path/to/latest/nydusify \
# make test
test: build lint
sudo -E ./smoke.test -test.v -test.timeout 10m -test.parallel=8 -test.run=$(TESTS)
sudo -E ./smoke.test -test.v -test.timeout 10m -test.parallel=16 -test.run=$(TESTS)
# WORK_DIR=/tmp \
# NYDUS_BUILDER=/path/to/latest/nydus-image \

View File

@ -3,35 +3,46 @@ module github.com/dragonflyoss/image-service/smoke
go 1.18
require (
github.com/containerd/containerd v1.6.18
github.com/containerd/nydus-snapshotter v0.6.1
github.com/google/uuid v1.2.0
github.com/containerd/containerd v1.7.0
github.com/containerd/nydus-snapshotter v0.10.0
github.com/google/uuid v1.3.0
github.com/opencontainers/go-digest v1.0.0
github.com/pkg/errors v0.9.1
github.com/pkg/xattr v0.4.9
github.com/stretchr/testify v1.8.1
golang.org/x/sys v0.4.0
github.com/stretchr/testify v1.8.2
golang.org/x/sys v0.6.0
)
require (
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/containerd/cgroups v1.0.4 // indirect
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.7 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/containerd/fifo v1.0.0 // indirect
github.com/containerd/fifo v1.1.0 // indirect
github.com/containers/ocicrypt v1.1.7 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/klauspost/compress v1.15.12 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/klauspost/compress v1.16.0 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/opencontainers/image-spec v1.1.0-rc3 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/sirupsen/logrus v1.9.0 // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/mod v0.8.0 // indirect
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 // indirect
google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21 // indirect
google.golang.org/grpc v1.50.1 // indirect
google.golang.org/protobuf v1.28.1 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1 // indirect
go.opencensus.io v0.24.0 // indirect
golang.org/x/crypto v0.1.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.8.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/term v0.6.0 // indirect
golang.org/x/text v0.8.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4 // indirect
google.golang.org/grpc v1.53.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/square/go-jose.v2 v2.5.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

File diff suppressed because it is too large.

View File

@ -159,7 +159,10 @@ func (a *APIV1TestSuite) TestPrefetch(t *testing.T) {
ctx.PrepareWorkDir(t)
defer ctx.Destroy(t)
rootFs := texture.MakeLowerLayer(t, filepath.Join(ctx.Env.WorkDir, "root-fs"))
rootFs := texture.MakeLowerLayer(
t,
filepath.Join(ctx.Env.WorkDir, "root-fs"),
texture.LargerFileMaker("large-blob.bin", 5))
rafs := a.rootFsToRafs(t, ctx, rootFs)
@ -192,7 +195,7 @@ func (a *APIV1TestSuite) TestPrefetch(t *testing.T) {
config.RafsMode = ctx.Runtime.RafsMode
err = nydusd.MountByAPI(config)
require.NoError(t, err)
time.Sleep(time.Millisecond * 10)
time.Sleep(time.Millisecond * 15)
bcm, err := nydusd.GetBlobCacheMetrics("")
require.NoError(t, err)

View File

@ -15,7 +15,8 @@ import (
)
const (
paramZran = "zran"
paramZran = "zran"
paramAmplifyIO = "amplify_io"
)
type ImageTestSuite struct {

View File

@ -77,8 +77,60 @@ func (n *NativeLayerTestSuite) TestMakeLayers() test.Generator {
}
}
func (n *NativeLayerTestSuite) testMakeLayers(ctx tool.Context, t *testing.T) {
func (n *NativeLayerTestSuite) TestAmplifyIO() test.Generator {
scenarios := tool.DescartesIterator{}
scenarios.
/* Common params */
Dimension(paramCompressor, []interface{}{"lz4_block"}).
Dimension(paramFSVersion, []interface{}{"5", "6"}).
Dimension(paramChunkSize, []interface{}{"0x100000"}).
Dimension(paramCacheType, []interface{}{"blobcache"}).
Dimension(paramCacheCompressed, []interface{}{false}).
Dimension(paramRafsMode, []interface{}{"direct"}).
Dimension(paramEnablePrefetch, []interface{}{true}).
/* Amplify io - target param */
Dimension(paramAmplifyIO, []interface{}{uint64(0x0), uint64(0x100000), uint64(0x10000000)}).
Skip(func(param *tool.DescartesItem) bool {
// Rafs v6 supports neither cached mode nor dummy cache
if param.GetString(paramFSVersion) == "6" {
return param.GetString(paramRafsMode) == "cached" || param.GetString(paramCacheType) == ""
}
// Dummy cache does not support prefetch
if param.GetString(paramCacheType) == "" && param.GetBool(paramEnablePrefetch) {
return true
}
return false
})
return func() (name string, testCase test.Case) {
if !scenarios.HasNext() {
return
}
scenario := scenarios.Next()
ctx := tool.DefaultContext(n.t)
ctx.Build.Compressor = scenario.GetString(paramCompressor)
ctx.Build.FSVersion = scenario.GetString(paramFSVersion)
ctx.Build.ChunkSize = scenario.GetString(paramChunkSize)
ctx.Runtime.CacheType = scenario.GetString(paramCacheType)
ctx.Runtime.CacheCompressed = scenario.GetBool(paramCacheCompressed)
ctx.Runtime.RafsMode = scenario.GetString(paramRafsMode)
ctx.Runtime.EnablePrefetch = scenario.GetBool(paramEnablePrefetch)
ctx.Runtime.AmplifyIO = scenario.GetUInt64(paramAmplifyIO)
return scenario.Str(), func(t *testing.T) {
n.testMakeLayers(*ctx, t)
}
}
}
func (n *NativeLayerTestSuite) testMakeLayers(ctx tool.Context, t *testing.T) {
packOption := converter.PackOption{
BuilderPath: ctx.Binary.Builder,
Compressor: ctx.Build.Compressor,
@ -154,6 +206,60 @@ func (n *NativeLayerTestSuite) testMakeLayers(ctx tool.Context, t *testing.T) {
lowerLayer.Overlay(t, upperLayer)
ctx.Env.BootstrapPath = overlayBootstrap
tool.Verify(t, ctx, lowerLayer.FileTree)
// Make base layers (used as a parent bootstrap)
packOption.ChunkDictPath = ""
baseLayer1 := texture.MakeMatrixLayer(t, filepath.Join(ctx.Env.WorkDir, "source-base-1"), "1")
baseLayer1BlobDigest := baseLayer1.Pack(t, packOption, ctx.Env.BlobDir)
baseLayer2 := texture.MakeMatrixLayer(t, filepath.Join(ctx.Env.WorkDir, "source-base-2"), "2")
baseLayer2BlobDigest := baseLayer2.Pack(t, packOption, ctx.Env.BlobDir)
lowerLayer = texture.MakeLowerLayer(t, filepath.Join(ctx.Env.WorkDir, "source-lower-1"))
lowerBlobDigest = lowerLayer.Pack(t, packOption, ctx.Env.BlobDir)
upperLayer = texture.MakeUpperLayer(t, filepath.Join(ctx.Env.WorkDir, "source-upper-1"))
upperBlobDigest = upperLayer.Pack(t, packOption, ctx.Env.BlobDir)
mergeOption = converter.MergeOption{
BuilderPath: ctx.Binary.Builder,
}
baseLayerDigests, baseBootstrap := tool.MergeLayers(t, ctx, mergeOption, []converter.Layer{
{
Digest: baseLayer1BlobDigest,
},
{
Digest: baseLayer2BlobDigest,
},
})
ctx.Env.BootstrapPath = baseBootstrap
require.Equal(t, []digest.Digest{baseLayer1BlobDigest, baseLayer2BlobDigest}, baseLayerDigests)
// Test merge from a parent bootstrap
mergeOption = converter.MergeOption{
ParentBootstrapPath: baseBootstrap,
ChunkDictPath: baseBootstrap,
BuilderPath: ctx.Binary.Builder,
}
actualDigests, overlayBootstrap = tool.MergeLayers(t, ctx, mergeOption, []converter.Layer{
{
Digest: lowerBlobDigest,
},
{
Digest: upperBlobDigest,
},
})
require.Equal(t, []digest.Digest{
baseLayer1BlobDigest,
baseLayer2BlobDigest,
lowerBlobDigest,
upperBlobDigest,
}, actualDigests)
ctx.Env.BootstrapPath = overlayBootstrap
baseLayer1.Overlay(t, baseLayer2).Overlay(t, lowerLayer).Overlay(t, upperLayer)
tool.Verify(t, ctx, baseLayer1.FileTree)
}
func TestNativeLayer(t *testing.T) {

View File

@ -13,7 +13,15 @@ import (
"github.com/dragonflyoss/image-service/smoke/tests/tool"
)
func MakeChunkDictLayer(t *testing.T, workDir string) *tool.Layer {
type LayerMaker func(t *testing.T, layer *tool.Layer)
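// LargerFileMaker returns a LayerMaker that adds a file of the given size in GiB,
// filled with random data, to the layer.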
func LargerFileMaker(path string, sizeGB int) LayerMaker {
return func(t *testing.T, layer *tool.Layer) {
layer.CreateLargeFile(t, path, sizeGB)
}
}
func MakeChunkDictLayer(t *testing.T, workDir string, makers ...LayerMaker) *tool.Layer {
layer := tool.NewLayer(t, workDir)
// Create regular file
@ -22,11 +30,20 @@ func MakeChunkDictLayer(t *testing.T, workDir string) *tool.Layer {
layer.CreateFile(t, "chunk-dict-file-3", []byte("dir-1/file-1"))
layer.CreateFile(t, "chunk-dict-file-4", []byte("dir-2/file-1"))
layer.CreateFile(t, "chunk-dict-file-5", []byte("dir-1/file-2"))
layer.CreateFile(t, "chunk-dict-file-6", []byte("This is poetry"))
layer.CreateFile(t, "chunk-dict-file-7", []byte("My name is long"))
layer.CreateHoledFile(t, "chunk-dict-file-9", []byte("hello world"), 1024, 1024*1024)
layer.CreateFile(t, "chunk-dict-file-10", []byte(""))
// Customized files
for _, maker := range makers {
maker(t, layer)
}
return layer
}
func MakeLowerLayer(t *testing.T, workDir string) *tool.Layer {
func MakeLowerLayer(t *testing.T, workDir string, makers ...LayerMaker) *tool.Layer {
layer := tool.NewLayer(t, workDir)
// Create regular file
@ -52,10 +69,42 @@ func MakeLowerLayer(t *testing.T, workDir string) *tool.Layer {
layer.CreateSpecialFile(t, "block-1", syscall.S_IFBLK)
layer.CreateSpecialFile(t, "fifo-1", syscall.S_IFIFO)
// Create file with Chinese name
layer.CreateFile(t, "唐诗三百首", []byte("This is poetry"))
// Create file with long name
layer.CreateFile(t, "/test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.test-😉-name.", []byte("My name is long"))
// Create symlink with non-existent source file
layer.CreateSymlink(t, "dir-1/file-deleted-symlink", "dir-1/file-deleted")
// Create holed file
layer.CreateHoledFile(t, "file-hole-1", []byte("hello world"), 1024, 1024*1024)
// Create empty file
layer.CreateFile(t, "empty.txt", []byte(""))
layer.CreateFile(t, "dir-1/file-2", []byte("dir-1/file-2"))
// Set file xattr (only `security.capability` xattr is supported in OCI layer)
tool.Run(t, fmt.Sprintf("setcap CAP_NET_RAW+ep %s", filepath.Join(workDir, "dir-1/file-2")))
// Note: The following test is omitted for now because containerd does not
// support creating layers with any xattr except "security." xattrs, as described
// in this issue: https://github.com/containerd/containerd/issues/8947
// Create file with an ACL:
//layer.CreateFile(t, "acl-file.txt", []byte(""))
// The following xattr key and value are equivalent to running this ACL
// command: "setfacl -x user:root:rwx acl-file.txt"
//layer.SetXattr(t, "acl-file.txt", "system.posix_acl_access", []byte{
// 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x07, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0x02, 0x00, 0x07, 0x00,
// 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x05, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0x10, 0x00, 0x07, 0x00,
// 0xFF, 0xFF, 0xFF, 0xFF, 0x20, 0x00, 0x05, 0x00, 0xFF, 0xFF, 0xFF, 0xFF });
// Customized files
for _, maker := range makers {
maker(t, layer)
}
return layer
}
@ -74,3 +123,15 @@ func MakeUpperLayer(t *testing.T, workDir string) *tool.Layer {
return layer
}
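// MakeMatrixLayer creates a layer with two small regular files whose names and
// contents are derived from the given id.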
func MakeMatrixLayer(t *testing.T, workDir, id string) *tool.Layer {
layer := tool.NewLayer(t, workDir)
// Create regular file
file1 := fmt.Sprintf("matrix-file-%s-1", id)
file2 := fmt.Sprintf("matrix-file-%s-2", id)
layer.CreateFile(t, file1, []byte(file1))
layer.CreateFile(t, file2, []byte(file2))
return layer
}

View File

@ -37,6 +37,7 @@ type RuntimeContext struct {
CacheCompressed bool
RafsMode string
EnablePrefetch bool
AmplifyIO uint64
}
type EnvContext struct {
@ -73,6 +74,7 @@ func DefaultContext(t *testing.T) *Context {
CacheCompressed: false,
RafsMode: "direct",
EnablePrefetch: true,
AmplifyIO: uint64(0x100000),
},
}
}

View File

@ -30,10 +30,10 @@ type File struct {
func GetXattrs(t *testing.T, path string) map[string]string {
xattrs := map[string]string{}
names, err := xattr.List(path)
names, err := xattr.LList(path)
require.NoError(t, err)
for _, name := range names {
data, err := xattr.Get(path, name)
data, err := xattr.LGet(path, name)
require.NoError(t, err)
xattrs[name] = string(data)
}
@ -41,7 +41,7 @@ func GetXattrs(t *testing.T, path string) map[string]string {
}
func NewFile(t *testing.T, path, target string) *File {
stat, err := os.Stat(path)
stat, err := os.Lstat(path)
require.NoError(t, err)
xattrs := GetXattrs(t, path)

View File

@ -47,35 +47,38 @@ func (d *DescartesItem) Str() string {
return sb.String()
}
func (d *DescartesItem) GetUInt64(name string) uint64 {
return d.vals[name].(uint64)
}
// Generator of Cartesian product.
//
// An example is below:
//
// import (
// "fmt"
// "github.com/dragonflyoss/image-service/smoke/tests/tool"
// )
// import (
// "fmt"
// "github.com/dragonflyoss/image-service/smoke/tests/tool"
// )
//
// products := tool.DescartesIterator{}
// products.
// Dimension("name", []interface{}{"foo", "imoer", "morgan"}).
// Dimension("age", []interface{}{"20", "30"}).
// Skip(func(item *tool.DescartesItem) bool {
// // skip ("morgan", "30")
// return item.GetString("name") == "morgan" && param.GetString("age") == "30"
// })
//
// // output:
// // age: 20, name: foo
// // age: 20, name: imoer
// // age: 20, name: morgan
// // age: 30, name: foo
// // age: 30, name: imoer
// for products.HasNext(){
// item := products.Next()
// fmt.Println(item.Str())
// }
// products := tool.DescartesIterator{}
// products.
// Dimension("name", []interface{}{"foo", "imoer", "morgan"}).
// Dimension("age", []interface{}{"20", "30"}).
// Skip(func(item *tool.DescartesItem) bool {
// // skip ("morgan", "30")
// return item.GetString("name") == "morgan" && param.GetString("age") == "30"
// })
//
// // output:
// // age: 20, name: foo
// // age: 20, name: imoer
// // age: 20, name: morgan
// // age: 30, name: foo
// // age: 30, name: imoer
// for products.HasNext(){
// item := products.Next()
// fmt.Println(item.Str())
// }
type DescartesIterator struct {
cursores []int
valLists [][]interface{}

View File

@ -8,6 +8,7 @@ import (
"bytes"
"compress/gzip"
"context"
"crypto/rand"
"io"
"io/ioutil"
"os"
@ -21,6 +22,7 @@ import (
"github.com/containerd/nydus-snapshotter/pkg/converter"
"github.com/opencontainers/go-digest"
"github.com/pkg/xattr"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/sys/unix"
)
@ -44,6 +46,31 @@ func (l *Layer) CreateFile(t *testing.T, name string, data []byte) {
require.NoError(t, err)
}
func (l *Layer) CreateLargeFile(t *testing.T, name string, sizeGB int) {
f, err := os.Create(filepath.Join(l.workDir, name))
require.NoError(t, err)
defer func() {
f.Close()
}()
_, err = io.CopyN(f, rand.Reader, int64(sizeGB)<<30)
assert.Nil(t, err)
}
func (l *Layer) CreateHoledFile(t *testing.T, name string, data []byte, offset, fileSize int64) {
f, err := os.Create(filepath.Join(l.workDir, name))
require.NoError(t, err)
defer func() {
f.Close()
}()
err = f.Truncate(fileSize)
require.NoError(t, err)
_, err = f.WriteAt(data, offset)
require.NoError(t, err)
}
func (l *Layer) CreateDir(t *testing.T, name string) {
err := os.MkdirAll(filepath.Join(l.workDir, name), 0755)
require.NoError(t, err)
@ -164,7 +191,7 @@ func (l *Layer) PackRef(t *testing.T, ctx Context, blobDir string, compress bool
return ociBlobDigest, rafsBlobDigest
}
func (l *Layer) Overlay(t *testing.T, upper *Layer) {
func (l *Layer) Overlay(t *testing.T, upper *Layer) *Layer {
// Handle whiteout/opaque files
for upperName := range upper.FileTree {
name := filepath.Base(upperName)
@ -198,6 +225,8 @@ func (l *Layer) Overlay(t *testing.T, upper *Layer) {
}
}
}
return l
}
func (l *Layer) recordFileTree(t *testing.T) {

View File

@ -69,6 +69,7 @@ type NydusdConfig struct {
LatestReadFiles bool
AccessPattern bool
PrefetchFiles []string
AmplifyIO uint64
}
type Nydusd struct {
@ -104,7 +105,8 @@ var configTpl = `
"digest_validate": {{.DigestValidate}},
"enable_xattr": true,
"latest_read_files": {{.LatestReadFiles}},
"access_pattern": {{.AccessPattern}}
"access_pattern": {{.AccessPattern}},
"amplify_io": {{.AmplifyIO}}
}
`

View File

@ -45,134 +45,134 @@ type Generator func() (name string, testCase Case)
//
// Example1: synchronized way
//
// import (
// "fmt"
// "testing"
// import (
// "fmt"
// "testing"
//
// "github.com/stretchr/testify/require"
// )
// "github.com/stretchr/testify/require"
// )
//
// type TestSuite struct{}
// type TestSuite struct{}
//
// func (s *TestSuite) TestOk(t *testing.T) {
// require.Equal(t, 1, 1)
// }
// func (s *TestSuite) TestOk(t *testing.T) {
// require.Equal(t, 1, 1)
// }
//
// func (s *TestSuite) TestFail(t *testing.T) {
// require.Equal(t, 1, 2)
// }
// func (s *TestSuite) TestFail(t *testing.T) {
// require.Equal(t, 1, 2)
// }
//
// func (s *TestSuite) TestDynamicTest() TestGenerator {
// caseNum := 0
// return func() (name string, testCase TestCase) {
// if caseNum <= 5 {
// testCase = func(t *testing.T) {
// require.Equal(t, 1, 2)
// }
// }
// caseNum++
// return fmt.Sprintf("dynamic_test_%v", caseNum), testCase
// }
// }
// func (s *TestSuite) TestDynamicTest() TestGenerator {
// caseNum := 0
// return func() (name string, testCase TestCase) {
// if caseNum <= 5 {
// testCase = func(t *testing.T) {
// require.Equal(t, 1, 2)
// }
// }
// caseNum++
// return fmt.Sprintf("dynamic_test_%v", caseNum), testCase
// }
// }
//
// func Test1(t *testing.T) {
// Run(t, &TestSuite{}, Sync)
// }
// func Test1(t *testing.T) {
// Run(t, &TestSuite{}, Sync)
// }
//
// Output:
// `go test -v --parallel 4`
// 1. The cases are executed serially.
// 2. The dynamic tests are generated and executed.
//
// === RUN Test1
// === RUN Test1/dynamic_test_1
// === RUN Test1/dynamic_test_2
// === RUN Test1/dynamic_test_3
// === RUN Test1/dynamic_test_4
// === RUN Test1/dynamic_test_5
// === RUN Test1/dynamic_test_6
// === RUN Test1/TestFail
// suite_test.go:18:
// Error Trace: suite_test.go:18
// Error: Not equal:
// expected: 1
// actual : 2
// Test: Test1/TestFail
// === RUN Test1/TestOk
// --- FAIL: Test1 (0.00s)
// --- PASS: Test1/dynamic_test_1 (0.00s)
// --- PASS: Test1/dynamic_test_2 (0.00s)
// --- PASS: Test1/dynamic_test_3 (0.00s)
// --- PASS: Test1/dynamic_test_4 (0.00s)
// --- PASS: Test1/dynamic_test_5 (0.00s)
// --- PASS: Test1/dynamic_test_6 (0.00s)
// --- FAIL: Test1/TestFail (0.00s)
// --- PASS: Test1/TestOk (0.00s)
// `go test -v --parallel 4`
// 1. The cases are executed serially.
// 2. The dynamic tests are generated and executed.
//
// === RUN Test1
// === RUN Test1/dynamic_test_1
// === RUN Test1/dynamic_test_2
// === RUN Test1/dynamic_test_3
// === RUN Test1/dynamic_test_4
// === RUN Test1/dynamic_test_5
// === RUN Test1/dynamic_test_6
// === RUN Test1/TestFail
// suite_test.go:18:
// Error Trace: suite_test.go:18
// Error: Not equal:
// expected: 1
// actual : 2
// Test: Test1/TestFail
// === RUN Test1/TestOk
// --- FAIL: Test1 (0.00s)
// --- PASS: Test1/dynamic_test_1 (0.00s)
// --- PASS: Test1/dynamic_test_2 (0.00s)
// --- PASS: Test1/dynamic_test_3 (0.00s)
// --- PASS: Test1/dynamic_test_4 (0.00s)
// --- PASS: Test1/dynamic_test_5 (0.00s)
// --- PASS: Test1/dynamic_test_6 (0.00s)
// --- FAIL: Test1/TestFail (0.00s)
// --- PASS: Test1/TestOk (0.00s)
//
// Example2: asynchronized way
//
// import (
// "fmt"
// "testing"
// "time"
// )
// import (
// "fmt"
// "testing"
// "time"
// )
//
// type AsyncTestSuite struct{}
// type AsyncTestSuite struct{}
//
// func (s *AsyncTestSuite) Test1(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
// func (s *AsyncTestSuite) Test1(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
//
// func (s *AsyncTestSuite) Test2(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
// func (s *AsyncTestSuite) Test2(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
//
// func (s *AsyncTestSuite) Test3(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
// func (s *AsyncTestSuite) Test3(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
//
// func (s *AsyncTestSuite) TestDynamicTest() TestGenerator {
// caseNum := 0
// return func() (name string, testCase TestCase) {
// if caseNum <= 5 {
// testCase = func(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
// }
// caseNum++
// return "", testCase
// }
// }
// func (s *AsyncTestSuite) TestDynamicTest() TestGenerator {
// caseNum := 0
// return func() (name string, testCase TestCase) {
// if caseNum <= 5 {
// testCase = func(t *testing.T) {
// for i := 0; i < 5; i++ {
// time.Sleep(time.Second)
// }
// }
// }
// caseNum++
// return "", testCase
// }
// }
//
// func Test1(t *testing.T) {
// Run(t, &AsyncTestSuite{})
// }
// func Test1(t *testing.T) {
// Run(t, &AsyncTestSuite{})
// }
//
// Output:
// `go test -v --parallel 4`
// 1. The cases are executed in parallel, which leads to random completion order.
// 2. The dynamic tests are named automatically when no customized name is given.
//
// --- PASS: Test1 (0.00s)
// --- PASS: Test1/TestDynamicTest_4 (5.00s)
// --- PASS: Test1/Test1 (5.00s)
// --- PASS: Test1/TestDynamicTest_6 (5.00s)
// --- PASS: Test1/TestDynamicTest_5 (5.00s)
// --- PASS: Test1/TestDynamicTest_2 (5.00s)
// --- PASS: Test1/TestDynamicTest_3 (5.00s)
// --- PASS: Test1/TestDynamicTest_1 (5.00s)
// --- PASS: Test1/Test3 (5.00s)
// --- PASS: Test1/Test2 (5.00s)
//
// `go test -v --parallel 4`
// 1. The cases are executed in parallel, which leads to random completion order.
// 2. The dynamic tests are named automatically when no customized name is given.
//
// --- PASS: Test1 (0.00s)
// --- PASS: Test1/TestDynamicTest_4 (5.00s)
// --- PASS: Test1/Test1 (5.00s)
// --- PASS: Test1/TestDynamicTest_6 (5.00s)
// --- PASS: Test1/TestDynamicTest_5 (5.00s)
// --- PASS: Test1/TestDynamicTest_2 (5.00s)
// --- PASS: Test1/TestDynamicTest_3 (5.00s)
// --- PASS: Test1/TestDynamicTest_1 (5.00s)
// --- PASS: Test1/Test3 (5.00s)
// --- PASS: Test1/Test2 (5.00s)
func Run(t *testing.T, suite interface{}, opts ...Option) {
cases := reflect.ValueOf(suite)

View File

@ -29,6 +29,7 @@ func Verify(t *testing.T, ctx Context, expectedFiles map[string]*File) {
CacheCompressed: ctx.Runtime.CacheCompressed,
RafsMode: ctx.Runtime.RafsMode,
DigestValidate: false,
AmplifyIO: ctx.Runtime.AmplifyIO,
}
nydusd, err := NewNydusd(config)

View File

@ -13,6 +13,7 @@ use std::path::{Path, PathBuf};
use std::rc::Rc;
use anyhow::{anyhow, bail, Context, Error, Result};
use base64::Engine;
use serde::{Deserialize, Serialize};
use nydus_rafs::metadata::chunk::ChunkWrapper;
@ -561,12 +562,14 @@ impl StargzTreeBuilder {
if entry.has_xattr() {
for (name, value) in entry.xattrs.iter() {
flags |= RafsV5InodeFlags::XATTR;
let value = base64::decode(value).with_context(|| {
format!(
"parse xattr name {:?} of file {:?} failed",
entry_path, name
)
})?;
let value = base64::engine::general_purpose::STANDARD
.decode(value)
.with_context(|| {
format!(
"parse xattr name {:?} of file {:?} failed",
entry_path, name
)
})?;
xattrs.add(OsString::from(name), value)?;
}
}

View File

@ -95,7 +95,9 @@ impl Blob {
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
) -> Result<()> {
if ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc) {
if !ctx.blob_features.contains(BlobFeatures::SEPARATE)
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
blob_ctx.write_tar_header(
blob_writer,
@ -121,6 +123,14 @@ impl Blob {
Ok(())
}
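// For conversions to ref-type blobs, always compress the chunk info array with Zstd;
// otherwise follow the compressor configured in the build context.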
fn get_meta_compressor(ctx: &BuildContext) -> compress::Algorithm {
if ctx.conversion_type.is_to_ref() {
compress::Algorithm::Zstd
} else {
ctx.compressor
}
}
pub(crate) fn dump_meta_data(
ctx: &BuildContext,
blob_ctx: &mut BlobContext,
@ -153,7 +163,7 @@ impl Blob {
header.set_ci_zran(false);
};
let mut compressor = compress::Algorithm::Zstd;
let mut compressor = Self::get_meta_compressor(ctx);
let (compressed_data, compressed) = compress::compress(ci_data, compressor)
.with_context(|| "failed to compress blob chunk info array".to_string())?;
if !compressed {
@ -254,3 +264,41 @@ impl Blob {
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_get_meta_compressor() {
let mut ctx = BuildContext::default();
let conversion_types = vec![
ConversionType::DirectoryToRafs,
ConversionType::DirectoryToStargz,
ConversionType::DirectoryToTargz,
ConversionType::EStargzToRafs,
ConversionType::EStargzToRef,
ConversionType::EStargzIndexToRef,
ConversionType::TargzToRafs,
ConversionType::TargzToStargz,
ConversionType::TargzToRef,
ConversionType::TarToStargz,
ConversionType::TarToRafs,
ConversionType::TarToRef,
];
for c_type in conversion_types {
ctx = BuildContext {
conversion_type: c_type,
..ctx
};
let compressor = Blob::get_meta_compressor(&ctx);
if ctx.conversion_type.is_to_ref() {
assert_eq!(compressor, compress::Algorithm::Zstd);
} else {
assert_eq!(compressor, compress::Algorithm::None);
}
}
}
}

View File

@ -942,19 +942,44 @@ impl Node {
pub fn v6_dir_d_size(&self, tree: &Tree) -> Result<u64> {
ensure!(self.is_dir(), "{} is not a directory", self);
// Use length in bytes, instead of length in characters.
let mut d_size: u64 = (".".as_bytes().len()
+ size_of::<RafsV6Dirent>()
+ "..".as_bytes().len()
+ size_of::<RafsV6Dirent>()) as u64;
let mut d_size = 0;
for child in tree.children.iter() {
let len = child.node.name().as_bytes().len() + size_of::<RafsV6Dirent>();
// erofs disk format requires dirent to be aligned with 4096.
if (d_size % EROFS_BLOCK_SIZE) + len as u64 > EROFS_BLOCK_SIZE {
d_size = div_round_up(d_size as u64, EROFS_BLOCK_SIZE) * EROFS_BLOCK_SIZE;
// Sort all children if "." and ".." are not at the head after sorting.
if !tree.children.is_empty() && tree.children[0].node.name() < ".." {
let mut children = Vec::with_capacity(tree.children.len() + 2);
let dot = OsString::from(".");
let dotdot = OsString::from("..");
children.push(dot.as_os_str());
children.push(dotdot.as_os_str());
for child in tree.children.iter() {
children.push(child.node.name());
}
children.sort_unstable();
for c in children {
// Use length in bytes, instead of length in characters.
let len = c.as_bytes().len() + size_of::<RafsV6Dirent>();
// erofs disk format requires dirent to be aligned to block size.
if (d_size % EROFS_BLOCK_SIZE) + len as u64 > EROFS_BLOCK_SIZE {
d_size = round_up(d_size as u64, EROFS_BLOCK_SIZE);
}
d_size += len as u64;
}
} else {
// Avoid sorting again if "." and ".." are at the head after sorting due to that
// `tree.children` has already been sorted.
d_size = (".".as_bytes().len()
+ size_of::<RafsV6Dirent>()
+ "..".as_bytes().len()
+ size_of::<RafsV6Dirent>()) as u64;
for child in tree.children.iter() {
let len = child.node.name().as_bytes().len() + size_of::<RafsV6Dirent>();
// erofs disk format requires dirent to be aligned to block size.
if (d_size % EROFS_BLOCK_SIZE) + len as u64 > EROFS_BLOCK_SIZE {
d_size = round_up(d_size as u64, EROFS_BLOCK_SIZE);
}
d_size += len as u64;
}
d_size += len as u64;
}
Ok(d_size)

View File

@ -327,11 +327,17 @@ fn prepare_cmd_args(bti_string: &'static str) -> App {
.subcommand(
App::new("merge")
.about("Merge multiple bootstraps into a overlaid bootstrap")
.arg(
Arg::new("parent-bootstrap")
.long("parent-bootstrap")
.help("File path of the parent/referenced RAFS metadata blob (optional)")
.required(false),
)
.arg(
Arg::new("bootstrap")
.long("bootstrap")
.short('B')
.help("output path of nydus overlaid bootstrap"),
.help("Output path of nydus overlaid bootstrap"),
)
.arg(
Arg::new("blob-dir")
@ -354,6 +360,12 @@ fn prepare_cmd_args(bti_string: &'static str) -> App {
.required(false)
.help("RAFS blob digest list separated by comma"),
)
.arg(
Arg::new("original-blob-ids")
.long("original-blob-ids")
.required(false)
.help("original blob id list separated by comma, it may usually be a sha256 hex string"),
)
.arg(
Arg::new("blob-sizes")
.long("blob-sizes")
@ -897,6 +909,12 @@ impl Command {
.map(|item| item.trim().to_string())
.collect()
});
let original_blob_ids: Option<Vec<String>> =
matches.get_one::<String>("original-blob-ids").map(|list| {
list.split(',')
.map(|item| item.trim().to_string())
.collect()
});
let blob_toc_sizes: Option<Vec<u64>> =
matches.get_one::<String>("blob-toc-sizes").map(|list| {
list.split(',')
@ -930,10 +948,14 @@ impl Command {
};
ctx.configuration = config.clone();
let parent_bootstrap_path = Self::get_parent_bootstrap(matches)?;
let output = Merger::merge(
&mut ctx,
parent_bootstrap_path,
source_bootstrap_paths,
blob_digests,
original_blob_ids,
blob_sizes,
blob_toc_digests,
blob_toc_sizes,
@ -1336,9 +1358,10 @@ impl Command {
let file_type = metadata(path.as_ref())
.context(format!("failed to access path {:?}", path.as_ref()))?
.file_type();
// The SOURCE can be a regular file, a FIFO file, or a char device like /dev/stdin, etc.
ensure!(
file_type.is_file() || file_type.is_fifo(),
"specified path must be a regular/fifo file: {:?}",
file_type.is_file() || file_type.is_fifo() || file_type.is_char_device(),
"specified path must be a regular/fifo/char_device file: {:?}",
path.as_ref()
);
Ok(())
@ -1355,3 +1378,12 @@ impl Command {
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::Command;
#[test]
fn test_ensure_file() {
Command::ensure_file("/dev/stdin").unwrap();
}
}
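As a hedged usage sketch of the new `merge` options above (bootstrap paths and blob IDs are placeholders; `--original-blob-ids` overrides the blob IDs otherwise derived from the bootstrap file names, and `--parent-bootstrap` merges on top of an existing bootstrap):

```shell
# BLOB_ID_1/BLOB_ID_2 are the sha256 hex IDs of the original data blobs,
# one entry per source bootstrap, in the same order as the SOURCE arguments.
nydus-image merge \
  --parent-bootstrap ./parent.bootstrap \
  --bootstrap ./merged.bootstrap \
  --original-blob-ids "$BLOB_ID_1,$BLOB_ID_2" \
  ./layer-1.bootstrap ./layer-2.bootstrap
```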

View File

@ -2,6 +2,7 @@
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::collections::HashSet;
use std::convert::TryFrom;
use std::ops::Deref;
@ -31,6 +32,20 @@ use crate::core::tree::{MetadataTreeBuilder, Tree};
pub struct Merger {}
impl Merger {
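// Fetch the idx-th entry from an optional string list, returning None when no list
// was supplied, and an error when the list has fewer entries than expected.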
fn get_string_from_list(
original_ids: &Option<Vec<String>>,
idx: usize,
) -> Result<Option<String>> {
Ok(if let Some(id) = &original_ids {
let id_string = id
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(id_string.clone())
} else {
None
})
}
fn get_digest_from_list(digests: &Option<Vec<String>>, idx: usize) -> Result<Option<[u8; 32]>> {
Ok(if let Some(digests) = &digests {
let digest = digests
@ -61,8 +76,10 @@ impl Merger {
#[allow(clippy::too_many_arguments)]
pub fn merge(
ctx: &mut BuildContext,
parent_bootstrap_path: Option<String>,
sources: Vec<PathBuf>,
blob_digests: Option<Vec<String>>,
original_blob_ids: Option<Vec<String>>,
blob_sizes: Option<Vec<u64>>,
blob_toc_digests: Option<Vec<String>>,
blob_toc_sizes: Option<Vec<u64>>,
@ -81,6 +98,22 @@ impl Merger {
sources.len(),
);
}
if let Some(original_ids) = original_blob_ids.as_ref() {
ensure!(
original_ids.len() == sources.len(),
"number of original blob id entries {} doesn't match number of sources {}",
original_ids.len(),
sources.len(),
);
}
if let Some(sizes) = blob_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of blob size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
if let Some(toc_digests) = blob_toc_digests.as_ref() {
ensure!(
toc_digests.len() == sources.len(),
@ -106,6 +139,26 @@ impl Merger {
);
}
let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester);
// Load parent bootstrap
let mut blob_idx_map = HashMap::new();
let mut parent_layers = 0;
if let Some(parent_bootstrap_path) = &parent_bootstrap_path {
let (rs, _) =
RafsSuper::load_from_file(parent_bootstrap_path, config_v2.clone(), false, false)
.context(format!("load parent bootstrap {:?}", parent_bootstrap_path))?;
tree = Some(Tree::from_bootstrap(&rs, &mut ())?);
let blobs = rs.superblock.get_blob_infos();
for blob in &blobs {
let blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
blob_idx_map.insert(blob_ctx.blob_id.clone(), blob_mgr.len());
blob_mgr.add(blob_ctx);
}
parent_layers = blobs.len();
}
// Get the blobs that come from the chunk dict bootstrap.
let mut chunk_dict_blobs = HashSet::new();
let mut config = None;
@ -121,8 +174,6 @@ impl Merger {
let mut fs_version = RafsVersion::V6;
let mut chunk_size = None;
let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester);
for (layer_idx, bootstrap_path) in sources.iter().enumerate() {
let (rs, _) = RafsSuper::load_from_file(bootstrap_path, config_v2.clone(), true, false)
@ -136,9 +187,9 @@ impl Merger {
ctx.digester = rs.meta.get_digester();
ctx.explicit_uidgid = rs.meta.explicit_uidgid();
let mut blob_idx_map = Vec::new();
let mut parent_blob_added = false;
for blob in rs.superblock.get_blob_infos() {
let blobs = &rs.superblock.get_blob_infos();
for blob in blobs {
let mut blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
if let Some(chunk_size) = chunk_size {
ensure!(
@ -166,7 +217,14 @@ impl Merger {
} else {
// The blob id (blob sha256 hash) in parent bootstrap is invalid for nydusd
// runtime, so we change it to the hash of the whole tar blob.
blob_ctx.blob_id = BlobInfo::get_blob_id_from_meta_path(bootstrap_path)?;
if let Some(original_id) =
Self::get_string_from_list(&original_blob_ids, layer_idx)?
{
blob_ctx.blob_id = original_id;
} else {
blob_ctx.blob_id =
BlobInfo::get_blob_id_from_meta_path(bootstrap_path)?;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_digests, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
@ -191,15 +249,8 @@ impl Merger {
}
}
let mut found = false;
for (idx, blob) in blob_mgr.get_blobs().iter().enumerate() {
if blob.blob_id == blob_ctx.blob_id {
blob_idx_map.push(idx as u32);
found = true;
}
}
if !found {
blob_idx_map.push(blob_mgr.len() as u32);
if !blob_idx_map.contains_key(&blob.blob_id()) {
blob_idx_map.insert(blob.blob_id().clone(), blob_mgr.len());
blob_mgr.add(blob_ctx);
}
}
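// Editor note: keying `blob_idx_map` by the bootstrap's original blob_id lets the
// chunk-remapping loop below resolve the final blob-table index directly from
// `blobs[origin_blob_index].blob_id()`, replacing the previous linear scan over
// `blob_mgr` for each blob.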
@ -218,8 +269,11 @@ impl Merger {
))?;
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
// Set the blob index of chunk to real index in blob table of final bootstrap.
chunk.inner.set_blob_index(blob_idx_map[origin_blob_index]);
let blob_ctx = blobs[origin_blob_index].as_ref();
if let Some(blob_index) = blob_idx_map.get(&blob_ctx.blob_id()) {
// Set the blob index of chunk to real index in blob table of final bootstrap.
chunk.inner.set_blob_index(*blob_index as u32);
}
}
// Set node's layer index to distinguish same inode number (from bootstrap)
// between different layers.
@ -227,7 +281,7 @@ impl Merger {
"too many layers {}, limited to {}",
layer_idx,
u16::MAX
))?;
))? + parent_layers as u16;
node.overlay = Overlay::UpperAddition;
match node.whiteout_type(WhiteoutSpec::Oci) {
// Insert whiteouts at the head, so they will be handled first when

View File

@ -21,7 +21,7 @@ use nix::sys::signal;
use rlimit::Resource;
use nydus::{get_build_time_info, SubCmdArgs};
use nydus_api::BuildTimeInfo;
use nydus_api::{BuildTimeInfo, ConfigV2};
use nydus_app::{dump_program_info, setup_logging};
use nydus_service::daemon::DaemonController;
use nydus_service::{
@ -423,7 +423,16 @@ fn process_fs_service(
)
}
None => match args.value_of("config") {
Some(v) => std::fs::read_to_string(v)?,
Some(v) => {
let auth = std::env::var("IMAGE_PULL_AUTH").ok();
if auth.is_some() {
let mut config = ConfigV2::from_file(v)?;
config.update_registry_auth_info(&auth);
serde_json::to_string(&config)?
} else {
std::fs::read_to_string(v)?
}
}
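// For example (an assumption about the expected value format, not stated in this
// patch): launching nydusd with IMAGE_PULL_AUTH set to the same base64-encoded
// "username:password" string used for `registry.auth` would inject that credential
// into the loaded ConfigV2 before the filesystem service starts.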
None => {
let e = NydusError::InvalidArguments(
"both --config and --localfs-dir are missing".to_string(),

View File

@ -40,7 +40,8 @@ impl<'a> ServiceArgs for SubCmdArgs<'a> {
}
fn is_present(&self, key: &str) -> bool {
self.subargs.get_flag(key) || self.args.get_flag(key)
matches!(self.subargs.try_get_one::<bool>(key), Ok(Some(true)))
|| matches!(self.args.try_get_one::<bool>(key), Ok(Some(true)))
}
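// Editor note: unlike `get_flag`, which panics when `key` is not defined for one of
// the two argument sets, `try_get_one::<bool>` returns an Err in that case, so a flag
// known only to the subcommand (or only to the global args) is treated as absent.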
}

View File

@ -10,29 +10,38 @@ edition = "2018"
[dependencies]
arc-swap = "1.5"
base64 = { version = "0.13.0", optional = true }
base64 = { version = "0.21", optional = true }
bitflags = "1.2.1"
hex = "0.4.3"
hmac = { version = "0.12.1", optional = true }
http = { version = "0.2.8", optional = true }
httpdate = { version = "1.0", optional = true }
hyper = {version = "0.14.11", optional = true}
hyperlocal = {version = "0.8.0", optional = true}
hyper = { version = "0.14.11", optional = true }
hyperlocal = { version = "0.8.0", optional = true }
lazy_static = "1.4.0"
leaky-bucket = "0.12.1"
libc = "0.2"
log = "0.4.8"
nix = "0.24"
reqwest = { version = "0.11.11", features = ["blocking", "json"], optional = true }
reqwest = { version = "0.11.14", features = [
"blocking",
"json",
], optional = true }
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = { version = "0.10.2", optional = true }
tar = "0.4.38"
tar = "0.4.40"
time = { version = "0.3.14", features = ["formatting"], optional = true }
tokio = { version = "1.19.0", features = ["macros", "rt", "rt-multi-thread", "sync", "time"] }
tokio = { version = "1.19.0", features = [
"macros",
"rt",
"rt-multi-thread",
"sync",
"time",
] }
url = { version = "2.1.1", optional = true }
vm-memory = "0.9"
fuse-backend-rs = "0.10"
vm-memory = "0.10"
fuse-backend-rs = "0.10.5"
gpt = { version = "3.0.0", optional = true }
nydus-api = { version = "0.2", path = "../api" }
@ -41,8 +50,8 @@ nydus-error = { version = "0.2", path = "../error" }
sha1 = { version = "0.10.5", optional = true }
[dev-dependencies]
vmm-sys-util = "0.10"
tar = "0.4.38"
vmm-sys-util = "0.11"
tar = "0.4.40"
regex = "1.7.0"
[features]
@ -55,4 +64,8 @@ backend-http-proxy = ["hyper", "hyperlocal", "http", "reqwest", "url"]
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "aarch64-unknown-linux-gnu", "aarch64-apple-darwin"]
targets = [
"x86_64-unknown-linux-gnu",
"aarch64-unknown-linux-gnu",
"aarch64-apple-darwin",
]

View File

@ -403,14 +403,17 @@ impl Connection {
} else {
mirror_cloned.config.ping_url.clone()
};
info!("Mirror health checking url: {}", mirror_health_url);
info!(
"[mirror] start health check, ping url: {}",
mirror_health_url
);
let client = Client::new();
loop {
// Try to recover the mirror server when it is unavailable.
if !mirror_cloned.status.load(Ordering::Relaxed) {
info!(
"Mirror server {} unhealthy, try to recover",
"[mirror] server unhealthy, try to recover: {}",
mirror_cloned.config.host
);
@ -422,14 +425,17 @@ impl Connection {
// If the response status is less than StatusCode::INTERNAL_SERVER_ERROR,
// the mirror server is recovered.
if resp.status() < StatusCode::INTERNAL_SERVER_ERROR {
info!("Mirror server {} recovered", mirror_cloned.config.host);
info!(
"[mirror] server recovered: {}",
mirror_cloned.config.host
);
mirror_cloned.failed_times.store(0, Ordering::Relaxed);
mirror_cloned.status.store(true, Ordering::Relaxed);
}
})
.map_err(|e| {
warn!(
"Mirror server {} is not recovered: {}",
"[mirror] failed to recover server: {}, {}",
mirror_cloned.config.host, e
);
});
@ -448,13 +454,6 @@ impl Connection {
self.shutdown.store(true, Ordering::Release);
}
/// If the auth_through is enable, all requests are send to the mirror server.
/// If the auth_through disabled, e.g. P2P/Dragonfly, we try to avoid sending
/// non-authorization request to the mirror server, which causes performance loss.
/// requesting_auth means this request is to get authorization from a server,
/// which must be a non-authorization request.
/// IOW, only the requesting_auth is false and the headers contain authorization token,
/// we send this request to mirror.
#[allow(clippy::too_many_arguments)]
pub fn call<R: Read + Clone + Send + 'static>(
&self,
@ -464,8 +463,6 @@ impl Connection {
data: Option<ReqBody<R>>,
headers: &mut HeaderMap,
catch_status: bool,
// This means the request is dedicated to authorization.
requesting_auth: bool,
) -> ConnectionResult<Response> {
if self.shutdown.load(Ordering::Acquire) {
return Err(ConnectionError::Disconnected);
@ -524,27 +521,10 @@ impl Connection {
}
}
let mut mirror_enabled = false;
if !self.mirrors.is_empty() {
let mut fallback_due_auth = false;
mirror_enabled = true;
for mirror in self.mirrors.iter() {
// With configuration `auth_through` disabled, we should not intend to send authentication
// request to mirror. Mainly because mirrors like P2P/Dragonfly has a poor performance when
// relaying non-data requests. But it's still possible that ever returned token is expired.
// So mirror might still respond us with status code UNAUTHORIZED, which should be handle
// by sending authentication request to the original registry.
//
// - For non-authentication request with token in request header, handle is as usual requests to registry.
// This request should already take token in header.
// - For authentication request
// 1. auth_through is disabled(false): directly pass below mirror translations and jump to original registry handler.
// 2. auth_through is enabled(true): try to get authenticated from mirror and should also handle status code UNAUTHORIZED.
if !mirror.config.auth_through
&& (!headers.contains_key(HEADER_AUTHORIZATION) || requesting_auth)
{
fallback_due_auth = true;
break;
}
if mirror.status.load(Ordering::Relaxed) {
let data_cloned = data.as_ref().cloned();
@ -556,7 +536,7 @@ impl Connection {
}
let current_url = mirror.mirror_url(url)?;
debug!("mirror server url {}", current_url);
debug!("[mirror] replace to: {}", current_url);
let result = self.call_inner(
&self.client,
@ -578,14 +558,14 @@ impl Connection {
}
Err(err) => {
warn!(
"request mirror server failed, mirror: {:?}, error: {:?}",
mirror, err
"[mirror] request failed, server: {:?}, {:?}",
mirror.config.host, err
);
mirror.failed_times.fetch_add(1, Ordering::Relaxed);
if mirror.failed_times.load(Ordering::Relaxed) >= mirror.failure_limit {
warn!(
"reach to failure limit {}, disable mirror: {:?}",
"[mirror] exceed failure limit {}, server disabled: {:?}",
mirror.failure_limit, mirror
);
mirror.status.store(false, Ordering::Relaxed);
@ -598,9 +578,10 @@ impl Connection {
headers.remove(HeaderName::from_str(key).unwrap());
}
}
if !fallback_due_auth {
warn!("Request to all mirror server failed, fallback to original server.");
}
}
if mirror_enabled {
warn!("[mirror] request all servers failed, fallback to original server.");
}
self.call_inner(

View File

@ -214,7 +214,6 @@ impl BlobReader for HttpProxyReader {
None,
&mut HeaderMap::new(),
true,
false,
)
.map(|resp| resp.headers().to_owned())
.map_err(|e| HttpProxyError::RemoteRequest(e).into())
@ -255,15 +254,7 @@ impl BlobReader for HttpProxyReader {
.map_err(|e| HttpProxyError::ConstructHeader(format!("{}", e)))?,
);
let mut resp = connection
.call::<&[u8]>(
Method::GET,
uri.as_str(),
None,
None,
&mut headers,
true,
false,
)
.call::<&[u8]>(Method::GET, uri.as_str(), None, None, &mut headers, true)
.map_err(HttpProxyError::RemoteRequest)?;
Ok(resp

View File

@ -89,15 +89,7 @@ where
let resp = self
.connection
.call::<&[u8]>(
Method::HEAD,
url.as_str(),
None,
None,
&mut headers,
true,
false,
)
.call::<&[u8]>(Method::HEAD, url.as_str(), None, None, &mut headers, true)
.map_err(ObjectStorageError::Request)?;
let content_length = resp
.headers()
@ -136,15 +128,7 @@ where
// Safe because the call() is a synchronous operation.
let mut resp = self
.connection
.call::<&[u8]>(
Method::GET,
url.as_str(),
None,
None,
&mut headers,
true,
false,
)
.call::<&[u8]>(Method::GET, url.as_str(), None, None, &mut headers, true)
.map_err(ObjectStorageError::Request)?;
Ok(resp
.copy_to(&mut buf)

View File

@ -8,6 +8,7 @@ use std::io::Result;
use std::sync::Arc;
use std::time::SystemTime;
use base64::Engine;
use hmac::{Hmac, Mac};
use reqwest::header::HeaderMap;
use reqwest::Method;
@ -99,7 +100,7 @@ impl ObjectStorageState for OssState {
.chain_update(data.as_bytes())
.finalize()
.into_bytes();
let signature = base64::encode(&hmac);
let signature = base64::engine::general_purpose::STANDARD.encode(&hmac);
let authorization = format!("OSS {}:{}", self.access_key_id, signature);

View File

@ -7,11 +7,12 @@ use std::collections::HashMap;
use std::error::Error;
use std::io::{Read, Result};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, RwLock};
use std::sync::{Arc, Once, RwLock};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use std::{fmt, thread};
use arc_swap::ArcSwapOption;
use arc_swap::{ArcSwap, ArcSwapOption};
use base64::Engine;
use reqwest::blocking::Response;
pub use reqwest::header::HeaderMap;
use reqwest::header::{HeaderValue, CONTENT_LENGTH};
@ -35,6 +36,8 @@ const REDIRECTED_STATUS_CODE: [StatusCode; 2] = [
StatusCode::TEMPORARY_REDIRECT,
];
const REGISTRY_DEFAULT_TOKEN_EXPIRATION: u64 = 10 * 60; // in seconds
/// Error codes related to registry storage backend operations.
#[derive(Debug)]
pub enum RegistryError {
@ -115,13 +118,15 @@ impl HashCache {
#[derive(Clone, serde::Deserialize)]
struct TokenResponse {
/// Registry token string.
token: String,
/// Registry token period of validity, in seconds.
#[serde(default = "default_expires_in")]
expires_in: u64,
}
fn default_expires_in() -> u64 {
10 * 60
REGISTRY_DEFAULT_TOKEN_EXPIRATION
}
#[derive(Debug)]
@ -188,8 +193,8 @@ struct RegistryState {
// Example: RwLock<HashMap<"<blob_id>", "<redirected_url>">>
cached_redirect: HashCache,
// The expiration time of the token, which is obtained from the registry server.
refresh_token_time: ArcSwapOption<u64>,
// The epoch timestamp of token expiration, which is obtained from the registry server.
token_expired_at: ArcSwapOption<u64>,
// Cache bearer auth for refreshing token.
cached_bearer_auth: ArcSwapOption<BearerAuth>,
}
@ -232,7 +237,7 @@ impl RegistryState {
}
/// Request registry authentication server to get bearer token
fn get_token(&self, auth: BearerAuth, connection: &Arc<Connection>) -> Result<String> {
fn get_token(&self, auth: BearerAuth, connection: &Arc<Connection>) -> Result<TokenResponse> {
// The information needed for getting token needs to be placed both in
// the query and in the body to be compatible with different registry
// implementations, which have been tested on these platforms:
@ -264,7 +269,6 @@ impl RegistryState {
Some(ReqBody::Form(form)),
&mut headers,
true,
true,
)
.map_err(|e| einval!(format!("registry auth server request failed {:?}", e)))?;
let ret: TokenResponse = token_resp.json().map_err(|e| {
@ -274,7 +278,7 @@ impl RegistryState {
))
})?;
if let Ok(now_timestamp) = SystemTime::now().duration_since(UNIX_EPOCH) {
self.refresh_token_time
self.token_expired_at
.store(Some(Arc::new(now_timestamp.as_secs() + ret.expires_in)));
debug!(
"cached bearer auth, next time: {}",
@ -285,7 +289,7 @@ impl RegistryState {
// Cache bearer auth for refreshing token.
self.cached_bearer_auth.store(Some(Arc::new(auth)));
Ok(ret.token)
Ok(ret)
}
fn get_auth_header(&self, auth: Auth, connection: &Arc<Connection>) -> Result<String> {
@ -297,7 +301,7 @@ impl RegistryState {
.ok_or_else(|| einval!("invalid auth config")),
Auth::Bearer(auth) => {
let token = self.get_token(auth, connection)?;
Ok(format!("Bearer {}", token))
Ok(format!("Bearer {}", token.token))
}
}
}
@ -361,11 +365,75 @@ impl RegistryState {
}
}
#[derive(Clone)]
struct First {
inner: Arc<ArcSwap<Once>>,
}
impl First {
fn new() -> Self {
First {
inner: Arc::new(ArcSwap::new(Arc::new(Once::new()))),
}
}
fn once<F>(&self, f: F)
where
F: FnOnce(),
{
self.inner.load().call_once(f)
}
fn renew(&self) {
self.inner.store(Arc::new(Once::new()));
}
fn handle<F, T>(&self, handle: &mut F) -> Option<BackendResult<T>>
where
F: FnMut() -> BackendResult<T>,
{
let mut ret = None;
// Try `once()` up to twice: if the first in-flight call fails and `renew()` swaps in
// a fresh Once instance, the second pass lets this caller run the handler on it.
for _ in 0..=1 {
self.once(|| {
ret = Some(handle().map_err(|err| {
// Replace the Once instance so that we can retry it when
// the handle call failed.
self.renew();
err
}));
});
if ret.is_some() {
break;
}
}
ret
}
/// When invoked concurrently, only one caller executes its handler first; once it
/// completes, the remaining handlers are allowed to execute concurrently.
///
/// Nydusd uses a registry backend which generates a surge of blob requests without
/// auth tokens on initial startup, which caused mirror backends (e.g. dragonfly)
/// to process very slowly. This method waits for the first blob request to complete
/// before making other blob requests; that first request caches a valid registry
/// auth token, which subsequent concurrent blob requests can then reuse.
fn handle_force<F, T>(&self, handle: &mut F) -> BackendResult<T>
where
F: FnMut() -> BackendResult<T>,
{
self.handle(handle).unwrap_or_else(handle)
}
}
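// Illustrative usage (editor sketch; `fetch_blob_size()` is a hypothetical helper, not
// part of this change): each public read path wraps its backend call like
//
//     self.first.handle_force(&mut || -> BackendResult<u64> { fetch_blob_size() })
//
// so that under concurrency only the first caller hits the registry without a cached
// token; once it succeeds and the token is cached, the remaining callers proceed in
// parallel, while a failure triggers `renew()` so a later caller retries the first step.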
struct RegistryReader {
blob_id: String,
connection: Arc<Connection>,
state: Arc<RegistryState>,
metrics: Arc<BackendMetrics>,
first: First,
}
impl RegistryReader {
@ -419,22 +487,14 @@ impl RegistryReader {
if let Some(data) = data {
return self
.connection
.call(
method,
url,
None,
Some(data),
&mut headers,
catch_status,
false,
)
.call(method, url, None, Some(data), &mut headers, catch_status)
.map_err(RegistryError::Request);
}
// Try to request registry server with `authorization` header
let mut resp = self
.connection
.call::<&[u8]>(method.clone(), url, None, None, &mut headers, false, false)
.call::<&[u8]>(method.clone(), url, None, None, &mut headers, false)
.map_err(RegistryError::Request)?;
if resp.status() == StatusCode::UNAUTHORIZED {
if headers.contains_key(HEADER_AUTHORIZATION) {
@ -449,7 +509,7 @@ impl RegistryReader {
resp = self
.connection
.call::<&[u8]>(method.clone(), url, None, None, &mut headers, false, false)
.call::<&[u8]>(method.clone(), url, None, None, &mut headers, false)
.map_err(RegistryError::Request)?;
};
@ -469,7 +529,7 @@ impl RegistryReader {
// Try to request registry server with `authorization` header again
let resp = self
.connection
.call(method, url, None, data, &mut headers, catch_status, false)
.call(method, url, None, data, &mut headers, catch_status)
.map_err(RegistryError::Request)?;
let status = resp.status();
@ -525,7 +585,6 @@ impl RegistryReader {
None,
&mut headers,
false,
false,
)
.map_err(RegistryError::Request)?;
@ -610,7 +669,6 @@ impl RegistryReader {
None,
&mut headers,
true,
false,
)
.map_err(RegistryError::Request);
match resp_ret {
@ -638,14 +696,20 @@ impl RegistryReader {
impl BlobReader for RegistryReader {
fn blob_size(&self) -> BackendResult<u64> {
let url = format!("/blobs/sha256:{}", self.blob_id);
let url = self
.state
.url(&url, &[])
.map_err(|e| RegistryError::Url(url, e))?;
self.first.handle_force(&mut || -> BackendResult<u64> {
let url = format!("/blobs/sha256:{}", self.blob_id);
let url = self
.state
.url(&url, &[])
.map_err(|e| RegistryError::Url(url, e))?;
let resp =
match self.request::<&[u8]>(Method::HEAD, url.as_str(), None, HeaderMap::new(), true) {
let resp = match self.request::<&[u8]>(
Method::HEAD,
url.as_str(),
None,
HeaderMap::new(),
true,
) {
Ok(res) => res,
Err(RegistryError::Request(ConnectionError::Common(e)))
if self.state.needs_fallback_http(&e) =>
@ -662,21 +726,26 @@ impl BlobReader for RegistryReader {
return Err(BackendError::Registry(e));
}
};
let content_length = resp
.headers()
.get(CONTENT_LENGTH)
.ok_or_else(|| RegistryError::Common("invalid content length".to_string()))?;
let content_length = resp
.headers()
.get(CONTENT_LENGTH)
.ok_or_else(|| RegistryError::Common("invalid content length".to_string()))?;
Ok(content_length
.to_str()
.map_err(|err| RegistryError::Common(format!("invalid content length: {:?}", err)))?
.parse::<u64>()
.map_err(|err| RegistryError::Common(format!("invalid content length: {:?}", err)))?)
Ok(content_length
.to_str()
.map_err(|err| RegistryError::Common(format!("invalid content length: {:?}", err)))?
.parse::<u64>()
.map_err(|err| {
RegistryError::Common(format!("invalid content length: {:?}", err))
})?)
})
}
fn try_read(&self, buf: &mut [u8], offset: u64) -> BackendResult<usize> {
self._try_read(buf, offset, true)
.map_err(BackendError::Registry)
self.first.handle_force(&mut || -> BackendResult<usize> {
self._try_read(buf, offset, true)
.map_err(BackendError::Registry)
})
}
fn metrics(&self) -> &BackendMetrics {
@ -693,6 +762,7 @@ pub struct Registry {
connection: Arc<Connection>,
state: Arc<RegistryState>,
metrics: Arc<BackendMetrics>,
first: First,
}
impl Registry {
@ -738,37 +808,33 @@ impl Registry {
blob_url_scheme: config.blob_url_scheme.clone(),
blob_redirected_host: config.blob_redirected_host.clone(),
cached_redirect: HashCache::new(),
refresh_token_time: ArcSwapOption::new(None),
token_expired_at: ArcSwapOption::new(None),
cached_bearer_auth: ArcSwapOption::new(None),
});
let mirrors = connection.mirrors.clone();
let registry = Registry {
connection,
state,
metrics: BackendMetrics::new(id, "registry"),
first: First::new(),
};
for mirror in mirrors.iter() {
if !mirror.config.auth_through {
registry.start_refresh_token_thread();
info!("Refresh token thread started.");
break;
}
}
registry.start_refresh_token_thread();
info!("Refresh token thread started.");
Ok(registry)
}
fn get_authorization_info(auth: &Option<String>) -> Result<(String, String)> {
if let Some(auth) = &auth {
let auth: Vec<u8> = base64::decode(auth.as_bytes()).map_err(|e| {
einval!(format!(
"Invalid base64 encoded registry auth config: {:?}",
e
))
})?;
let auth: Vec<u8> = base64::engine::general_purpose::STANDARD
.decode(auth.as_bytes())
.map_err(|e| {
einval!(format!(
"Invalid base64 encoded registry auth config: {:?}",
e
))
})?;
let auth = std::str::from_utf8(&auth).map_err(|e| {
einval!(format!(
"Invalid utf-8 encoded registry auth config: {:?}",
@ -789,30 +855,39 @@ impl Registry {
fn start_refresh_token_thread(&self) {
let conn = self.connection.clone();
let state = self.state.clone();
// The default refresh token internal is 10 minutes.
let refresh_check_internal = 10 * 60;
// FIXME: we'd better allow users to specify the expiration time.
let mut refresh_interval = REGISTRY_DEFAULT_TOKEN_EXPIRATION;
thread::spawn(move || {
loop {
if let Ok(now_timestamp) = SystemTime::now().duration_since(UNIX_EPOCH) {
if let Some(next_refresh_timestamp) = state.refresh_token_time.load().as_deref()
{
// If the token will expire in next refresh check internal, get new token now.
// Add 20 seconds to handle critical cases.
if now_timestamp.as_secs() + refresh_check_internal + 20
>= *next_refresh_timestamp
{
if let Some(token_expired_at) = state.token_expired_at.load().as_deref() {
// If the token will expire within the next refresh interval,
// refresh it immediately.
if now_timestamp.as_secs() + refresh_interval >= *token_expired_at {
if let Some(cached_bearer_auth) =
state.cached_bearer_auth.load().as_deref()
{
if let Ok(token) =
state.get_token(cached_bearer_auth.to_owned(), &conn)
{
let new_cached_auth = format!("Bearer {}", token);
info!("Authorization token for registry has been refreshed.");
// Refresh authorization token
let new_cached_auth = format!("Bearer {}", token.token);
debug!(
"[refresh_token_thread] registry token has been refreshed"
);
// Refresh cached token.
state
.cached_auth
.set(&state.cached_auth.get(), new_cached_auth);
// Reset the refresh interval according to the real expiration time,
// subtracting 20s so the token is refreshed slightly before it expires.
refresh_interval = token
.expires_in
.checked_sub(20)
.unwrap_or(token.expires_in);
} else {
error!(
"[refresh_token_thread] failed to refresh registry token"
);
}
}
}
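// Worked example (editor note): if the registry reports expires_in = 300, the next
// sleep becomes 300 - 20 = 280 seconds, refreshing the token about 20 seconds before
// it expires; until a token response is cached, the thread keeps the default
// 600-second (REGISTRY_DEFAULT_TOKEN_EXPIRATION) interval.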
@ -822,7 +897,7 @@ impl Registry {
if conn.shutdown.load(Ordering::Acquire) {
break;
}
thread::sleep(Duration::from_secs(refresh_check_internal));
thread::sleep(Duration::from_secs(refresh_interval));
if conn.shutdown.load(Ordering::Acquire) {
break;
}
@ -846,6 +921,7 @@ impl BlobBackend for Registry {
state: self.state.clone(),
connection: self.connection.clone(),
metrics: self.metrics.clone(),
first: self.first.clone(),
}))
}
}
@ -914,7 +990,7 @@ mod tests {
blob_redirected_host: "oss.alibaba-inc.com".to_string(),
cached_auth: Default::default(),
cached_redirect: Default::default(),
refresh_token_time: ArcSwapOption::new(None),
token_expired_at: ArcSwapOption::new(None),
cached_bearer_auth: ArcSwapOption::new(None),
};
@ -966,4 +1042,60 @@ mod tests {
assert_eq!(trim(Some(" te st ".to_owned())), Some("te st".to_owned()));
assert_eq!(trim(Some("te st".to_owned())), Some("te st".to_owned()));
}
#[test]
#[allow(clippy::redundant_clone)]
fn test_first_basically() {
let first = First::new();
let mut val = 0;
first.once(|| {
val += 1;
});
assert_eq!(val, 1);
first.clone().once(|| {
val += 1;
});
assert_eq!(val, 1);
first.renew();
first.clone().once(|| {
val += 1;
});
assert_eq!(val, 2);
}
#[test]
#[allow(clippy::redundant_clone)]
fn test_first_concurrently() {
let val = Arc::new(ArcSwap::new(Arc::new(0)));
let first = First::new();
let mut handlers = Vec::new();
for _ in 0..100 {
let val_cloned = val.clone();
let first_cloned = first.clone();
handlers.push(std::thread::spawn(move || {
let _ = first_cloned.handle(&mut || -> BackendResult<()> {
let val = val_cloned.load();
let ret = if *val.as_ref() == 0 {
std::thread::sleep(std::time::Duration::from_secs(2));
Err(BackendError::Registry(RegistryError::Common(String::from(
"network error",
))))
} else {
Ok(())
};
val_cloned.store(Arc::new(val.as_ref() + 1));
ret
});
}));
}
for handler in handlers {
handler.join().unwrap();
}
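// Expected count explained: the first thread to enter the Once fails after bumping
// val to 1 (renewing the Once), exactly one thread then runs the closure on the new
// Once and bumps val to 2, and every other thread observes a completed Once and never
// executes its closure.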
assert_eq!(*val.load().as_ref(), 2);
}
}

View File

@ -184,7 +184,7 @@ impl AsyncWorkerMgr {
}
/// Consume network bandwidth budget for prefetching.
pub fn consume_prefetch_budget(&self, size: u32) {
pub fn consume_prefetch_budget(&self, size: u64) {
if self.prefetch_inflight.load(Ordering::Relaxed) > 0 {
self.prefetch_consumed
.fetch_add(size as usize, Ordering::AcqRel);

View File

@ -698,7 +698,7 @@ pub struct BlobIoVec {
/// The blob associated with the IO operation.
bi_blob: Arc<BlobInfo>,
/// Total size of blob IOs to be performed.
bi_size: u32,
bi_size: u64,
/// Array of blob IOs, these IOs should executed sequentially.
pub(crate) bi_vec: Vec<BlobIoDesc>,
}
@ -717,14 +717,13 @@ impl BlobIoVec {
pub fn push(&mut self, desc: BlobIoDesc) {
assert_eq!(self.bi_blob.blob_index(), desc.blob.blob_index());
assert_eq!(self.bi_blob.blob_id(), desc.blob.blob_id());
assert!(self.bi_size.checked_add(desc.size).is_some());
self.bi_size += desc.size;
assert!(self.bi_size.checked_add(desc.size as u64).is_some());
self.bi_size += desc.size as u64;
self.bi_vec.push(desc);
}
/// Append another blob io vector to current one.
pub fn append(&mut self, mut vec: BlobIoVec) {
assert_eq!(self.bi_blob.blob_index(), vec.bi_blob.blob_index());
assert_eq!(self.bi_blob.blob_id(), vec.bi_blob.blob_id());
assert!(self.bi_size.checked_add(vec.bi_size).is_some());
self.bi_vec.append(vec.bi_vec.as_mut());
@ -748,7 +747,7 @@ impl BlobIoVec {
}
/// Get size of pending IO data.
pub fn size(&self) -> u32 {
pub fn size(&self) -> u64 {
self.bi_size
}
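// Editor note: widening `bi_size` to u64 matters once a single blob's pending IO
// exceeds u32::MAX bytes; e.g. the 8 GiB blob (0x2_0000_0000 bytes) in the test below
// could not be represented by the old u32 counter (max ~4 GiB).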
@ -1436,4 +1435,105 @@ mod tests {
assert!(desc2.is_continuous(&desc3, 0x800));
assert!(desc2.is_continuous(&desc3, 0x1000));
}
#[test]
fn test_append_same_blob_with_diff_index() {
let blob1 = Arc::new(BlobInfo::new(
1,
"test1".to_owned(),
0x200000,
0x100000,
0x100000,
512,
BlobFeatures::_V5_NO_EXT_BLOB_TABLE,
));
let chunk1 = Arc::new(MockChunkInfo {
block_id: Default::default(),
blob_index: 1,
flags: BlobChunkFlags::empty(),
compress_size: 0x800,
uncompress_size: 0x1000,
compress_offset: 0,
uncompress_offset: 0,
file_offset: 0,
index: 0,
reserved: 0,
}) as Arc<dyn BlobChunkInfo>;
let mut iovec = BlobIoVec::new(blob1.clone());
iovec.push(BlobIoDesc::new(blob1, BlobIoChunk(chunk1), 0, 0x1000, true));
let blob2 = Arc::new(BlobInfo::new(
2, // different index
"test1".to_owned(), // same id
0x200000,
0x100000,
0x100000,
512,
BlobFeatures::_V5_NO_EXT_BLOB_TABLE,
));
let chunk2 = Arc::new(MockChunkInfo {
block_id: Default::default(),
blob_index: 2,
flags: BlobChunkFlags::empty(),
compress_size: 0x800,
uncompress_size: 0x1000,
compress_offset: 0x800,
uncompress_offset: 0x1000,
file_offset: 0x1000,
index: 1,
reserved: 0,
}) as Arc<dyn BlobChunkInfo>;
let mut iovec2 = BlobIoVec::new(blob2.clone());
iovec2.push(BlobIoDesc::new(blob2, BlobIoChunk(chunk2), 0, 0x1000, true));
iovec.append(iovec2);
assert_eq!(0x2000, iovec.bi_size);
}
#[test]
fn test_extend_large_blob_io_vec() {
let size = 0x2_0000_0000; // 8G blob
let chunk_size = 0x10_0000; // 1M chunk
let chunk_count = (size / chunk_size as u64) as u32;
let large_blob = Arc::new(BlobInfo::new(
0,
"blob_id".to_owned(),
size,
size,
chunk_size,
chunk_count,
BlobFeatures::default(),
));
let mut iovec = BlobIoVec::new(large_blob.clone());
let mut iovec2 = BlobIoVec::new(large_blob.clone());
// Extend half of blob
for chunk_idx in 0..chunk_count {
let chunk = Arc::new(MockChunkInfo {
block_id: Default::default(),
blob_index: large_blob.blob_index,
flags: BlobChunkFlags::empty(),
compress_size: chunk_size,
compress_offset: chunk_idx as u64 * chunk_size as u64,
uncompress_size: 2 * chunk_size,
uncompress_offset: 2 * chunk_idx as u64 * chunk_size as u64,
file_offset: 2 * chunk_idx as u64 * chunk_size as u64,
index: chunk_idx as u32,
reserved: 0,
}) as Arc<dyn BlobChunkInfo>;
let desc = BlobIoDesc::new(large_blob.clone(), BlobIoChunk(chunk), 0, chunk_size, true);
if chunk_idx < chunk_count / 2 {
iovec.push(desc);
} else {
iovec2.push(desc)
}
}
// Extend other half of blob
iovec.append(iovec2);
assert_eq!(size, iovec.size());
assert_eq!(chunk_count, iovec.len() as u32);
}
}

View File

@ -405,9 +405,10 @@ impl BlobCompressionContextInfo {
if let Some(reader) = reader {
let buffer =
unsafe { std::slice::from_raw_parts_mut(base as *mut u8, expected_size) };
buffer[0..].fill(0);
Self::read_metadata(blob_info, reader, buffer)?;
Self::validate_header(blob_info, header)?;
if !Self::validate_header(blob_info, header)? {
return Err(enoent!(format!("double check blob_info still invalid",)));
}
filemap.sync_data()?;
} else {
return Err(enoent!(format!(
@ -751,7 +752,6 @@ impl BlobCompressionContextInfo {
if u32::from_le(header.s_magic) != BLOB_CCT_MAGIC
|| u32::from_le(header.s_magic2) != BLOB_CCT_MAGIC
|| u32::from_le(header.s_ci_entries) != blob_info.chunk_count()
|| u32::from_le(header.s_features) != blob_info.features().bits()
|| u32::from_le(header.s_ci_compressor) != blob_info.meta_ci_compressor() as u32
|| u64::from_le(header.s_ci_offset) != blob_info.meta_ci_offset()
|| u64::from_le(header.s_ci_compressed_size) != blob_info.meta_ci_compressed_size()

View File

@ -6,10 +6,10 @@ endif
ci:
bash -f ./install_bats.sh
bats --show-output-of-passing-tests --formatter tap build_docker_image.bats
bats --show-output-of-passing-tests --formatter tap compile_nydusd.bats
bats --show-output-of-passing-tests --formatter tap compile_ctr_remote.bats
bats --show-output-of-passing-tests --formatter tap compile_nydus_snapshotter.bats
bats --show-output-of-passing-tests --formatter tap run_container_with_rafs.bats
bats --show-output-of-passing-tests --formatter tap run_container_with_zran.bats
bats --show-output-of-passing-tests --formatter tap run_container_with_rafs_and_compile_linux.bats
bats --formatter tap build_docker_image.bats
bats --formatter tap compile_nydusd.bats
bats --formatter tap compile_ctr_remote.bats
bats --formatter tap compile_nydus_snapshotter.bats
bats --formatter tap run_container_with_rafs.bats
bats --formatter tap run_container_with_zran.bats
bats --formatter tap run_container_with_rafs_and_compile_linux.bats

View File

@ -47,4 +47,5 @@ version = 2
disable_snapshot_annotations = false
EOF
systemctl restart containerd
sleep 3
}

View File

@ -9,7 +9,7 @@ setup() {
}
@test "compile nydus snapshotter" {
docker run --rm -v /tmp/nydus-snapshotter:/nydus-snapshotter $compile_image bash -c 'cd /nydus-snapshotter && make clear && make'
docker run --rm -v /tmp/nydus-snapshotter:/nydus-snapshotter $compile_image bash -c 'cd /nydus-snapshotter && make clean && make'
if [ -f "/tmp/nydus-snapshotter/bin/containerd-nydus-grpc" ]; then
/usr/bin/cp -f /tmp/nydus-snapshotter/bin/containerd-nydus-grpc /usr/local/bin/
echo "nydus-snapshotter version"

View File

@ -1,7 +1,7 @@
load "${BATS_TEST_DIRNAME}/common_tests.sh"
setup() {
nydus_rafs_image="nydus-anolis-registry.cn-hangzhou.cr.aliyuncs.com/nydus_test/bldlinux:v0.1-rafs-v6-lz4"
nydus_rafs_image="ghcr.io/dragonflyoss/image-service/bldlinux:v0.1-rafs-v6-lz4"
run_nydus_snapshotter
config_containerd_for_nydus
ctr images ls | grep -q "${nydus_rafs_image}" && ctr images rm $nydus_rafs_image

View File

@ -15,6 +15,7 @@ libc = "0.2"
log = "0.4"
lz4-sys = "1.9.4"
lz4 = "1.24.0"
openssl = { version = "0.10.55", features = ["vendored"], optional = true }
serde = { version = ">=1.0.27", features = ["serde_derive", "rc"] }
serde_json = ">=1.0.9"
sha2 = "0.10.0"
@ -33,8 +34,8 @@ libz-sys = { version = "1.1.8", features = ["zlib-ng"], default-features = false
flate2 = { version = "1.0.17", features = ["zlib-ng-compat"], default-features = false }
[dev-dependencies]
vmm-sys-util = "0.10.0"
tar = "0.4.38"
vmm-sys-util = "0.11.0"
tar = "0.4.40"
[features]
zran = ["libz-sys"]