Compare commits

...

35 Commits

Author SHA1 Message Date
Changwei Ge 0233c42d45 cargo: bump package version to 1.1.2
Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 15:40:39 +08:00
Peng Tao bb6236ec34
Merge pull request #278 from changweige/impr-nydusify
Improve chunk-dict processing of nydusify
2022-01-18 14:26:23 +08:00
Peng Tao e4295f4119
Merge pull request #277 from changweige/fix-continuity-check
blobcache: check chunks continuity by their compressed size
2022-01-18 14:25:48 +08:00
Peng Tao 9fc27c6019
Merge pull request #276 from changweige/impr-image-inspect
Slightly improve nydus-image inspect
2022-01-18 14:25:19 +08:00
Peng Tao c10030142e
Merge pull request #275 from changweige/rename-metrics
metrics: rename metric read_latency_hits_dist
2022-01-18 14:24:41 +08:00
Peng Tao 1bd83773f3
Merge pull request #274 from changweige/fix-root-permission
rafs: fix up access API root mode
2022-01-18 14:23:51 +08:00
henry.hj 9a2515cc91 nydusify: update examples for chunk-dict
Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
henry.hj 01d202d740 nydusify: add reference blob layers to manifests
Only for the registry backend:
We include a new type of cache layer:
    blob layers without a SourceChainID which are referenced from the chunk dict

For example:
Chunk-dict1 layers:
    c-layer1 -- c-layer2
Original layers:
    blob-layer1 -- blob-layer2 -- blob-layer3 -- bootstrap
With chunk-dict:
    c-layer1 -- c-layer2 -- blob-layer1' -- blob-layer2' -- blob-layer3'
    -- bootstrap

Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
henry.hj af9d8cb881 nydusify e2e smoke: add chunk-dict testcases
Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
henry.hj 57219e799e nydusify: fix wrong blobs list on manifests
Problem:
    After we change the chunk-dict, we lose the old chunk-dict info on
manifests, even though it is still used by the bootstrap.

Cause:
    When build-cache is enabled, we use parent bootstraps from the remote
cache that were built with the old chunk-dict. But we only combine the new
chunk-dict blobs and layer blobs into the final blobs-list that is set in
the manifest annotations. Unfortunately, we lose the old chunk-dict blobs
which are still referenced by parent layers.

Solution:
    Record reference blobs on build-cache records with the key
"containerd.io/snapshot/nydus-reference-blob-ids".
    For the final blobs-list, append each layer's blobs and its referenced blobs together.

Note:
    We should use a new build-cache version to clear old build-cache records
when we first use a nydusify version built from this patch.

TODO:
    Make the build-cache aware of the chunk-dict version. Automatically
invalidate the build-cache if the chunk-dict changes.

Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
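The merge rule described in the Solution above can be sketched as follows. This is a minimal Rust illustration with assumed types (blob IDs as plain strings), not the actual nydusify code, which is in Go (see the appendBlobs helper in the diff further down):

```rust
// Minimal sketch of the final blobs-list assembly described above (assumed
// types). Each layer contributes both its own blobs and the chunk-dict blobs
// it references, so old chunk-dict blobs used by parent layers are kept.

struct BuildLayer {
    own_blobs: Vec<String>,
    reference_blobs: Vec<String>,
}

/// Append `new_blobs` to `all`, skipping IDs that are already present.
fn append_unique(all: &mut Vec<String>, new_blobs: &[String]) {
    for blob in new_blobs {
        if !all.iter().any(|b| b == blob) {
            all.push(blob.clone());
        }
    }
}

/// Build the deduplicated blob list that ends up in the manifest annotation
/// "containerd.io/snapshot/nydus-blob-ids".
fn final_blob_list(layers: &[BuildLayer]) -> Vec<String> {
    let mut all = Vec::new();
    for layer in layers {
        append_unique(&mut all, &layer.reference_blobs);
        append_unique(&mut all, &layer.own_blobs);
    }
    all
}

fn main() {
    let layers = vec![BuildLayer {
        own_blobs: vec!["blob-layer1".into()],
        reference_blobs: vec!["c-layer1".into(), "c-layer2".into()],
    }];
    println!("{:?}", final_blob_list(&layers));
}
```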
Changwei Ge 81e86f13d7 blobcache: check chunks continuity by their compressed size
When a nydus image is built with the --chunk-aligned option, the rule
decompressed_offset + decompressed_size == next_decompressed_offset no longer
holds. So we check chunk continuity by their compressed offsets and sizes instead.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 11:03:39 +08:00
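A minimal Rust sketch of the changed continuity rule, with assumed field names; the real check lives in the blobcache code:

```rust
// Sketch of the continuity rule change (field names are assumptions).
// For --chunk-aligned images the decompressed layout may contain alignment
// padding, so adjacency is decided by the compressed layout instead.

struct ChunkInfo {
    compressed_offset: u64,
    compressed_size: u64,
    decompressed_offset: u64,
    decompressed_size: u64,
}

/// Old rule: holds only when there is no padding between decompressed chunks.
fn continuous_decompressed(prev: &ChunkInfo, next: &ChunkInfo) -> bool {
    prev.decompressed_offset + prev.decompressed_size == next.decompressed_offset
}

/// New rule: two chunks are treated as continuous only if they are
/// back-to-back in the compressed blob.
fn continuous_compressed(prev: &ChunkInfo, next: &ChunkInfo) -> bool {
    prev.compressed_offset + prev.compressed_size == next.compressed_offset
}

fn main() {
    // Aligned image: decompressed regions are padded apart,
    // while the compressed regions stay contiguous.
    let a = ChunkInfo { compressed_offset: 0, compressed_size: 100, decompressed_offset: 0, decompressed_size: 4096 };
    let b = ChunkInfo { compressed_offset: 100, compressed_size: 80, decompressed_offset: 8192, decompressed_size: 4096 };
    assert!(!continuous_decompressed(&a, &b));
    assert!(continuous_compressed(&a, &b));
}
```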
Changwei Ge 9ca6cc5486 nydus-image/inspect: trim white spaces before parsing
Otherwise, string parsing may fail, resulting in an inspector error.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 10:48:03 +08:00
Changwei Ge 55e3c0da85 nydus-image/inspect: print sizes info of chunks
The `chunk` subcommand now prints the compressed and decompressed sizes of
chunks, which helps analyze the rafs layout.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 10:46:32 +08:00
Changwei Ge ef5f362e10 metrics: rename metric read_latency_hits_dist
This makes the metric name more descriptive.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 10:44:14 +08:00
Peng Tao 2c24d49b9b rafs: fix up access API root mode
Make sure the root inode mode is always 0755.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-01-18 10:42:02 +08:00
imeoer 250aad442a
Merge pull request #247 from changweige/pick-stable-tokio-threads
cache: set the number of worker threads of tokio threads pool
2021-12-28 10:01:15 +08:00
Changwei Ge 952daab44a cache: set the number of worker threads of tokio threads pool
It previously used the default runtime builder, which creates a thread for
each CPU core. Nydusd on a server equipped with many CPU sockets and cores
would start many threads, most of which stay idle.

In addition, use `spawn_blocking` instead, which is more appropriate for
the blobcache scenario.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-12-27 15:14:25 +08:00
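A minimal sketch of the approach, assuming tokio 1.x; the worker-thread count and thread name below are illustrative, not the values chosen by this commit:

```rust
// Sketch: build a multi-thread tokio runtime with a bounded worker pool
// instead of the default one-thread-per-CPU-core, and run blocking blobcache
// I/O via spawn_blocking.
use tokio::runtime::{Builder, Runtime};

fn build_runtime() -> Runtime {
    Builder::new_multi_thread()
        .worker_threads(4) // bounded pool instead of one thread per core (illustrative value)
        .thread_name("blobcache-worker")
        .enable_all()
        .build()
        .expect("failed to build tokio runtime")
}

fn main() {
    let rt = build_runtime();
    // Blocking work (e.g. reading cached chunks from disk) goes to the
    // dedicated blocking pool rather than tying up async workers.
    let handle = rt.spawn_blocking(|| {
        // ... read and decompress a chunk from the local cache file ...
        42u32
    });
    let value = rt.block_on(handle).expect("blocking task panicked");
    assert_eq!(value, 42);
}
```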
imeoer 560878e373
Merge pull request #228 from changweige/release-upstream-v1.1.1
cargo: bump package version to 1.1.1
2021-11-26 13:58:53 +08:00
Changwei Ge 393f1f2611 cargo: bump package version to 1.1.1
Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-26 11:49:07 +08:00
Peng Tao 484a8cbd28
Merge pull request #227 from changweige/add-ci
action/ci: add stable-1.x branch to CI target branches list
2021-11-26 11:10:48 +08:00
Changwei Ge a4966b3ccf action/ci: add stable-1.x branch to CI target branches list
Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-26 09:52:17 +08:00
Changwei Ge 45e44d8e43
Merge pull request #225 from bergwolf/upstream/update-1.x
backport master commits for 1.1.1 release
2021-11-26 09:31:20 +08:00
dependabot[bot] a749894b43 build(deps): bump github.com/containerd/containerd
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.5.7 to 1.5.8.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.5.7...v1.5.8)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
dependabot[bot] c0962858eb build(deps): bump github.com/containerd/containerd in /contrib/nydusify
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.4.11 to 1.4.12.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.4.11...v1.4.12)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
dependabot[bot] 5ad4e0def1 build(deps): bump github.com/containerd/containerd
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.4.11 to 1.4.12.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.4.11...v1.4.12)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
dependabot[bot] 5e2f549e49 build(deps): bump github.com/opencontainers/image-spec
Bumps [github.com/opencontainers/image-spec](https://github.com/opencontainers/image-spec) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/opencontainers/image-spec/releases)
- [Changelog](https://github.com/opencontainers/image-spec/blob/main/RELEASES.md)
- [Commits](https://github.com/opencontainers/image-spec/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/image-spec
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
Peng Tao 9ca8e82300 release: include an example nydusd config in the release tarball
So that users can use it directly in normal use cases.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:39:55 +08:00
Peng Tao 1b85064186 release: package all static binaries when tagging releases
Right now we have a few binaries and we should pack them all in the
release tarball.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:34:08 +08:00
Peng Tao 5109590561 makefile: static-release is missing virtiofs target
We should build both fusedev and virtiofs targets for static releases.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:33:54 +08:00
Peng Tao ebb495e272 cargo: use event-manager from crates.io
Instead of getting it from GitHub.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:33:49 +08:00
Peng Tao dacc27446e vendor: update fuse-backend-rs dependency
To get the latest features and improvements.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:33:36 +08:00
Changwei Ge d12c480213 snapshotter: don't touch original nydusd auth if no auth in labels
Nydusd must pull data from the registry with auth if the repo is private.
The snapshotter only fetches auth from labels, and if there is no auth in
the labels, it replaces the original auth with an empty string.

This causes nydusd to lose the auth needed to access the registry.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-25 15:33:29 +08:00
Changwei Ge 887267ceb4 makefile: an option to build golang components without docker
Currently, golang components like the snapshotter, ctr-remote and nydusify are
built within containers which bind-mount the host GOPATH. When the user inside
the container is root, files in GOPATH end up owned by root, so other users
working on the same host can no longer access those files in GOPATH.

In addition, the current golang build causes dind (docker-in-docker) if a
customer's build system itself runs on top of containers.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-25 15:33:19 +08:00
Peng Tao e9e81752b0
Merge pull request #219 from dragonflyoss/prepare-v1.1.1
snapshotter: don't touch original nydusd auth if no auth in labels
2021-11-24 17:33:50 +08:00
Changwei Ge 2421b06840 snapshotter: don't touch original nydusd auth if no auth in labels
Nydusd must pull data from the registry with auth if the repo is private.
The snapshotter only fetches auth from labels, and if there is no auth in
the labels, it replaces the original auth with an empty string.

This causes nydusd to lose the auth needed to access the registry.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-23 17:22:23 +08:00
50 changed files with 812 additions and 641 deletions

View File

@ -4,7 +4,7 @@ on:
push:
branches: ["*"]
pull_request:
branches: [master]
branches: [master, stable-1.x]
env:
CARGO_TERM_COLOR: always

View File

@ -27,17 +27,23 @@ jobs:
- name: Build nydus-rs
run: |
make docker-static
sudo mv target-fusedev/x86_64-unknown-linux-musl/release/nydusd .
sudo mv target-fusedev/x86_64-unknown-linux-musl/release/nydusd nydusd-fusedev
sudo mv target-fusedev/x86_64-unknown-linux-musl/release/nydus-image .
sudo mv target-fusedev/x86_64-unknown-linux-musl/release/nydusctl .
sudo mv target-virtiofs/x86_64-unknown-linux-musl/release/nydusd nydusd-virtiofs
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) .
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: nydus-artifacts
path: |
nydusd
nydusd-fusedev
nydusd-virtiofs
nydus-image
build-nydusify-nydus-snapshotter:
nydusctl
configs
build-contrib:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
@ -45,27 +51,30 @@ jobs:
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydus-snapshotter/go.sum', '**/contrib/nydusify/go.sum') }}
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydus-snapshotter/go.sum', '**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/docker-nydus-graphdriver/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
- name: build nydusify
- name: build contrib go components
run: |
make nydusify-static
- name: build nydus-snapshotter
run: |
make nydus-snapshotter-static
make all-contrib-static-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/docker-nydus-graphdriver/bin/nydus_graphdriver .
sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
sudo mv contrib/nydus-snapshotter/bin/containerd-nydus-grpc .
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: nydus-artifacts
path: |
ctr-remote
nydus_graphdriver
nydusify
nydus-overlayfs
containerd-nydus-grpc
upload-artifacts:
runs-on: ubuntu-latest
needs: [build-nydus-rs, build-nydusify-nydus-snapshotter]
needs: [build-nydus-rs, build-contrib]
steps:
- uses: actions/checkout@v2
- name: install hub

Cargo.lock generated (176 changed lines)
View File

@ -51,6 +51,12 @@ version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b585a98a234c46fc563103e9278c9391fde1f4e6850334da895d27edb9580f62"
[[package]]
name = "arc-swap"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c5d78ce20460b82d3fa150275ed9d55e21064fc7951177baacf86a145c4a4b1f"
[[package]]
name = "arrayref"
version = "0.3.6"
@ -173,12 +179,6 @@ version = "3.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e8c087f005730276d1096a652e92a8bacee2e2472bcc9715a74d2bec38b5820"
[[package]]
name = "byteorder"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae44d1a3d5a19df61dd0c8beb138458ac2a53a7ac09eba97d55592540004306b"
[[package]]
name = "bytes"
version = "0.5.4"
@ -191,6 +191,17 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c4872d67bab6358e59559027aa3b9157c53d9358c51423c17554809a8858e0f8"
[[package]]
name = "caps"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61bf7211aad104ce2769ec05efcdfabf85ee84ac92461d142f22cf8badd0e54c"
dependencies = [
"errno",
"libc",
"thiserror",
]
[[package]]
name = "cargo-lock"
version = "4.0.1"
@ -385,13 +396,35 @@ dependencies = [
"libc",
]
[[package]]
name = "errno"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f639046355ee4f37944e44f60642c6f3a7efa3cf6b78c78a0d989a8ce6c396a1"
dependencies = [
"errno-dragonfly",
"libc",
"winapi",
]
[[package]]
name = "errno-dragonfly"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa68f1b12764fab894d2755d2518754e71b4fd80ecfb822714a1206c2aab39bf"
dependencies = [
"cc",
"libc",
]
[[package]]
name = "event-manager"
version = "0.2.0"
source = "git+https://github.com/rust-vmm/event-manager.git?tag=v0.2.0#abac92899908e6a23a6927b5c1c9d4d53bc4f59e"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "377fa591135fbe23396a18e2655a6d5481bf7c5823cdfa3cc81b01a229cbe640"
dependencies = [
"libc",
"vmm-sys-util 0.6.0",
"vmm-sys-util",
]
[[package]]
@ -455,18 +488,20 @@ dependencies = [
]
[[package]]
name = "fuse-rs"
version = "0.1.0"
source = "git+https://github.com/cloud-hypervisor/fuse-backend-rs.git?rev=cfd2cca#cfd2ccae69fad616b7cc863dc230076838d9a245"
name = "fuse-backend-rs"
version = "0.1.2"
source = "git+https://github.com/cloud-hypervisor/fuse-backend-rs.git?rev=afc7b69#afc7b69ca485fd00555687d31bb2b59d36a0155b"
dependencies = [
"arc-swap",
"arc-swap 0.4.6",
"bitflags",
"caps",
"libc",
"log",
"nix",
"nix 0.22.2",
"vhost",
"virtio-queue",
"vm-memory",
"vm-virtio",
"vmm-sys-util",
]
[[package]]
@ -936,13 +971,22 @@ version = "2.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3728d817d99e5ac407411fa471ff9800a778d88a24685968b36824eaf4bee400"
[[package]]
name = "memoffset"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "59accc507f1338036a0477ef61afdae33cde60840f4dfe481319ce3ad116ddf9"
dependencies = [
"autocfg",
]
[[package]]
name = "micro_http"
version = "0.2.0"
source = "git+https://github.com/cloud-hypervisor/micro-http.git?branch=master#f4405cb5afb092e86204c26d589e8c9448721bf0"
dependencies = [
"libc",
"vmm-sys-util 0.9.0",
"vmm-sys-util",
]
[[package]]
@ -1023,6 +1067,19 @@ dependencies = [
"void",
]
[[package]]
name = "nix"
version = "0.22.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3bb9a13fa32bc5aeb64150cd3f32d6cf4c748f8f8a417cce5d2eb976a8370ba"
dependencies = [
"bitflags",
"cc",
"cfg-if 1.0.0",
"libc",
"memoffset",
]
[[package]]
name = "no-std-compat"
version = "0.4.1"
@ -1096,7 +1153,7 @@ dependencies = [
"serde_derive",
"serde_json",
"url",
"vmm-sys-util 0.6.0",
"vmm-sys-util",
]
[[package]]
@ -1107,7 +1164,7 @@ dependencies = [
"flexi_logger",
"libc",
"log",
"nix",
"nix 0.17.0",
"nydus-error 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde",
]
@ -1140,7 +1197,7 @@ dependencies = [
[[package]]
name = "nydus-rs"
version = "1.0.0"
version = "1.1.2"
dependencies = [
"anyhow",
"base64",
@ -1150,13 +1207,13 @@ dependencies = [
"epoll",
"event-manager",
"flexi_logger",
"fuse-rs",
"fuse-backend-rs",
"hyper",
"hyperlocal",
"lazy_static",
"libc",
"log",
"nix",
"nix 0.17.0",
"nydus-api",
"nydus-app",
"nydus-error 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1173,9 +1230,11 @@ dependencies = [
"storage",
"tokio",
"vhost",
"vhost_user_backend",
"vhost-user-backend",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vmm-sys-util 0.6.0",
"vmm-sys-util",
"xattr",
]
@ -1184,17 +1243,14 @@ name = "nydus-utils"
version = "0.1.0"
dependencies = [
"blake3 0.3.6",
"epoll",
"fuse-rs",
"fuse-backend-rs",
"lazy_static",
"libc",
"log",
"nix",
"nydus-error 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde",
"serde_json",
"sha2",
"vmm-sys-util 0.6.0",
]
[[package]]
@ -1378,20 +1434,20 @@ name = "rafs"
version = "0.1.0"
dependencies = [
"anyhow",
"arc-swap",
"arc-swap 0.4.6",
"assert_matches",
"base64",
"bitflags",
"blake3 1.0.0",
"flate2",
"fuse-rs",
"fuse-backend-rs",
"futures",
"hmac",
"lazy_static",
"libc",
"log",
"lz4-sys",
"nix",
"nix 0.17.0",
"nydus-error 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"nydus-utils",
"serde",
@ -1403,7 +1459,7 @@ dependencies = [
"storage",
"url",
"vm-memory",
"vmm-sys-util 0.6.0",
"vmm-sys-util",
]
[[package]]
@ -1793,11 +1849,11 @@ name = "storage"
version = "0.5.0"
dependencies = [
"anyhow",
"arc-swap",
"arc-swap 0.4.6",
"base64",
"bitflags",
"flate2",
"fuse-rs",
"fuse-backend-rs",
"futures",
"governor",
"hmac",
@ -1805,7 +1861,7 @@ dependencies = [
"libc",
"log",
"lz4-sys",
"nix",
"nix 0.17.0",
"nydus-error 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"nydus-utils",
"reqwest",
@ -1818,7 +1874,7 @@ dependencies = [
"tokio",
"url",
"vm-memory",
"vmm-sys-util 0.9.0",
"vmm-sys-util",
]
[[package]]
@ -2094,27 +2150,29 @@ checksum = "b5a972e5669d67ba988ce3dc826706fb0a8b01471c088cb0b6110b805cc36aed"
[[package]]
name = "vhost"
version = "0.1.0"
source = "git+https://github.com/cloud-hypervisor/vhost.git?branch=dragonball#422964150a69afd4611708076c60cb34c64a9856"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2d23ddeb452fb4f837526c6298cc8a2f4948e5595b0328a3d61b5eebe51874d7"
dependencies = [
"bitflags",
"libc",
"vmm-sys-util 0.9.0",
"vm-memory",
"vmm-sys-util",
]
[[package]]
name = "vhost_user_backend"
name = "vhost-user-backend"
version = "0.1.0"
source = "git+https://github.com/cloud-hypervisor/vhost-user-backend.git?rev=78757bfa#78757bfad67c84f83dd383a10b1788a137e5478f"
source = "git+https://github.com/rust-vmm/vhost-user-backend?rev=3242b37#3242b37d3258c2072faa47d0d17097f4bc0966c7"
dependencies = [
"epoll",
"libc",
"log",
"vhost",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vm-virtio",
"vmm-sys-util 0.6.0",
"vmm-sys-util",
]
[[package]]
@ -2124,44 +2182,24 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3ff512178285488516ed85f15b5d0113a7cdb89e9e8a760b269ae4f02b84bd6b"
[[package]]
name = "vm-memory"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "45b5b0a6f371f8147143b1adb95edddafc9cb9e40adaf94edb6f93a1d04b0330"
dependencies = [
"libc",
"winapi",
]
[[package]]
name = "vm-virtio"
name = "virtio-queue"
version = "0.1.0"
source = "git+https://github.com/cloud-hypervisor/vm-virtio.git?branch=dragonball#c8cdf7adaf62a368605191803787ab9c28f0f71b"
source = "git+https://github.com/rust-vmm/vm-virtio?rev=6013dd9#6013dd91b2e6eb77ea10c6bdeda8f5eb18de6dda"
dependencies = [
"byteorder",
"libc",
"log",
"vm-memory",
"vmm-sys-util 0.4.0",
"vmm-sys-util",
]
[[package]]
name = "vmm-sys-util"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "048b10a74f061d87dacca196a1964052a7135651641ab8d100aef21e58f33571"
dependencies = [
"libc",
]
[[package]]
name = "vmm-sys-util"
name = "vm-memory"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c40b8c9abb58cec48e0e39dfbe83ff1cfed17d89df28f0fa74f5e88021ae56b"
checksum = "0a8ebcb86ca457f9d6e14cf97009f679952eba42f0113de5db596e514cd0e43b"
dependencies = [
"bitflags",
"arc-swap 1.5.0",
"libc",
"winapi",
]
[[package]]

View File

@ -1,6 +1,6 @@
[package]
name = "nydus-rs"
version = "1.0.0"
version = "1.1.2"
authors = ["The Nydus Developers"]
edition = "2018"
@ -23,7 +23,7 @@ rlimit = "0.3.0"
log = "0.4.8"
epoll = ">=4.0.1"
libc = "0.2"
vmm-sys-util = "0.6.0"
vmm-sys-util = ">=0.8.0"
clap = "2.33"
flexi_logger = { version = "0.17" }
serde = { version = ">=1.0.27", features = ["serde_derive", "rc"] }
@ -36,18 +36,20 @@ nix = "0.17"
anyhow = "1.0.35"
base64 = { version = ">=0.12.0" }
rust-fsm = "0.6.0"
vm-memory = { version = ">=0.2.0", optional = true }
chrono = "0.4.19"
openssl = { version = "0.10.35", features = ["vendored"] }
event-manager = { git = "https://github.com/rust-vmm/event-manager.git", tag = "v0.2.0" }
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", optional = true, rev = "cfd2cca", package = "fuse-rs" }
vhost-rs = { git = "https://github.com/cloud-hypervisor/vhost.git", branch = "dragonball", package = "vhost", optional = true }
vhost-user-backend = { git = "https://github.com/cloud-hypervisor/vhost-user-backend.git", package = "vhost_user_backend", rev = "78757bfa", optional = true }
hyperlocal = "0.8.0"
tokio = { version = "1.9.0", features = ["macros"] }
hyper = "0.14.11"
event-manager = "0.2.1"
vm-memory = { version = "0.6.0", features = ["backend-mmap"], optional = true }
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", rev = "afc7b69", optional = true }
vhost = { version = "0.2.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { git = "https://github.com/rust-vmm/vhost-user-backend", rev = "3242b37", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { git = "https://github.com/rust-vmm/vm-virtio", rev = "6013dd9", optional = true }
nydus-api = { path = "api" }
nydus-app = { path = "app" }
nydus-error = "0.1"
@ -57,12 +59,12 @@ storage = { path = "storage" }
[dev-dependencies]
sendfd = "0.3.3"
vmm-sys-util = "0.6.0"
vmm-sys-util = ">=0.8.0"
env_logger = "0.8.2"
[features]
fusedev = ["nydus-utils/fusedev", "fuse-backend-rs/fusedev"]
virtiofs = ["fuse-backend-rs/vhost-user-fs", "vm-memory/backend-mmap", "vhost-rs/vhost-user-slave", "vhost-user-backend"]
virtiofs = ["fuse-backend-rs/vhost-user-fs", "vm-memory", "vhost", "vhost-user-backend", "virtio-queue", "virtio-bindings"]
[workspace]
members = ["api", "app", "error", "rafs", "storage", "utils"]

View File

@ -1,6 +1,7 @@
all: build
TEST_WORKDIR_PREFIX ?= "/tmp"
DOCKER ?= "true"
current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
ARCH := $(shell uname -p)
@ -25,7 +26,11 @@ VIRIOFS_COMMON = --target-dir target-virtiofs --features=virtiofs --release
# $(2): How to build the golang project
define build_golang
echo "Building target $@ by invoking: $(2)"
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir $(1) golang:1.15 $(2)
if [ $(DOCKER) = "true" ]; then
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.15 $(2)
else
$(2) -C $(1)
fi
endef
# Build nydus respecting different features
@ -53,7 +58,7 @@ endef
# Targets that are exposed to developers and users.
build: .format fusedev virtiofs
release: .format .release_version fusedev virtiofs
static-release: .musl_target .format .release_version fusedev
static-release: .musl_target .format .release_version fusedev virtiofs
fusedev-release: .format .release_version fusedev
virtiofs-release: .format .release_version virtiofs
@ -105,7 +110,7 @@ docker-nydus-smoke:
-v ${current_dir}:/nydus-rs \
nydus-smoke
NYDUSIFY_PATH = /nydus-rs/contrib/nydusify
NYDUSIFY_PATH = contrib/nydusify
# TODO: Nydusify smoke has to be time consuming for a while since it relies on musl nydusd and nydus-image.
# So musl compliation must be involved.
# And docker-in-docker deployment invovles image buiding?
@ -133,30 +138,40 @@ nydusify:
nydusify-static:
$(call build_golang,${NYDUSIFY_PATH},make static-release)
SNAPSHOTTER_PATH = /nydus-rs/contrib/nydus-snapshotter
SNAPSHOTTER_PATH = contrib/nydus-snapshotter
nydus-snapshotter:
$(call build_golang,${SNAPSHOTTER_PATH},make static-release build test)
nydus-snapshotter-static:
$(call build_golang,${SNAPSHOTTER_PATH},make static-release)
CTR-REMOTE_PATH = /nydus-rs/contrib/ctr-remote
CTR-REMOTE_PATH = contrib/ctr-remote
ctr-remote:
$(call build_golang,${CTR-REMOTE_PATH},make)
ctr-remote-static:
$(call build_golang,${CTR-REMOTE_PATH},make static-release)
NYDUS-OVERLAYFS_PATH = /nydus-rs/contrib/nydus-overlayfs
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
nydus-overlayfs-static:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make static-release)
DOCKER-GRAPHDRIVER_PATH = contrib/docker-nydus-graphdriver
docker-nydus-graphdriver:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make)
docker-nydus-graphdriver-static:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make static-release)
# Run integration smoke test in docker-in-docker container. It requires some special settings,
# refer to `misc/example/README.md` for details.
all-static-release: docker-static nydusify-static nydus-snapshotter-static ctr-remote-static nydus-overlayfs-static
all-static-release: docker-static all-contrib-static-release
all-contrib-static-release: nydusify-static nydus-snapshotter-static ctr-remote-static \
nydus-overlayfs-static docker-nydus-graphdriver-static
# https://www.gnu.org/software/make/manual/html_node/One-Shell.html
.ONESHELL:

View File

@ -14,7 +14,7 @@ micro_http = { git = "https://github.com/cloud-hypervisor/micro-http.git", branc
serde = { version = ">=1.0.27", features = ["rc"] }
serde_derive = ">=1.0.27"
serde_json = ">=1.0.9"
vmm-sys-util = "0.6.0"
vmm-sys-util = ">=0.8.0"
url = "2.1.1"
http = "0.2.1"
nydus-utils = { path = "../utils" }

View File

@ -3,7 +3,7 @@ module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.16
require (
github.com/containerd/containerd v1.5.7
github.com/containerd/containerd v1.5.8
github.com/dragonflyoss/image-service/contrib/nydus-snapshotter v0.0.0-20210812024946-ec518a7d1cb8
github.com/opencontainers/image-spec v1.0.1
github.com/urfave/cli v1.22.5

View File

@ -108,8 +108,8 @@ github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg3
github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
github.com/Microsoft/hcsshim v0.8.21 h1:btRfUDThBE5IKcvI8O8jOiIkujUsAMBSRsYDYmEi6oM=
github.com/Microsoft/hcsshim v0.8.21/go.mod h1:+w2gRZ5ReXQhFOrvSQeNfhrYB/dg3oDwTOcER2fw4I4=
github.com/Microsoft/hcsshim v0.8.23 h1:47MSwtKGXet80aIn+7h4YI6fwPmwIghAnsx2aOUrG2M=
github.com/Microsoft/hcsshim v0.8.23/go.mod h1:4zegtUJth7lAvFyc6cH2gGQ5B3OFQim01nnU2M8jKDg=
github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
@ -176,6 +176,7 @@ github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3k
github.com/caarlos0/ctrlc v1.0.0/go.mod h1:CdXpj4rmq0q/1Eb44M9zi2nKB0QraNKuRGYGrrHhcQw=
github.com/campoy/unique v0.0.0-20180121183637-88950e537e7e/go.mod h1:9IOqJGCPMSc6E5ydlp5NIonxObaeu/Iub/X03EKPVYo=
github.com/cavaliercoder/go-cpio v0.0.0-20180626203310-925f9528c45e/go.mod h1:oDpT4efm8tSYHXV5tHSdRvBet/b/QzxZ+XyyPehvm3A=
github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
@ -224,13 +225,13 @@ github.com/containerd/containerd v1.4.0-beta.2.0.20200729163537-40b22ef07410/go.
github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.8/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.9/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.5.0-beta.1/go.mod h1:5HfvG1V2FsKesEGQ17k5/T7V960Tmcumvqn8Mc+pCYQ=
github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo/uBBoBORwEx6ardVcmKU=
github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
github.com/containerd/containerd v1.5.1/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
github.com/containerd/containerd v1.5.7 h1:rQyoYtj4KddB3bxG6SAqd4+08gePNyJjRqvOIfV3rkM=
github.com/containerd/containerd v1.5.7/go.mod h1:gyvv6+ugqY25TiXxcZC3L5yOeYgEw0QMhscqVp1AR9c=
github.com/containerd/containerd v1.5.8 h1:NmkCC1/QxyZFBny8JogwLpOy2f+VEbO/f6bV2Mqtwuw=
github.com/containerd/containerd v1.5.8/go.mod h1:YdFSv5bTFLpG2HIYmfqDpSYYTDX+mc5qtSuYx1YUb/s=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
@ -267,8 +268,9 @@ github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDG
github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
github.com/containerd/ttrpc v1.0.1/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/ttrpc v1.0.2 h1:2/O3oTZN36q2xRolk0a2WWGgh7/Vf/liElg5hFYLX9U=
github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/ttrpc v1.1.0 h1:GbtyLRxb0gOLR0TYQWt3O6B0NvT8tMdorEHqIQo/lWI=
github.com/containerd/ttrpc v1.1.0/go.mod h1:XX4ZTnoOId4HklF4edwc4DcqskFZuvXB1Evzy5KFQpQ=
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
@ -1357,8 +1359,9 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

View File

@ -0,0 +1 @@
/bin

View File

@ -0,0 +1,13 @@
all:clear build
.PHONY: build
build:
GOOS=linux go build -v -o bin/nydus_graphdriver .
.PHONY: clear
clear:
rm -f bin/*
.PHONY: static-release
static-release:
GOOS=linux go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus_graphdriver .

View File

@ -120,10 +120,15 @@ func NewDaemonConfig(cfg DaemonConfig, imageID string, vpcRegistry bool, labels
registryHost = registry.ConvertToVPCHost(registryHost)
}
keyChain := auth.FromLabels(labels)
if keyChain.TokenBase() {
cfg.Device.Backend.Config.RegistryToken = keyChain.Password
} else {
cfg.Device.Backend.Config.Auth = keyChain.ToBase64()
// If no auth is provided, don't touch auth from provided nydusd configuration file.
// We don't validate the original nydusd auth from configuration file since it can be empty
// when repository is public.
if keyChain != nil {
if keyChain.TokenBase() {
cfg.Device.Backend.Config.RegistryToken = keyChain.Password
} else {
cfg.Device.Backend.Config.Auth = keyChain.ToBase64()
}
}
cfg.Device.Backend.Config.Host = registryHost
cfg.Device.Backend.Config.Repo = image.Repo

View File

@ -3,7 +3,7 @@ module github.com/dragonflyoss/image-service/contrib/nydus-snapshotter
go 1.14
require (
github.com/containerd/containerd v1.4.11
github.com/containerd/containerd v1.4.12
github.com/containerd/continuity v0.0.0-20200928162600-f2cc35102c2a
github.com/dragonflyoss/image-service/contrib/nydusify v0.0.0-20210518022841-c17fb49cce7c
github.com/gogo/protobuf v1.3.2 // indirect

View File

@ -151,8 +151,8 @@ github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on
github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.11 h1:QCGOUN+i70jEEL/A6JVIbhy4f4fanzAzSR4kNG7SlcE=
github.com/containerd/containerd v1.4.11/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.12 h1:V+SHzYmhng/iju6M5nFrpTTusrhidoxKTwdwLw+u4c4=
github.com/containerd/containerd v1.4.12/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20200928162600-f2cc35102c2a h1:jEIoR0aA5GogXZ8pP3DUzE+zrhaF6/1rYZy+7KkYEWM=
github.com/containerd/continuity v0.0.0-20200928162600-f2cc35102c2a/go.mod h1:W0qIOTD7mp2He++YVq+kgfXezRYqzP1uDuMVH1bITDY=

View File

@ -25,7 +25,7 @@ var (
emptyPassKeyChain = PassKeyChain{}
)
// PassKeyChain is user/pass based key chain
// PassKeyChain is user/password based key chain
type PassKeyChain struct {
Username string
Password string
@ -59,12 +59,23 @@ func (kc PassKeyChain) TokenBase() bool {
return kc.Username == "" && kc.Password != ""
}
// FromLabels find image pull username and secret from snapshot labels
// if username and secret is empty, we treat it as empty string
func FromLabels(labels map[string]string) PassKeyChain {
return PassKeyChain{
Username: labels[label.ImagePullUsername],
Password: labels[label.ImagePullSecret],
// FromLabels finds image pull username and secret from snapshot labels.
// Returned `nil` means no validated username and secrect are passed, it should
// not override input nydusd configuration.
func FromLabels(labels map[string]string) *PassKeyChain {
u, found := labels[label.ImagePullUsername]
if !found || u == "" {
return nil
}
p, found := labels[label.ImagePullSecret]
if !found || p == "" {
return nil
}
return &PassKeyChain{
Username: u,
Password: p,
}
}
@ -81,7 +92,7 @@ func (kc PassKeyChain) toAuthConfig() authn.AuthConfig {
}
}
return authn.AuthConfig{
Username: kc.Username,
Password: kc.Password,
Username: kc.Username,
Password: kc.Password,
}
}

View File

@ -15,28 +15,27 @@ import (
)
func TestFromLabels(t *testing.T) {
labels := map[string]string {
labels := map[string]string{
label.ImagePullUsername: "mock",
label.ImagePullSecret: "mock",
label.ImagePullSecret: "mock",
}
kc := FromLabels(labels)
assert.Equal(t, kc.Username, "mock")
assert.Equal(t, kc.Password, "mock")
assert.Equal(t, "bW9jazptb2Nr", kc.ToBase64())
kc, err := FromBase64("bW9jazptb2Nr")
kc1, err := FromBase64("bW9jazptb2Nr")
assert.Nil(t, err)
assert.Equal(t, kc.Username, "mock")
assert.Equal(t, kc.Password, "mock")
assert.Equal(t, kc1.Username, "mock")
assert.Equal(t, kc1.Password, "mock")
labels = map[string]string {}
labels = map[string]string{}
kc = FromLabels(labels)
assert.Equal(t, "", kc.ToBase64())
assert.Nil(t, kc)
labels = map[string]string {
labels = map[string]string{
label.ImagePullSecret: "mock",
}
kc = FromLabels(labels)
assert.True(t, kc.TokenBase())
assert.Equal(t, "mock", kc.Password)
assert.Nil(t, kc)
}

View File

@ -7,6 +7,14 @@
"size": 76
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.nydus.blob.v1",
"digest": "sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
"size": 83528010,
"annotations": {
"containerd.io/snapshot/nydus-blob": "true"
}
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": "sha256:fdfe86772cfb157dd364a7caf7a64fdc6f10abd047701c0a3fcd629b8ebc8766",
@ -14,7 +22,8 @@
"annotations": {
"containerd.io/snapshot/nydus-bootstrap": "true",
"containerd.io/snapshot/nydus-source-chainid": "sha256:bacd3af13903e13a43fe87b6944acd1ff21024132aad6e74b4452d984fb1a99a",
"containerd.io/uncompressed": "sha256:032ef23acc516fb5ffda4900db1616f85b39cffb626bc0def51915e14a6a7d8d"
"containerd.io/uncompressed": "sha256:032ef23acc516fb5ffda4900db1616f85b39cffb626bc0def51915e14a6a7d8d",
"containerd.io/snapshot/nydus-reference-blob-ids": "[\"09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059\"]"
}
},
{
@ -33,7 +42,8 @@
"annotations": {
"containerd.io/snapshot/nydus-bootstrap": "true",
"containerd.io/snapshot/nydus-source-chainid": "sha256:3779241fda7b1caf03964626c3503e930f2f19a5ffaba6f4b4ad21fd38df3b6b",
"containerd.io/uncompressed": "sha256:06014764637029de0a5d37c5a2e52249d46f45a5edecca0ad81d98347f076d7a"
"containerd.io/uncompressed": "sha256:06014764637029de0a5d37c5a2e52249d46f45a5edecca0ad81d98347f076d7a",
"containerd.io/snapshot/nydus-reference-blob-ids": "[\"09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059\"]"
}
},
{
@ -52,7 +62,8 @@
"annotations": {
"containerd.io/snapshot/nydus-bootstrap": "true",
"containerd.io/snapshot/nydus-source-chainid": "sha256:9386795d450ce06c6819c8bc5eff8daa71d47ccb9f9fb8d49fe1ccfb5fb3edbe",
"containerd.io/uncompressed": "sha256:e3a229c2fa7d489240052abe2f9ad235e26f1aa10d70060fc8e78d478b624503"
"containerd.io/uncompressed": "sha256:e3a229c2fa7d489240052abe2f9ad235e26f1aa10d70060fc8e78d478b624503",
"containerd.io/snapshot/nydus-reference-blob-ids": "[\"09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059\"]"
}
},
{

View File

@ -6,6 +6,14 @@
"size": 523
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.nydus.blob.v1",
"digest": "sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
"size": 83528010,
"annotations": {
"containerd.io/snapshot/nydus-blob": "true"
}
},
{
"mediaType": "application/vnd.oci.image.layer.nydus.blob.v1",
"digest": "sha256:b413839e4ee5248697ef30fe9a84b659fa744d69bbc9b7754113adc2b2b6bc90",
@ -35,8 +43,9 @@
"digest": "sha256:aec98c9e3dce739877b8f5fe1cddd339de1db2b36c20995d76f6265056dbdb08",
"size": 273320,
"annotations": {
"containerd.io/snapshot/nydus-blob-ids": "[\"b413839e4ee5248697ef30fe9a84b659fa744d69bbc9b7754113adc2b2b6bc90\",\"b6a85be8248b0d3c2f0565ef71d549f404f8edcee1ab666c9871a8e6d9384860\",\"00d151e7d392e68e2c756a6fc42640006ddc0a98d37dba3f90a7b73f63188bbd\"]",
"containerd.io/snapshot/nydus-bootstrap": "true"
"containerd.io/snapshot/nydus-blob-ids": "[\"09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059\",\"b413839e4ee5248697ef30fe9a84b659fa744d69bbc9b7754113adc2b2b6bc90\",\"b6a85be8248b0d3c2f0565ef71d549f404f8edcee1ab666c9871a8e6d9384860\",\"00d151e7d392e68e2c756a6fc42640006ddc0a98d37dba3f90a7b73f63188bbd\"]",
"containerd.io/snapshot/nydus-bootstrap": "true",
"containerd.io/snapshot/nydus-reference-blob-ids": "[\"09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059\"]"
}
}
]

View File

@ -7,7 +7,7 @@ require (
github.com/aliyun/aliyun-oss-go-sdk v2.1.5+incompatible
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect
github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340 // indirect
github.com/containerd/containerd v1.4.11
github.com/containerd/containerd v1.4.12
github.com/containerd/continuity v0.0.0-20200928162600-f2cc35102c2a // indirect
github.com/containerd/ttrpc v1.0.1 // indirect
github.com/containerd/typeurl v1.0.1 // indirect
@ -23,7 +23,7 @@ require (
github.com/kr/text v0.2.0 // indirect
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.1
github.com/opencontainers/image-spec v1.0.2
github.com/pkg/errors v0.9.1
github.com/pkg/xattr v0.4.3
github.com/prometheus/client_golang v1.11.0

View File

@ -31,8 +31,8 @@ github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340 h1:9atoWyI9RtXF
github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.11 h1:QCGOUN+i70jEEL/A6JVIbhy4f4fanzAzSR4kNG7SlcE=
github.com/containerd/containerd v1.4.11/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.12 h1:V+SHzYmhng/iju6M5nFrpTTusrhidoxKTwdwLw+u4c4=
github.com/containerd/containerd v1.4.12/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20200928162600-f2cc35102c2a h1:jEIoR0aA5GogXZ8pP3DUzE+zrhaF6/1rYZy+7KkYEWM=
github.com/containerd/continuity v0.0.0-20200928162600-f2cc35102c2a/go.mod h1:W0qIOTD7mp2He++YVq+kgfXezRYqzP1uDuMVH1bITDY=
@ -144,8 +144,8 @@ github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLA
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=

View File

@ -55,6 +55,8 @@ type Cache struct {
opt Opt
// Remote is responsible for pulling & pushing cache image
remote *remote.Remote
// reference blob records
referenceRecords map[digest.Digest]*CacheRecord
// Store the pulled records from registry
pulledRecords map[digest.Digest]*CacheRecord
// Store the records prepared to push to registry
@ -67,14 +69,55 @@ func New(remote *remote.Remote, opt Opt) (*Cache, error) {
opt: opt,
remote: remote,
// source_layer_chain_id -> cache_record
pulledRecords: make(map[digest.Digest]*CacheRecord),
pushedRecords: []*CacheRecord{},
pulledRecords: make(map[digest.Digest]*CacheRecord),
referenceRecords: make(map[digest.Digest]*CacheRecord),
pushedRecords: []*CacheRecord{},
}
return cache, nil
}
func (cacheRecord *CacheRecord) GetReferenceBlobs() []string {
listStr := cacheRecord.NydusBootstrapDesc.Annotations[utils.LayerAnnotationNydusReferenceBlobIDs]
if listStr == "" {
return []string{}
}
var blobs []string
if err := json.Unmarshal([]byte(listStr), &blobs); err != nil {
return []string{}
}
return blobs
}
func (cache *Cache) GetReference(d digest.Digest) *CacheRecord {
r, ok := cache.referenceRecords[d]
if !ok {
return nil
}
return r
}
func (cache *Cache) SetReference(layer *ocispec.Descriptor) {
record := cache.layerToRecord(layer)
cache.referenceRecords[layer.Digest] = record
}
func (cache *Cache) recordToLayer(record *CacheRecord) (*ocispec.Descriptor, *ocispec.Descriptor) {
if record.SourceChainID == "" {
if record.NydusBlobDesc != nil {
if cache.opt.Backend.Type() == backend.RegistryBackend {
return nil, &ocispec.Descriptor{
MediaType: utils.MediaTypeNydusBlob,
Digest: record.NydusBlobDesc.Digest,
Size: record.NydusBlobDesc.Size,
Annotations: map[string]string{
utils.LayerAnnotationNydusBlob: "true",
},
}
}
}
return nil, nil
}
bootstrapCacheMediaType := ocispec.MediaTypeImageLayerGzip
if cache.opt.DockerV2Format {
bootstrapCacheMediaType = images.MediaTypeDockerSchema2LayerGzip
@ -91,6 +134,9 @@ func (cache *Cache) recordToLayer(record *CacheRecord) (*ocispec.Descriptor, *oc
utils.LayerAnnotationUncompressed: record.NydusBootstrapDiffID.String(),
},
}
if refenceBlobsStr, ok := record.NydusBootstrapDesc.Annotations[utils.LayerAnnotationNydusReferenceBlobIDs]; ok {
bootstrapCacheDesc.Annotations[utils.LayerAnnotationNydusReferenceBlobIDs] = refenceBlobsStr
}
var blobCacheDesc *ocispec.Descriptor
if record.NydusBlobDesc != nil {
@ -116,9 +162,22 @@ func (cache *Cache) recordToLayer(record *CacheRecord) (*ocispec.Descriptor, *oc
}
func (cache *Cache) exportRecordsToLayers() []ocispec.Descriptor {
layers := []ocispec.Descriptor{}
var (
layers []ocispec.Descriptor
referenceLayers []ocispec.Descriptor
)
for _, record := range cache.pushedRecords {
referenceBlobIDs := record.GetReferenceBlobs()
for _, blobID := range referenceBlobIDs {
// for oss backend, GetReference always return nil
// for registry backend, GetReference should not return nil
referenceRecord := cache.GetReference(digest.NewDigestFromEncoded(digest.SHA256, blobID))
if referenceRecord != nil {
_, blobDesc := cache.recordToLayer(referenceRecord)
referenceLayers = append(referenceLayers, *blobDesc)
}
}
bootstrapCacheDesc, blobCacheDesc := cache.recordToLayer(record)
layers = append(layers, *bootstrapCacheDesc)
if blobCacheDesc != nil {
@ -126,12 +185,25 @@ func (cache *Cache) exportRecordsToLayers() []ocispec.Descriptor {
}
}
return layers
return append(referenceLayers, layers...)
}
func (cache *Cache) layerToRecord(layer *ocispec.Descriptor) *CacheRecord {
sourceChainIDStr, ok := layer.Annotations[utils.LayerAnnotationNydusSourceChainID]
if !ok {
if layer.Annotations[utils.LayerAnnotationNydusBlob] == "true" {
// for reference blob layers
return &CacheRecord{
NydusBlobDesc: &ocispec.Descriptor{
MediaType: layer.MediaType,
Digest: layer.Digest,
Size: layer.Size,
Annotations: map[string]string{
utils.LayerAnnotationNydusBlob: "true",
},
},
}
}
return nil
}
sourceChainID := digest.Digest(sourceChainIDStr)
@ -161,6 +233,10 @@ func (cache *Cache) layerToRecord(layer *ocispec.Descriptor) *CacheRecord {
utils.LayerAnnotationUncompressed: uncompressedDigestStr,
},
}
referenceBlobsStr := layer.Annotations[utils.LayerAnnotationNydusReferenceBlobIDs]
if referenceBlobsStr != "" {
bootstrapDesc.Annotations[utils.LayerAnnotationNydusReferenceBlobIDs] = referenceBlobsStr
}
var nydusBlobDesc *ocispec.Descriptor
if layer.Annotations[utils.LayerAnnotationNydusBlobDigest] != "" &&
layer.Annotations[utils.LayerAnnotationNydusBlobSize] != "" {
@ -229,11 +305,17 @@ func mergeRecord(old, new *CacheRecord) *CacheRecord {
func (cache *Cache) importRecordsFromLayers(layers []ocispec.Descriptor) {
pulledRecords := make(map[digest.Digest]*CacheRecord)
referenceRecords := make(map[digest.Digest]*CacheRecord)
pushedRecords := []*CacheRecord{}
for _, layer := range layers {
record := cache.layerToRecord(&layer)
if record != nil {
if record.SourceChainID == "" {
referenceRecords[record.NydusBlobDesc.Digest] = record
logrus.Infof("Found reference blob layer %s", record.NydusBlobDesc.Digest)
continue
}
// Merge bootstrap and related blob layer to record
newRecord := mergeRecord(
pulledRecords[record.SourceChainID],
@ -248,6 +330,7 @@ func (cache *Cache) importRecordsFromLayers(layers []ocispec.Descriptor) {
cache.pulledRecords = pulledRecords
cache.pushedRecords = pushedRecords
cache.referenceRecords = referenceRecords
}
// Export pushes cache manifest index to remote registry

View File

@ -66,8 +66,10 @@ func (rule *ManifestRule) Validate() error {
}
layers := rule.TargetParsed.NydusImage.Manifest.Layers
blobListInAnnotation := []string{}
blobListInLayer := []string{}
var (
blobListInAnnotation []string
blobListInLayer []string
)
for i, layer := range layers {
if i == len(layers)-1 {
if layer.Annotations[utils.LayerAnnotationNydusBootstrap] != "true" {

View File

@ -8,6 +8,8 @@ import (
"encoding/json"
"fmt"
"os/exec"
"github.com/pkg/errors"
)
const (
@ -20,11 +22,23 @@ type InspectOption struct {
}
type BlobInfo struct {
BlobID string `json:"blob_id"`
CompressSize uint64 `json:"compress_size"`
DecompressSize uint64 `json:"decompress_size"`
ReadaheadOffset uint32 `json:"readahead_offset"`
ReadaheadSize uint32 `json:"readahead_size"`
BlobID string `json:"blob_id"`
CompressedSize uint64 `json:"compressed_size"`
DecompressedSize uint64 `json:"decompressed_size"`
ReadaheadOffset uint32 `json:"readahead_offset"`
ReadaheadSize uint32 `json:"readahead_size"`
}
func (info *BlobInfo) String() string {
jsonBytes, _ := json.Marshal(info)
return string(jsonBytes)
}
type BlobInfoList []BlobInfo
func (infos BlobInfoList) String() string {
jsonBytes, _ := json.Marshal(&infos)
return string(jsonBytes)
}
type Inspector struct {
@ -51,9 +65,9 @@ func (p *Inspector) Inspect(option InspectOption) (interface{}, error) {
cmd := exec.Command(p.binaryPath, args...)
msg, err := cmd.CombinedOutput()
if err != nil {
return nil, err
return nil, errors.Wrap(err, string(msg))
}
var blobs []BlobInfo
var blobs BlobInfoList
if err = json.Unmarshal(msg, &blobs); err != nil {
return nil, err
}

View File

@ -61,6 +61,10 @@ func newCacheGlue(
}, nil
}
func (cg *cacheGlue) GetReferenceRecord(d digest.Digest) *cache.CacheRecord {
return cg.cache.GetReference(d)
}
func (cg *cacheGlue) Pull(
ctx context.Context, sourceLayerChainID digest.Digest,
) (*cache.CacheRecord, error) {
@ -78,7 +82,7 @@ func (cg *cacheGlue) Pull(
"ChainID": sourceLayerChainID,
})
// Pull the cached layer from cache image, then push to target namespace/repo,
// because the blob data is not shared between diffrent namespaces in registry,
// because the blob data is not shared between different namespaces in registry,
// this operation ensures that Nydus image owns these layers.
cacheRecord = _cacheRecord
defer bootstrapReader.Close()
@ -180,6 +184,11 @@ func (cg *cacheGlue) Export(
for _, layer := range buildLayers {
record := layer.GetCacheRecord()
cacheRecords = append(cacheRecords, &record)
if layer.backend.Type() == backend.RegistryBackend {
for idx := range layer.referenceBlobs {
cg.cache.SetReference(&layer.referenceBlobs[idx])
}
}
}
cg.cache.Record(cacheRecords)

View File

@ -12,6 +12,8 @@ import (
"strings"
"time"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/tool"
@ -92,7 +94,7 @@ func getChunkDictFromRegistry(prepareDir, imageName string, insecure bool, platf
return targetFile, nil
}
func (cvt *Converter) prepareBootstrap(prepareDir, from, info string) (string, []string, error) {
func (cvt *Converter) prepareBootstrap(prepareDir, from, info string) (string, []ocispec.Descriptor, error) {
var (
target string
err error
@ -102,11 +104,11 @@ func (cvt *Converter) prepareBootstrap(prepareDir, from, info string) (string, [
} else {
_, arch, err := provider.ExtractOsArch(cvt.chunkDict.Platform)
if err != nil {
return "", []string{}, err
return "", []ocispec.Descriptor{}, err
}
target, err = getChunkDictFromRegistry(prepareDir, info, cvt.chunkDict.Insecure, arch)
if err != nil {
return "", []string{}, err
return "", []ocispec.Descriptor{}, err
}
}
debugOutput := filepath.Join(prepareDir, "check.output")
@ -115,7 +117,7 @@ func (cvt *Converter) prepareBootstrap(prepareDir, from, info string) (string, [
BootstrapPath: target,
DebugOutputPath: debugOutput,
}); err != nil {
return "", []string{}, fmt.Errorf("invalid bootstrap format %v", err)
return "", []ocispec.Descriptor{}, fmt.Errorf("invalid bootstrap format %v", err)
}
item, err := tool.NewInspector(cvt.NydusImagePath).Inspect(tool.InspectOption{
@ -123,15 +125,23 @@ func (cvt *Converter) prepareBootstrap(prepareDir, from, info string) (string, [
Bootstrap: target,
})
if err != nil {
return "", []string{}, err
return "", []ocispec.Descriptor{}, err
}
blobsInfo, _ := item.([]tool.BlobInfo)
var blobs []string
blobsInfo, _ := item.(tool.BlobInfoList)
var blobLayers []ocispec.Descriptor
for _, blobInfo := range blobsInfo {
blobs = append(blobs, blobInfo.BlobID)
blobDigest := digest.NewDigestFromEncoded(digest.SHA256, blobInfo.BlobID)
blobLayers = append(blobLayers, ocispec.Descriptor{
MediaType: utils.MediaTypeNydusBlob,
Size: int64(blobInfo.CompressedSize),
Digest: blobDigest,
Annotations: map[string]string{
utils.LayerAnnotationNydusBlob: "true",
},
})
}
logrus.Infof("chunk dict has blobs %v", blobs)
return "bootstrap=" + target, blobs, nil
logrus.Infof("chunk dict has blobs %s", blobsInfo)
return "bootstrap=" + target, blobLayers, nil
}
type ChunkDictOpt struct {
@ -146,20 +156,20 @@ type ChunkDictOpt struct {
// return
// type=$path, which could be used as a command-line argument of nydus-image
// blobs we need add blobs which are belongs to chunk dict to bootstrap manifests
func (cvt *Converter) prepareChunkDict() (string, []string, error) {
func (cvt *Converter) prepareChunkDict() (string, []ocispec.Descriptor, error) {
// prepare dir
prepareDir := filepath.Join(cvt.WorkDir, "chunk_dict")
err := os.MkdirAll(prepareDir, 0755)
if err != nil {
return "", []string{}, err
return "", []ocispec.Descriptor{}, err
}
cType, from, info, err := parseArgs(cvt.chunkDict.Args)
if err != nil {
return "", []string{}, err
return "", []ocispec.Descriptor{}, err
}
switch cType {
case "bootstrap":
return cvt.prepareBootstrap(prepareDir, from, info)
}
return "", []string{}, fmt.Errorf("invalid type %s", cType)
return "", []ocispec.Descriptor{}, fmt.Errorf("invalid type %s", cType)
}

View File

@ -12,6 +12,7 @@ import (
"time"
"github.com/containerd/containerd/reference/docker"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@ -185,11 +186,11 @@ func (cvt *Converter) convert(ctx context.Context) (retErr error) {
return errors.Wrap(err, "Pull cache image")
}
var (
blobs []string
blobLayers []ocispec.Descriptor
chunkDictOpt string
)
if cvt.chunkDict.Args != "" {
chunkDictOpt, blobs, err = cvt.prepareChunkDict()
chunkDictOpt, blobLayers, err = cvt.prepareChunkDict()
if err != nil {
logrus.Warnf("get chunk dict err %v", err)
}
@ -248,6 +249,7 @@ func (cvt *Converter) convert(ctx context.Context) (retErr error) {
backend: cvt.storageBackend,
forcePush: cvt.BackendForcePush,
alignedChunk: cvt.BackendAlignedChunk,
referenceBlobs: blobLayers,
}
parentBuildLayer = buildLayer
buildLayers = append(buildLayers, buildLayer)
@ -332,7 +334,6 @@ func (cvt *Converter) convert(ctx context.Context) (retErr error) {
// Push OCI manifest, Nydus manifest and manifest index
mm := &manifestManager{
referenceBlobs: blobs,
sourceProvider: sourceProvider,
remote: cvt.TargetRemote,
backend: cvt.storageBackend,

View File

@ -6,6 +6,7 @@ package converter
import (
"context"
"encoding/json"
"fmt"
"os"
"path/filepath"
@ -55,6 +56,8 @@ type buildLayer struct {
backend backend.Backend
forcePush bool
alignedChunk bool
referenceBlobs []ocispec.Descriptor
}
// parseSourceMount parses mounts object returned by the Mount method in
@ -158,6 +161,10 @@ func (layer *buildLayer) pushBootstrap(ctx context.Context) (*ocispec.Descriptor
utils.LayerAnnotationNydusBootstrap: "true",
},
}
if len(layer.referenceBlobs) > 0 {
blobsBytes, _ := json.Marshal(layersHex(layer.referenceBlobs))
desc.Annotations[utils.LayerAnnotationNydusReferenceBlobIDs] = string(blobsBytes)
}
if err := utils.WithRetry(func() error {
compressedReader, err := utils.PackTargz(
@ -244,6 +251,30 @@ func (layer *buildLayer) Mount(ctx context.Context) (func() error, error) {
}
if cacheRecord != nil {
layer.cacheRecord = cacheRecord
// assign reference blobs from cache
var referenceBlobs []ocispec.Descriptor
blobIDs := cacheRecord.GetReferenceBlobs()
for _, blobID := range blobIDs {
blobDigest := digest.NewDigestFromEncoded(digest.SHA256, blobID)
if layer.backend.Type() == backend.RegistryBackend {
// for the registry backend, blob layer descriptors are stored in the build cache
record := layer.cacheGlue.GetReferenceRecord(blobDigest)
if record == nil {
return nil, fmt.Errorf("can't find reference blob layer in build-cache")
}
referenceBlobs = append(referenceBlobs, *record.NydusBlobDesc)
} else {
// for the OSS backend, only the digest is needed
referenceBlobs = append(referenceBlobs, ocispec.Descriptor{
MediaType: utils.MediaTypeNydusBlob,
Digest: blobDigest,
Annotations: map[string]string{
utils.LayerAnnotationNydusBlob: "true",
},
})
}
}
layer.referenceBlobs = referenceBlobs
return nil, nil
}
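
The reference blob IDs restored here come from the "containerd.io/snapshot/nydus-reference-blob-ids" annotation that pushBootstrap writes above. Below is a minimal sketch of reading them back, assuming the annotation holds the JSON array of hex digests produced by json.Marshal(layersHex(...)); the helper name is hypothetical and in the repository the lookup goes through the build-cache record (GetReferenceBlobs) instead:

package main

import (
	"encoding/json"
	"fmt"
)

// referenceBlobsFromAnnotations is a hypothetical reader for the annotation.
func referenceBlobsFromAnnotations(annotations map[string]string) ([]string, error) {
	raw, ok := annotations["containerd.io/snapshot/nydus-reference-blob-ids"]
	if !ok {
		return nil, nil
	}
	var ids []string
	if err := json.Unmarshal([]byte(raw), &ids); err != nil {
		return nil, err
	}
	return ids, nil
}

func main() {
	// Placeholder digests; real values are sha256 hex blob IDs.
	ids, _ := referenceBlobsFromAnnotations(map[string]string{
		"containerd.io/snapshot/nydus-reference-blob-ids": `["aaaa","bbbb"]`,
	})
	fmt.Println(ids) // [aaaa bbbb]
}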

View File

@ -27,7 +27,6 @@ import (
// manifestManager merges OCI and Nydus manifest, pushes them to
// remote registry
type manifestManager struct {
referenceBlobs []string
sourceProvider provider.SourceProvider
backend backend.Backend
remote *remote.Remote
@ -162,24 +161,66 @@ func (mm *manifestManager) CloneSourcePlatform(ctx context.Context, additionalOS
}, nil
}
func (mm *manifestManager) Push(ctx context.Context, buildLayers []*buildLayer) error {
layers := []ocispec.Descriptor{}
// add reference blobs to annotation
blobListInAnnotation := mm.referenceBlobs
func layersHex(layers []ocispec.Descriptor) []string {
var digests []string
for _, layer := range layers {
digests = append(digests, layer.Digest.Hex())
}
return digests
}
func containsLayer(layers []ocispec.Descriptor, d digest.Digest) bool {
for _, layer := range layers {
if layer.Digest == d {
return true
}
}
return false
}
func appendBlobs(oldBlobs []string, newBlobs []string) []string {
for _, newBlob := range newBlobs {
exist := false
for _, oldBlob := range oldBlobs {
if oldBlob == newBlob {
exist = true
break
}
}
if !exist {
oldBlobs = append(oldBlobs, newBlob)
}
}
return oldBlobs
}
func (mm *manifestManager) Push(ctx context.Context, buildLayers []*buildLayer) error {
var (
blobListInAnnotation []string
referenceBlobs []string
layers []ocispec.Descriptor
)
for idx, _layer := range buildLayers {
record := _layer.GetCacheRecord()
referenceBlobs = appendBlobs(referenceBlobs, layersHex(_layer.referenceBlobs))
blobListInAnnotation = appendBlobs(blobListInAnnotation, layersHex(_layer.referenceBlobs))
if record.NydusBlobDesc != nil {
// Write blob digest list in JSON format to layer annotation of bootstrap.
blobListInAnnotation = append(blobListInAnnotation, record.NydusBlobDesc.Digest.Hex())
// For the registry backend, we need to write the blob layers to the
// manifest to prevent them from being deleted by registry GC.
// todo: add reference blobs layer to manifest
if mm.backend.Type() == backend.RegistryBackend {
layers = append(layers, *record.NydusBlobDesc)
}
}
// try to add the reference blob layers to the manifest
if mm.backend.Type() == backend.RegistryBackend {
for _, blobDesc := range _layer.referenceBlobs {
if !containsLayer(layers, blobDesc.Digest) {
layers = append(layers, blobDesc)
}
}
}
// Only need to write the latest bootstrap layer into the nydus manifest
if idx == len(buildLayers)-1 {
@ -188,6 +229,13 @@ func (mm *manifestManager) Push(ctx context.Context, buildLayers []*buildLayer)
return errors.Wrap(err, "Marshal blob list")
}
record.NydusBootstrapDesc.Annotations[utils.LayerAnnotationNydusBlobIDs] = string(blobListBytes)
if len(referenceBlobs) > 0 {
blobListBytes, err = json.Marshal(referenceBlobs)
if err != nil {
return errors.Wrap(err, "Marshal blob list")
}
record.NydusBootstrapDesc.Annotations[utils.LayerAnnotationNydusReferenceBlobIDs] = string(blobListBytes)
}
layers = append(layers, *record.NydusBootstrapDesc)
}
}
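
After this loop, only the last bootstrap layer carries the blob-ID annotations. The following is a rough sketch of what its annotation map ends up holding for two layer blobs plus one chunk-dict blob: the digests are placeholders, and the "nydus-blob-ids" key is assumed to be the value of utils.LayerAnnotationNydusBlobIDs (only the reference-blob-ids key is spelled out in this diff):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Placeholder hex digests standing in for real sha256 blob IDs.
	blobList := []string{"<chunk-dict-blob>", "<layer-blob-1>", "<layer-blob-2>"}
	referenceBlobs := []string{"<chunk-dict-blob>"}

	blobListBytes, _ := json.Marshal(blobList)
	referenceBytes, _ := json.Marshal(referenceBlobs)

	annotations := map[string]string{
		"containerd.io/snapshot/nydus-bootstrap":          "true",
		"containerd.io/snapshot/nydus-blob-ids":           string(blobListBytes), // assumed key
		"containerd.io/snapshot/nydus-reference-blob-ids": string(referenceBytes),
	}
	fmt.Println(annotations)
}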
@ -201,9 +249,10 @@ func (mm *manifestManager) Push(ctx context.Context, buildLayers []*buildLayer)
// Remove useless annotations from layer
validAnnotationKeys := map[string]bool{
utils.LayerAnnotationNydusBlob: true,
utils.LayerAnnotationNydusBlobIDs: true,
utils.LayerAnnotationNydusBootstrap: true,
utils.LayerAnnotationNydusBlob: true,
utils.LayerAnnotationNydusBlobIDs: true,
utils.LayerAnnotationNydusReferenceBlobIDs: true,
utils.LayerAnnotationNydusBootstrap: true,
}
for idx, desc := range layers {
layerDiffID := digest.Digest(desc.Annotations[utils.LayerAnnotationUncompressed])

View File

@ -18,5 +18,7 @@ const (
LayerAnnotationNydusBootstrap = "containerd.io/snapshot/nydus-bootstrap"
LayerAnnotationNydusSourceChainID = "containerd.io/snapshot/nydus-source-chainid"
LayerAnnotationNydusReferenceBlobIDs = "containerd.io/snapshot/nydus-reference-blob-ids"
LayerAnnotationUncompressed = "containerd.io/uncompressed"
)

View File

@ -5,6 +5,7 @@
package tests
import (
"fmt"
"testing"
)
@ -13,7 +14,7 @@ func testBasicConvert(t *testing.T) {
defer registry.Destory(t)
registry.Build(t, "image-basic")
nydusify := NewNydusify(registry, "image-basic", "image-basic-nydus", "")
nydusify := NewNydusify(registry, "image-basic", "image-basic-nydus", "", "")
nydusify.Convert(t)
nydusify.Check(t)
}
@ -23,26 +24,59 @@ func testConvertWithCache(t *testing.T) {
defer registry.Destory(t)
registry.Build(t, "image-basic")
nydusify1 := NewNydusify(registry, "image-basic", "image-basic-nydus-1", "cache:v1")
nydusify1 := NewNydusify(registry, "image-basic", "image-basic-nydus-1", "cache:v1", "")
nydusify1.Convert(t)
nydusify1.Check(t)
nydusify2 := NewNydusify(registry, "image-basic", "image-basic-nydus-2", "cache:v1")
nydusify2 := NewNydusify(registry, "image-basic", "image-basic-nydus-2", "cache:v1", "")
nydusify2.Convert(t)
nydusify2.Check(t)
registry.Build(t, "image-from-1")
nydusify3 := NewNydusify(registry, "image-from-1", "image-from-nydus-1", "cache:v1")
nydusify3 := NewNydusify(registry, "image-from-1", "image-from-nydus-1", "cache:v1", "")
nydusify3.Convert(t)
nydusify3.Check(t)
registry.Build(t, "image-from-2")
nydusify4 := NewNydusify(registry, "image-from-2", "image-from-nydus-2", "cache:v1")
nydusify4 := NewNydusify(registry, "image-from-2", "image-from-nydus-2", "cache:v1", "")
nydusify4.Convert(t)
nydusify4.Check(t)
}
func testConvertWithChunkDict(t *testing.T) {
registry := NewRegistry(t)
defer registry.Destory(t)
registry.Build(t, "chunk-dict-1")
// build chunk-dict-1 bootstrap
nydusify1 := NewNydusify(registry, "chunk-dict-1", "nydus:chunk-dict-1", "", "")
nydusify1.Convert(t)
nydusify1.Check(t)
chunkDictOpt := fmt.Sprintf("bootstrap:registry:%s/%s", registry.Host(), "nydus:chunk-dict-1")
// build without build-cache
registry.Build(t, "image-basic")
nydusify2 := NewNydusify(registry, "image-basic", "nydus:image-basic", "", chunkDictOpt)
nydusify2.Convert(t)
nydusify2.Check(t)
// build with build-cache
registry.Build(t, "image-from-1")
nydusify3 := NewNydusify(registry, "image-from-1", "nydus:image-from-1", "nydus:cache_v1", chunkDictOpt)
nydusify3.Convert(t)
nydusify3.Check(t)
// change chunk dict
registry.Build(t, "chunk-dict-2")
nydusify4 := NewNydusify(registry, "chunk-dict-2", "nydus:chunk-dict-2", "", "")
nydusify4.Convert(t)
nydusify4.Check(t)
chunkDictOpt = fmt.Sprintf("bootstrap:registry:%s/%s", registry.Host(), "nydus:chunk-dict-2")
registry.Build(t, "image-from-2")
nydusify5 := NewNydusify(registry, "image-from-2", "nydus:image-from-2", "nydus:cache_v1", chunkDictOpt)
nydusify5.Convert(t)
nydusify5.Check(t)
}
func TestSmoke(t *testing.T) {
testBasicConvert(t)
testConvertWithCache(t)
testConvertWithChunkDict(t)
}

View File

@ -38,7 +38,7 @@ func convert(t *testing.T, ref string) {
registry := NewRegistry(t)
defer registry.Destory(t)
transfer(t, ref)
nydusify := NewNydusify(registry, ref, fmt.Sprintf("%s-nydus", ref), "")
nydusify := NewNydusify(registry, ref, fmt.Sprintf("%s-nydus", ref), "", "")
nydusify.Convert(t)
nydusify.Check(t)
}

View File

@ -9,6 +9,7 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
@ -40,20 +41,22 @@ type Nydusify struct {
Cache string
backendType string
backendConfig string
chunkDictArgs string
}
func NewNydusify(registry *Registry, source, target, cache string) *Nydusify {
func NewNydusify(registry *Registry, source, target, cache string, chunkDictArgs string) *Nydusify {
host := registry.Host()
backendType := "registry"
if os.Getenv("BACKEND_TYPE") != "" {
backendType = os.Getenv("BACKEND_TYPE")
}
repoTag := strings.Split(target, ":")
backendConfig := fmt.Sprintf(`{
"host": "%s",
"repo": "%s",
"scheme": "http"
}`, host, target)
}`, host, repoTag[0])
if os.Getenv("BACKEND_CONFIG") != "" {
backendConfig = os.Getenv("BACKEND_CONFIG")
}
@ -65,6 +68,7 @@ func NewNydusify(registry *Registry, source, target, cache string) *Nydusify {
Cache: cache,
backendType: backendType,
backendConfig: backendConfig,
chunkDictArgs: chunkDictArgs,
}
}
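
The target strings used by these tests have the form "<repo>:<tag>" (e.g. "nydus:image-basic"), while the registry backend config expects only the repository, which is why the target is now split before being injected into backendConfig. A tiny sketch of the effect, with a placeholder target matching the tests:

package main

import (
	"fmt"
	"strings"
)

func main() {
	target := "nydus:image-basic" // placeholder target in "<repo>:<tag>" form
	repoTag := strings.Split(target, ":")
	// Only the repository part goes into the registry backend config's "repo" field.
	fmt.Println(repoTag[0]) // nydus
}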
@ -118,6 +122,12 @@ func (nydusify *Nydusify) Convert(t *testing.T) {
BackendType: nydusify.backendType,
BackendConfig: nydusify.backendConfig,
ChunkDict: converter.ChunkDictOpt{
Args: nydusify.chunkDictArgs,
Insecure: false,
Platform: "linux/amd64",
},
}
cvt, err := converter.New(opt)

View File

@ -0,0 +1,5 @@
FROM ubuntu
RUN echo test3 > /test3
RUN echo test5 > /test5
RUN echo test7 > /test7

View File

@ -0,0 +1,5 @@
FROM ubuntu
RUN echo test5 > /test5
RUN echo test7 > /test7
RUN echo test9 > /test9

View File

@ -0,0 +1,27 @@
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 0
}
},
"cache": {
"type": "blobcache",
"config": {
"work_dir": "/cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": true,
"enable_xattr": true,
"fs_prefetch": {
"enable": true,
"threads_count": 4
}
}

View File

@ -25,16 +25,16 @@ sha2 = { version = "0.9.1" }
sha-1 = { version = "0.9.1", optional = true }
spmc = "0.3.0"
url = { version = "2.1.1", optional = true }
vm-memory = ">=0.2.0"
vm-memory = "0.6"
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", rev = "cfd2cca", package = "fuse-rs" }
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", rev = "afc7b69" }
nydus-utils = { path = "../utils" }
nydus-error = "0.1"
storage = { path = "../storage", features = ["backend-localfs"] }
[dev-dependencies]
vmm-sys-util = "0.6.0"
vmm-sys-util = ">=0.8.0"
assert_matches = "1.5.0"
[features]

View File

@ -442,6 +442,7 @@ impl Rafs {
.into(),
inode: 0,
generation: 0,
attr_flags: 0,
attr_timeout: self.sb.meta.attr_timeout,
entry_timeout: self.sb.meta.entry_timeout,
}
@ -464,6 +465,14 @@ impl Rafs {
attr.mtime = self.i_time;
}
// Only touch the permission bits. This is a workaround: nydusify gives the
// root directory permission 0o750, and the fuse mount option `rootmode=`
// does not affect the root directory's permission bits, which ends up
// preventing other users from accessing the container rootfs.
if attr.ino == ROOT_ID {
attr.mode = attr.mode & !0o777 | 0o755;
}
Ok(attr)
}
@ -532,7 +541,7 @@ impl FileSystem for Rafs {
fn destroy(&self) {}
fn lookup(&self, _ctx: Context, ino: u64, name: &CStr) -> Result<Entry> {
fn lookup(&self, _ctx: &Context, ino: u64, name: &CStr) -> Result<Entry> {
let mut rec = FopRecorder::settle(Lookup, ino, &self.ios);
let target = OsStr::from_bytes(name.to_bytes());
let parent = self.sb.get_inode(ino, self.digest_validate)?;
@ -563,9 +572,9 @@ impl FileSystem for Rafs {
}
}
fn forget(&self, _ctx: Context, _inode: u64, _count: u64) {}
fn forget(&self, _ctx: &Context, _inode: u64, _count: u64) {}
fn batch_forget(&self, ctx: Context, requests: Vec<(u64, u64)>) {
fn batch_forget(&self, ctx: &Context, requests: Vec<(u64, u64)>) {
for (inode, count) in requests {
self.forget(ctx, inode, count)
}
@ -573,27 +582,20 @@ impl FileSystem for Rafs {
fn getattr(
&self,
_ctx: Context,
_ctx: &Context,
ino: u64,
_handle: Option<u64>,
) -> Result<(libc::stat64, Duration)> {
let mut recorder = FopRecorder::settle(Getattr, ino, &self.ios);
let attr = self.get_inode_attr(ino).map(|mut r| {
let attr = self.get_inode_attr(ino).map(|r| {
recorder.mark_success(0);
// Only touch permissions bits. This trick is some sort of workaround
// since nydusify gives root directory permission of 0o750 and fuse mount
// options `rootmode=` does not affect root directory's permission bits, ending
// up with preventing other users from accessing the container rootfs.
if ino == ROOT_ID {
r.mode = r.mode & !0o777 | 0o755;
}
r
})?;
Ok((attr.into(), self.sb.meta.attr_timeout))
}
fn readlink(&self, _ctx: Context, ino: u64) -> Result<Vec<u8>> {
fn readlink(&self, _ctx: &Context, ino: u64) -> Result<Vec<u8>> {
let mut rec = FopRecorder::settle(Readlink, ino, &self.ios);
let inode = self.sb.get_inode(ino, self.digest_validate)?;
Ok(inode
@ -609,10 +611,10 @@ impl FileSystem for Rafs {
#[allow(clippy::too_many_arguments)]
fn read(
&self,
_ctx: Context,
_ctx: &Context,
ino: u64,
_handle: u64,
w: &mut dyn ZeroCopyWriter,
w: &mut dyn ZeroCopyWriter<S = ()>,
size: u32,
offset: u64,
_lock_owner: Option<u64>,
@ -664,7 +666,7 @@ impl FileSystem for Rafs {
fn release(
&self,
_ctx: Context,
_ctx: &Context,
_inode: u64,
_flags: u32,
_handle: u64,
@ -675,7 +677,7 @@ impl FileSystem for Rafs {
Ok(())
}
fn statfs(&self, _ctx: Context, _inode: u64) -> Result<libc::statvfs64> {
fn statfs(&self, _ctx: &Context, _inode: u64) -> Result<libc::statvfs64> {
// Safe because we are zero-initializing a struct with only POD fields.
let mut st: libc::statvfs64 = unsafe { std::mem::zeroed() };
@ -689,7 +691,13 @@ impl FileSystem for Rafs {
Ok(st)
}
fn getxattr(&self, _ctx: Context, inode: u64, name: &CStr, size: u32) -> Result<GetxattrReply> {
fn getxattr(
&self,
_ctx: &Context,
inode: u64,
name: &CStr,
size: u32,
) -> Result<GetxattrReply> {
let mut recorder = FopRecorder::settle(Getxattr, inode, &self.ios);
if !self.xattr_supported() {
@ -721,7 +729,7 @@ impl FileSystem for Rafs {
})
}
fn listxattr(&self, _ctx: Context, inode: u64, size: u32) -> Result<ListxattrReply> {
fn listxattr(&self, _ctx: &Context, inode: u64, size: u32) -> Result<ListxattrReply> {
let mut rec = FopRecorder::settle(Listxattr, inode, &self.ios);
if !self.xattr_supported() {
return Err(std::io::Error::from_raw_os_error(libc::ENOSYS));
@ -751,7 +759,7 @@ impl FileSystem for Rafs {
fn readdir(
&self,
_ctx: Context,
_ctx: &Context,
inode: u64,
_handle: u64,
size: u32,
@ -767,7 +775,7 @@ impl FileSystem for Rafs {
fn readdirplus(
&self,
_ctx: Context,
_ctx: &Context,
ino: u64,
_handle: u64,
size: u32,
@ -785,11 +793,11 @@ impl FileSystem for Rafs {
})
}
fn releasedir(&self, _ctx: Context, _inode: u64, _flags: u32, _handle: u64) -> Result<()> {
fn releasedir(&self, _ctx: &Context, _inode: u64, _flags: u32, _handle: u64) -> Result<()> {
Ok(())
}
fn access(&self, ctx: Context, ino: u64, mask: u32) -> Result<()> {
fn access(&self, ctx: &Context, ino: u64, mask: u32) -> Result<()> {
let mut rec = FopRecorder::settle(Access, ino, &self.ios);
let st = self.get_inode_attr(ino)?;
let mode = mask as i32 & (libc::R_OK | libc::W_OK | libc::X_OK);
@ -883,12 +891,14 @@ mod tests {
assert_eq!(attr.ino, 1);
assert_eq!(attr.blocks, 8);
assert_eq!(attr.uid, 0);
// Root inode mode must be 0755
assert_eq!(attr.mode & 0o777, 0o755);
}
#[test]
fn it_should_access() {
let rafs = new_rafs_backend();
let ctx = Context {
let ctx = &Context {
gid: 0,
pid: 1,
uid: 0,
@ -901,7 +911,7 @@ mod tests {
#[test]
fn it_should_listxattr() {
let rafs = new_rafs_backend();
let ctx = Context {
let ctx = &Context {
gid: 0,
pid: 1,
uid: 0,
@ -918,7 +928,7 @@ mod tests {
#[test]
fn it_should_get_statfs() {
let rafs = new_rafs_backend();
let ctx = Context {
let ctx = &Context {
gid: 0,
pid: 1,
uid: 0,
@ -944,7 +954,7 @@ mod tests {
#[test]
fn it_should_lookup_entry() {
let rafs = new_rafs_backend();
let ctx = Context {
let ctx = &Context {
gid: 0,
pid: 1,
uid: 0,

View File

@ -357,6 +357,7 @@ impl RafsInode for CachedInodeV5 {
attr: self.get_attr().into(),
inode: self.i_ino,
generation: 0,
attr_flags: 0,
attr_timeout: self.i_meta.attr_timeout,
entry_timeout: self.i_meta.entry_timeout,
}

View File

@ -528,6 +528,7 @@ impl RafsInode for OndiskInodeWrapper {
attr: self.get_attr().into(),
inode: inode.i_ino,
generation: 0,
attr_flags: 0,
attr_timeout: state.meta.attr_timeout,
entry_timeout: state.meta.entry_timeout,
}

View File

@ -89,6 +89,7 @@ impl RafsInode for MockInode {
attr: self.get_attr().into(),
inode: self.i_ino,
generation: 0,
attr_flags: 0,
attr_timeout: self.i_meta.attr_timeout,
entry_timeout: self.i_meta.entry_timeout,
}

View File

@ -339,11 +339,17 @@ Blocks: {blocks}"#,
let path = self.path_from_ino(inode.i_parent).unwrap();
println!(
r#"
{:width$} Parent Path {:width$}
File: {:width$} Parent Path: {:width$}
Compressed Offset: {}, Compressed Size: {}
Decompressed Offset: {}, Decompressed Size: {}
Chunk ID: {:50}, Blob ID: {}
"#,
name.to_string_lossy(),
path.to_string_lossy(),
c.compress_offset,
c.compress_size,
c.decompress_offset,
c.decompress_size,
c.block_id,
if let Ok(ref blob) = self.blobs_table.get(c.blob_index) {
&blob.blob_id
@ -653,9 +659,12 @@ impl Executor {
inspector: &mut RafsInspector,
input: String,
) -> std::result::Result<Option<Value>, ExecuteError> {
let mut raw = input.strip_suffix("\n").unwrap_or(&input).split(' ');
let mut raw = input
.strip_suffix("\n")
.unwrap_or(&input)
.split_ascii_whitespace();
let cmd = raw.next().unwrap();
let args = raw.next();
let args = raw.next().map(|a| a.trim());
debug!("execute {:?} {:?}", cmd, args);

View File

@ -25,9 +25,8 @@ use std::{error, fmt, io};
use event_manager::{EventOps, EventSubscriber, Events};
use fuse_backend_rs::api::{vfs::VfsError, BackendFileSystem, Vfs};
use fuse_backend_rs::passthrough::{Config, PassthroughFs};
#[cfg(feature = "virtiofs")]
use fuse_backend_rs::transport::Error as FuseTransportError;
use fuse_backend_rs::Error as VhostUserFsError;
use fuse_backend_rs::Error as FuseError;
use vmm_sys_util::{epoll::EventSet, eventfd::EventFd};
@ -99,11 +98,12 @@ pub enum DaemonError {
HandleEventUnknownEvent,
/// No memory configured.
NoMemoryConfigured,
/// Fail to walk descriptor chain
IterateQueue,
/// Invalid Virtio descriptor chain.
#[cfg(feature = "virtiofs")]
InvalidDescriptorChain(FuseTransportError),
/// Processing queue failed.
ProcessQueue(VhostUserFsError),
ProcessQueue(FuseError),
/// Cannot create epoll context.
Epoll(io::Error),
/// Cannot clone event fd.
@ -136,7 +136,7 @@ pub enum DaemonError {
ServiceStop,
/// Wait daemon failure
WaitDaemon(io::Error),
SessionShutdown(io::Error),
SessionShutdown(FuseTransportError),
Downcast(String),
FsTypeMismatch(String),
}

View File

@ -29,6 +29,7 @@ use fuse_backend_rs::api::{
};
use fuse_backend_rs::abi::linux_abi::{InHeader, OutHeader};
use fuse_backend_rs::transport::fusedev::{FuseChannel, FuseSession};
use vmm_sys_util::eventfd::EventFd;
use crate::upgrade::{self, FailoverPolicy, UpgradeManager};
@ -38,7 +39,6 @@ use daemon::{
DaemonStateMachineSubscriber, FsBackendCollection, FsBackendMountCmd, NydusDaemon, Trigger,
};
use nydus_app::BuildTimeInfo;
use nydus_utils::{FuseChannel, FuseSession};
#[derive(Serialize)]
struct FuseOp {
@ -71,8 +71,6 @@ impl Default for FuseOp {
pub(crate) struct FuseServer {
server: Arc<Server<Arc<Vfs>>>,
ch: FuseChannel,
// read buffer for fuse requests
buf: Vec<u8>,
}
impl FuseServer {
@ -80,21 +78,18 @@ impl FuseServer {
Ok(FuseServer {
server,
ch: se.new_channel(evtfd)?,
buf: Vec::with_capacity(se.bufsize()),
})
}
fn svc_loop(&mut self, metrics_hook: &dyn MetricsHook) -> Result<()> {
// Safe because we have already reserved the capacity
unsafe {
self.buf.set_len(self.buf.capacity());
}
// An EBADF error means the kernel has shut down this session.
let _ebadf = std::io::Error::from_raw_os_error(libc::EBADF);
loop {
if let Some(reader) = self.ch.get_reader(&mut self.buf)? {
let writer = self.ch.get_writer()?;
if let Some((reader, writer)) = self
.ch
.get_request()
.map_err(|_| std::io::Error::from_raw_os_error(libc::EINVAL))?
{
if let Err(e) = self
.server
.handle_message(reader, writer, None, Some(metrics_hook))

View File

@ -17,9 +17,15 @@ use libc::EFD_NONBLOCK;
use fuse_backend_rs::api::{server::Server, Vfs};
use fuse_backend_rs::transport::{FsCacheReqHandler, Reader, Writer};
use vhost_rs::vhost_user::{message::*, Listener, SlaveFsCacheReq};
use vhost_user_backend::{VhostUserBackend, VhostUserDaemon, Vring};
use vm_memory::GuestMemoryMmap;
use vhost::vhost_user::{message::*, Listener, SlaveFsCacheReq};
use vhost_user_backend::{
VhostUserBackend, VhostUserBackendMut, VhostUserDaemon, VringMutex, VringStateMutGuard, VringT,
};
use virtio_bindings::bindings::virtio_ring::{
VIRTIO_RING_F_EVENT_IDX, VIRTIO_RING_F_INDIRECT_DESC,
};
use virtio_queue::DescriptorChain;
use vm_memory::{GuestAddressSpace, GuestMemoryAtomic, GuestMemoryMmap};
use vmm_sys_util::eventfd::EventFd;
use nydus_app::BuildTimeInfo;
@ -39,22 +45,21 @@ const HIPRIO_QUEUE_EVENT: u16 = 0;
// The guest queued an available buffer for the request queue.
const REQ_QUEUE_EVENT: u16 = 1;
// The device has been dropped.
const KILL_EVENT: u16 = 2;
// const KILL_EVENT: u16 = 2;
type VhostUserBackendResult<T> = std::result::Result<T, std::io::Error>;
#[allow(dead_code)]
struct VhostUserFsBackendHandler {
backend: Mutex<VhostUserFsBackend>,
}
struct VhostUserFsBackend {
mem: Option<GuestMemoryMmap>,
mem: Option<GuestMemoryAtomic<GuestMemoryMmap>>,
kill_evt: EventFd,
event_idx: bool,
server: Arc<Server<Arc<Vfs>>>,
// handle request from slave to master
vu_req: Option<SlaveFsCacheReq>,
used_descs: Vec<(u16, u32)>,
}
impl VhostUserFsBackendHandler {
@ -62,9 +67,9 @@ impl VhostUserFsBackendHandler {
let backend = VhostUserFsBackend {
mem: None,
kill_evt: EventFd::new(EFD_NONBLOCK).map_err(DaemonError::Epoll)?,
event_idx: false,
server: Arc::new(Server::new(vfs)),
vu_req: None,
used_descs: Vec::with_capacity(QUEUE_SIZE),
};
Ok(VhostUserFsBackendHandler {
backend: Mutex::new(backend),
@ -72,21 +77,49 @@ impl VhostUserFsBackendHandler {
}
}
impl Clone for VhostUserFsBackend {
fn clone(&self) -> Self {
VhostUserFsBackend {
mem: self.mem.clone(),
kill_evt: self.kill_evt.try_clone().unwrap(),
event_idx: self.event_idx,
server: self.server.clone(),
vu_req: self.vu_req.clone(),
}
}
}
impl VhostUserFsBackend {
// There's no way to recover if an error happens while processing a virtqueue,
// so let the caller handle it.
fn process_queue(&mut self, vring: &mut Vring) -> Result<()> {
let mem = self.mem.as_ref().ok_or(DaemonError::NoMemoryConfigured)?;
fn process_queue(
&mut self,
vring_state: &mut VringStateMutGuard<GuestMemoryAtomic<GuestMemoryMmap>>,
) -> Result<bool> {
let mut used_any = false;
let mem = self
.mem
.as_ref()
.ok_or(DaemonError::NoMemoryConfigured)?
.memory();
while let Some(avail_desc) = vring.mut_queue().iter(mem).next() {
let head_index = avail_desc.index();
let reader = Reader::new(mem, avail_desc.clone())
.map_err(DaemonError::InvalidDescriptorChain)?;
let avail_chains: Vec<DescriptorChain<GuestMemoryAtomic<GuestMemoryMmap>>> = vring_state
.get_queue_mut()
.iter()
.map_err(|_| DaemonError::IterateQueue)?
.collect();
for chain in avail_chains {
used_any = true;
let head_index = chain.head_index();
let reader =
Reader::new(&mem, chain.clone()).map_err(DaemonError::InvalidDescriptorChain)?;
let writer =
Writer::new(mem, avail_desc).map_err(DaemonError::InvalidDescriptorChain)?;
Writer::new(&mem, chain.clone()).map_err(DaemonError::InvalidDescriptorChain)?;
let total = self
.server
self.server
.handle_message(
reader,
writer,
@ -97,28 +130,35 @@ impl VhostUserFsBackend {
)
.map_err(DaemonError::ProcessQueue)?;
self.used_descs.push((head_index, total as u32));
}
if self.event_idx {
if vring_state.add_used(head_index, 0).is_err() {
warn!("Couldn't return used descriptors to the ring");
}
if !self.used_descs.is_empty() {
for (desc_index, data_sz) in &self.used_descs {
trace!(
"used desc index {} bytes {} total_used {}",
desc_index,
data_sz,
self.used_descs.len()
);
vring.mut_queue().add_used(mem, *desc_index, *data_sz);
match vring_state.needs_notification() {
Err(_) => {
warn!("Couldn't check if queue needs to be notified");
vring_state.signal_used_queue().unwrap();
}
Ok(needs_notification) => {
if needs_notification {
vring_state.signal_used_queue().unwrap();
}
}
}
} else {
if vring_state.add_used(head_index, 0).is_err() {
warn!("Couldn't return used descriptors to the ring");
}
vring_state.signal_used_queue().unwrap();
}
self.used_descs.clear();
vring.signal_used_queue().unwrap();
}
Ok(())
Ok(used_any)
}
}
impl VhostUserBackend for VhostUserFsBackendHandler {
impl VhostUserBackendMut<VringMutex> for VhostUserFsBackendHandler {
fn num_queues(&self) -> usize {
NUM_QUEUES
}
@ -128,53 +168,81 @@ impl VhostUserBackend for VhostUserFsBackendHandler {
}
fn features(&self) -> u64 {
1 << VIRTIO_F_VERSION_1 | VhostUserVirtioFeatures::PROTOCOL_FEATURES.bits()
1 << VIRTIO_F_VERSION_1
| 1 << VIRTIO_RING_F_INDIRECT_DESC
| 1 << VIRTIO_RING_F_EVENT_IDX
| VhostUserVirtioFeatures::PROTOCOL_FEATURES.bits()
}
fn protocol_features(&self) -> VhostUserProtocolFeatures {
VhostUserProtocolFeatures::MQ | VhostUserProtocolFeatures::SLAVE_REQ
}
fn set_event_idx(&mut self, _enabled: bool) {}
fn set_event_idx(&mut self, _enabled: bool) {
self.backend.lock().unwrap().event_idx = true
}
fn update_memory(&mut self, mem: GuestMemoryMmap) -> VhostUserBackendResult<()> {
fn update_memory(
&mut self,
mem: GuestMemoryAtomic<GuestMemoryMmap>,
) -> VhostUserBackendResult<()> {
self.backend.lock().unwrap().mem = Some(mem);
Ok(())
}
fn handle_event(
&self,
index: u16,
&mut self,
device_event: u16,
evset: epoll::Events,
vrings: &[Arc<RwLock<Vring>>],
vrings: &[VringMutex],
_thread_id: usize,
) -> VhostUserBackendResult<bool> {
if evset != epoll::Events::EPOLLIN {
return Err(DaemonError::HandleEventNotEpollIn.into());
}
match index {
let mut vring_state = match device_event {
HIPRIO_QUEUE_EVENT => {
let mut vring = vrings[HIPRIO_QUEUE_EVENT as usize].write().unwrap();
// high priority requests are also plain fuse requests, just on a
// different queue
self.backend.lock().unwrap().process_queue(&mut vring)?;
debug!("HIPRIO_QUEUE_EVENT");
vrings[0].get_mut()
}
x if x >= REQ_QUEUE_EVENT && x < vrings.len() as u16 => {
let mut vring = vrings[x as usize].write().unwrap();
self.backend.lock().unwrap().process_queue(&mut vring)?;
REQ_QUEUE_EVENT => {
debug!("QUEUE_EVENT");
vrings[1].get_mut()
}
_ => return Err(DaemonError::HandleEventUnknownEvent.into()),
};
if self.backend.lock().unwrap().event_idx {
// vm-virtio's Queue implementation only checks avail_index
// once, so to properly support EVENT_IDX we need to keep
// calling process_queue() until it stops finding new
// requests on the queue.
loop {
vring_state.disable_notification().unwrap();
self.backend
.lock()
.unwrap()
.process_queue(&mut vring_state)?;
if !vring_state.enable_notification().unwrap() {
break;
}
}
} else {
// Without EVENT_IDX, a single call is enough.
self.backend
.lock()
.unwrap()
.process_queue(&mut vring_state)?;
}
Ok(false)
}
fn exit_event(&self, _thread_index: usize) -> Option<(EventFd, Option<u16>)> {
Some((
self.backend.lock().unwrap().kill_evt.try_clone().unwrap(),
Some(KILL_EVENT),
))
fn exit_event(&self, _thread_index: usize) -> Option<EventFd> {
// FIXME: vhost-user-backend needs to be patched to return KILL_EVENT
// so that the daemon stop event can be surfaced.
Some(self.backend.lock().unwrap().kill_evt.try_clone().unwrap())
}
fn set_slave_req_fd(&mut self, vu_req: SlaveFsCacheReq) {
@ -182,9 +250,9 @@ impl VhostUserBackend for VhostUserFsBackendHandler {
}
}
struct VirtiofsDaemon<S: VhostUserBackend> {
struct VirtiofsDaemon<S: VhostUserBackend<VringMutex> + Clone> {
vfs: Arc<Vfs>,
daemon: Arc<Mutex<VhostUserDaemon<S>>>,
daemon: Arc<Mutex<VhostUserDaemon<S, VringMutex>>>,
sock: String,
id: Option<String>,
supervisor: Option<String>,
@ -195,7 +263,7 @@ struct VirtiofsDaemon<S: VhostUserBackend> {
bti: BuildTimeInfo,
}
impl<S: VhostUserBackend> NydusDaemon for VirtiofsDaemon<S> {
impl<S: VhostUserBackend<VringMutex> + Clone> NydusDaemon for VirtiofsDaemon<S> {
fn start(&self) -> DaemonResult<()> {
let listener = Listener::new(&self.sock, true)
.map_err(|e| DaemonError::StartService(format!("{:?}", e)))?;
@ -274,7 +342,7 @@ impl<S: VhostUserBackend> NydusDaemon for VirtiofsDaemon<S> {
}
}
impl<S: VhostUserBackend> DaemonStateMachineSubscriber for VirtiofsDaemon<S> {
impl<S: VhostUserBackend<VringMutex> + Clone> DaemonStateMachineSubscriber for VirtiofsDaemon<S> {
fn on_event(&self, event: DaemonStateMachineInput) -> DaemonResult<()> {
self.trigger
.lock()
@ -301,6 +369,7 @@ pub fn create_nydus_daemon(
let vu_daemon = VhostUserDaemon::new(
String::from("vhost-user-fs-backend"),
Arc::new(RwLock::new(VhostUserFsBackendHandler::new(vfs.clone())?)),
GuestMemoryAtomic::new(GuestMemoryMmap::new()),
)
.map_err(|e| DaemonError::DaemonFailure(format!("{:?}", e)))?;

View File

@ -11,7 +11,7 @@ anyhow = "1.0.35"
arc-swap = "0.4.6"
libc = "0.2"
nix = "0.17.0"
vm-memory = ">=0.2.0"
vm-memory = "0.6"
governor = "0.3.1"
log = "0.4.8"
serde = { version = ">=1.0.27", features = ["serde_derive", "rc"] }
@ -31,13 +31,13 @@ httpdate = { version = "1.0", optional = true }
reqwest = { version = "0.11.0", features = ["blocking", "json"], optional = true }
tokio = { version = "1.5.0", features = ["rt-multi-thread"] }
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", rev = "cfd2cca", package = "fuse-rs" }
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", rev = "afc7b69"}
nydus-utils = { path = "../utils" }
nydus-error = "0.1"
[dev-dependencies]
vmm-sys-util = ">=0.3.1"
vmm-sys-util = ">=0.8.0"
[features]
backend-localfs = ["sha2"]

View File

@ -2,6 +2,8 @@
//
// SPDX-License-Identifier: Apache-2.0
use nix::sys::uio;
use nix::unistd::dup;
use std::collections::HashMap;
use std::fs::{self, File, OpenOptions};
use std::io::{ErrorKind, Result, Seek, SeekFrom};
@ -14,11 +16,9 @@ use std::sync::{
Arc, Mutex, RwLock,
};
use std::thread::{self, JoinHandle};
use std::time::Duration;
use nix::sys::uio;
use nix::unistd::dup;
use tokio::{self, runtime::Runtime};
use tokio::runtime::{Builder, Runtime};
use futures::executor::block_on;
use governor::{
@ -340,7 +340,7 @@ impl BlobCache {
let compressed = self.is_compressed;
self.metrics.buffered_backend_size.add(buffer.size() as u64);
let metrics = self.metrics.clone();
self.runtime.spawn(async move {
self.runtime.spawn_blocking(move || {
metrics.buffered_backend_size.sub(buffer.size() as u64);
match Self::persist_chunk(compressed, fd, delayed_chunk.as_ref(), buffer.slice()) {
Err(e) => {
@ -727,9 +727,8 @@ impl BlobCache {
if i != 0 && self.compressor() != compress::Algorithm::GZip {
let prior_cki = &req.chunks[i - 1];
assert!(
chunk.decompress_offset()
== prior_cki.decompress_offset()
+ prior_cki.decompress_size() as u64
chunk.compress_offset()
== prior_cki.compress_offset() + prior_cki.compress_size() as u64
)
}
@ -1384,7 +1383,15 @@ pub fn new(
mr_sender: Arc::new(Mutex::new(tx)),
mr_receiver: rx,
metrics,
runtime: Arc::new(Runtime::new().unwrap()),
runtime: Arc::new(
Builder::new_multi_thread()
.worker_threads(1) // Limit the number of worker threads to 1 since this runtime is mostly used for blocking IO.
.thread_keep_alive(Duration::from_secs(10))
.max_blocking_threads(8)
.thread_name("cache-flusher")
.build()
.map_err(|e| eother!(e))?,
),
});
if enabled {

View File

@ -112,7 +112,11 @@ impl RafsDevice {
}
/// Read a range of data from blob into the provided writer
pub fn read_to(&self, w: &mut dyn ZeroCopyWriter, desc: &mut RafsBioDesc) -> io::Result<usize> {
pub fn read_to(
&self,
w: &mut dyn ZeroCopyWriter<S = ()>,
desc: &mut RafsBioDesc,
) -> io::Result<usize> {
let offset = desc.bi_vec[0].offset;
let size = desc.bi_size;
let mut f = RafsBioDevice::new(desc, self);
@ -122,7 +126,11 @@ impl RafsDevice {
}
/// Write a range of data to blob from the provided reader
pub fn write_from(&self, _r: &mut dyn ZeroCopyReader, _desc: RafsBioDesc) -> io::Result<usize> {
pub fn write_from(
&self,
_r: &mut dyn ZeroCopyReader<S = ()>,
_desc: RafsBioDesc,
) -> io::Result<usize> {
unimplemented!()
}

View File

@ -14,12 +14,7 @@ sha2 = "0.9.1"
blake3 = "0.3.6"
serde = { version = ">=1.0.27", features = ["serde_derive", "rc"] }
serde_json = ">=1.0.9"
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", optional = true, rev = "cfd2cca", package = "fuse-rs" }
# used by fuse.rs, should be moved into fuse-backend-rs
nix = "0.17"
epoll = "4.0"
vmm-sys-util = "0.6"
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", rev = "afc7b69" }
nydus-error = "0.1"

View File

@ -1,319 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::cell::RefCell;
use std::fs::{File, OpenOptions};
use std::io;
use std::ops::Deref;
use std::os::unix::fs::PermissionsExt;
use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
use std::path::{Path, PathBuf};
use epoll::{ControlOptions, Event, Events};
use libc::{c_int, sysconf, _SC_PAGESIZE};
use nix::errno::Errno;
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use nix::mount::{mount, umount2, MntFlags, MsFlags};
use nix::poll::{poll, PollFd, PollFlags};
use nix::unistd::{close, dup, getgid, getuid, read};
use nix::Error as nixError;
use fuse_backend_rs::transport::{FuseBuf, Reader, Writer};
use vmm_sys_util::eventfd::EventFd;
/// These follows definition from libfuse
const FUSE_KERN_BUF_SIZE: usize = 256;
const FUSE_HEADER_SIZE: usize = 0x1000;
const FUSE_DEVICE: &str = "/dev/fuse";
const FUSE_FSTYPE: &str = "fuse";
const EXIT_FUSE_SERVICE: u64 = 1;
/// A fuse session representation
pub struct FuseSession {
mountpoint: PathBuf,
fsname: String,
subtype: String,
file: Option<File>,
bufsize: usize,
}
impl FuseSession {
pub fn new(mountpoint: &Path, fsname: &str, subtype: &str) -> io::Result<FuseSession> {
let dest = mountpoint.canonicalize()?;
if !dest.is_dir() {
return Err(enotdir!());
}
Ok(FuseSession {
mountpoint: dest,
fsname: fsname.to_owned(),
subtype: subtype.to_owned(),
file: None,
bufsize: FUSE_KERN_BUF_SIZE * pagesize() + FUSE_HEADER_SIZE,
})
}
pub fn mount(&mut self) -> io::Result<()> {
let flags = MsFlags::MS_NODEV | MsFlags::MS_NOATIME | MsFlags::MS_RDONLY;
let file = fuse_kern_mount(&self.mountpoint, &self.fsname, &self.subtype, flags)?;
fcntl(file.as_raw_fd(), FcntlArg::F_SETFL(OFlag::O_NONBLOCK)).map_err(|e| einval!(e))?;
self.file = Some(file);
Ok(())
}
pub fn get_fuse_fd(&mut self) -> Option<RawFd> {
self.file.as_ref().map(|file| file.as_raw_fd())
}
pub fn set_fuse_fd(&mut self, fd: RawFd) {
self.file = Some(unsafe { File::from_raw_fd(fd) });
}
/// destroy a fuse session
pub fn umount(&mut self) -> io::Result<()> {
if let Some(file) = self.file.take() {
fuse_kern_umount(self.mountpoint.to_str().unwrap(), file)
} else {
Ok(())
}
}
/// return the mountpoint
pub fn mountpoint(&self) -> &Path {
&self.mountpoint
}
/// return the fsname
pub fn fsname(&self) -> &str {
&self.fsname
}
/// return the subtype
pub fn subtype(&self) -> &str {
&self.subtype
}
/// return the default buffer size
pub fn bufsize(&self) -> usize {
self.bufsize
}
/// create a new fuse message channel
pub fn new_channel(&self, evtfd: EventFd) -> io::Result<FuseChannel> {
if let Some(file) = &self.file {
FuseChannel::new(file.as_raw_fd(), evtfd, self.bufsize)
} else {
Err(einval!("invalid fuse session"))
}
}
}
impl Drop for FuseSession {
fn drop(&mut self) {
let _ = self.umount();
}
}
pub struct FuseChannel {
fd: c_int,
epoll_fd: RawFd,
bufsize: usize,
events: RefCell<Vec<Event>>,
// XXX: Ideally we should have write buffer as well
// write_buf: Vec<u8>,
}
fn register_event(epoll_fd: c_int, fd: RawFd, evt: Events, data: u64) -> io::Result<()> {
let event = Event::new(evt, data);
epoll::ctl(epoll_fd, ControlOptions::EPOLL_CTL_ADD, fd, event)
}
impl FuseChannel {
fn new(fd: c_int, evtfd: EventFd, bufsize: usize) -> io::Result<Self> {
const EPOLL_EVENTS_LEN: usize = 100;
let epoll_fd = epoll::create(true)?;
register_event(epoll_fd, fd, Events::EPOLLIN, 0)?;
let exit_evtfd = evtfd.try_clone().unwrap();
register_event(
epoll_fd,
exit_evtfd.as_raw_fd(),
Events::EPOLLIN,
EXIT_FUSE_SERVICE,
)?;
Ok(FuseChannel {
fd: dup(fd).map_err(|e| last_error!(e))?,
epoll_fd,
bufsize,
events: RefCell::new(vec![Event::new(Events::empty(), 0); EPOLL_EVENTS_LEN]),
})
}
pub fn get_reader<'b>(&self, buf: &'b mut Vec<u8>) -> io::Result<Option<Reader<'b>>> {
loop {
let num_events = epoll::wait(self.epoll_fd, -1, &mut self.events.borrow_mut())?;
for event in self.events.borrow().iter().take(num_events) {
let evset = match epoll::Events::from_bits(event.events) {
Some(evset) => evset,
None => {
let evbits = event.events;
warn!("epoll: ignoring unknown event set: 0x{:x}", evbits);
continue;
}
};
match evset {
Events::EPOLLIN => {
if event.data == EXIT_FUSE_SERVICE {
// Directly return from here is reliable as we handle only one epoll event
// which is `Read` or `Exit` once this function is called.
// One more trick is we don't read the event fd so as to make all fuse threads exit.
// That is because we configure this event fd as LEVEL triggered.
info!("Will exit from fuse service");
return Ok(None);
}
match read(self.fd, buf.as_mut_slice()) {
Ok(len) => {
return Ok(Some(
Reader::new(FuseBuf::new(&mut buf[..len]))
.map_err(|e| eother!(e))?,
));
}
Err(nixError::Sys(e)) => match e {
Errno::ENOENT => {
// ENOENT means the operation was interrupted, it's safe
// to restart
trace!("restart reading");
continue;
}
Errno::ENODEV => {
info!("fuse filesystem umounted");
return Ok(None);
}
Errno::EAGAIN => {
continue;
}
e => {
warn! {"read fuse dev failed on fd {}: {}", self.fd, e};
return Err(io::Error::from_raw_os_error(e as i32));
}
},
Err(e) => {
return Err(eother!(e));
}
}
}
x if (Events::EPOLLERR | Events::EPOLLHUP).contains(x) => {
warn!("Seems file was already closed!");
return Err(eio!());
}
_ => {
// We should not step into this branch as other event is not registered.
continue;
}
}
}
}
}
pub fn get_writer(&self) -> io::Result<Writer> {
Ok(Writer::new(self.fd, self.bufsize).unwrap())
}
}
impl Drop for FuseChannel {
fn drop(&mut self) {
let _ = close(self.fd);
}
}
/// Safe wrapper for `sysconf(_SC_PAGESIZE)`.
#[inline(always)]
fn pagesize() -> usize {
// Trivially safe
unsafe { sysconf(_SC_PAGESIZE) as usize }
}
/// Mount a fuse file system
fn fuse_kern_mount(
mountpoint: &Path,
fsname: &str,
subtype: &str,
flags: MsFlags,
) -> io::Result<File> {
let file = OpenOptions::new()
.create(false)
.read(true)
.write(true)
.open(FUSE_DEVICE)
.map_err(|e| {
error!("FUSE failed to open. {}", e);
e
})?;
let meta = mountpoint.metadata().map_err(|e| {
error!("Can not get metadata from mount point. {}", e);
e
})?;
let opts = format!(
"default_permissions,allow_other,fd={},rootmode={:o},user_id={},group_id={}",
file.as_raw_fd(),
meta.permissions().mode() & libc::S_IFMT,
getuid(),
getgid(),
);
let mut fstype = String::from(FUSE_FSTYPE);
if !subtype.is_empty() {
fstype.push('.');
fstype.push_str(subtype);
}
info!(
"mount source {} dest {} with fstype {} opts {} fd {}",
fsname,
mountpoint.to_str().unwrap(),
fstype,
opts,
file.as_raw_fd(),
);
mount(
Some(fsname),
mountpoint,
Some(fstype.deref()),
flags,
Some(opts.deref()),
)
.map_err(|e| eother!(format!("mount failed: {:}", e)))?;
Ok(file)
}
/// Umount a fuse file system
fn fuse_kern_umount(mountpoint: &str, file: File) -> io::Result<()> {
let mut fds = [PollFd::new(file.as_raw_fd(), PollFlags::empty())];
let res = poll(&mut fds, 0);
// Drop to close fuse session fd, otherwise synchronous umount
// can recurse into filesystem and deadlock.
drop(file);
if res.is_ok() {
// POLLERR means the file system is already umounted,
// or the connection was severed via /sys/fs/fuse/connections/NNN/abort
if let Some(event) = fds[0].revents() {
if event == PollFlags::POLLERR {
return Ok(());
}
}
}
umount2(mountpoint, MntFlags::MNT_DETACH).map_err(|e| eother!(e))
}

View File

@ -14,15 +14,11 @@ extern crate lazy_static;
use std::convert::{Into, TryFrom, TryInto};
pub use self::exec::*;
#[cfg(feature = "fusedev")]
pub use self::fuse::{FuseChannel, FuseSession};
pub use self::inode_bitmap::InodeBitmap;
pub use self::types::*;
pub mod digest;
pub mod exec;
#[cfg(feature = "fusedev")]
pub mod fuse;
pub mod inode_bitmap;
pub mod metrics;
pub mod types;

View File

@ -596,7 +596,7 @@ pub struct BackendMetrics {
read_cumulative_latency_millis_dist: [BasicMetric; BLOCK_READ_SIZES_MAX],
read_count_block_size_dist: [BasicMetric; BLOCK_READ_SIZES_MAX],
// Categorize metrics as per their latency and request size
read_latency_hits_dist: [[BasicMetric; READ_LATENCY_RANGE_MAX]; BLOCK_READ_SIZES_MAX],
read_latency_sizes_dist: [[BasicMetric; READ_LATENCY_RANGE_MAX]; BLOCK_READ_SIZES_MAX],
}
impl Metric for BasicMetric {
@ -680,7 +680,7 @@ impl BackendMetrics {
let size_idx = request_size_index(size);
self.read_cumulative_latency_millis_dist[size_idx].add(elapsed);
self.read_count_block_size_dist[size_idx].inc();
self.read_latency_hits_dist[size_idx][lat_idx].inc();
self.read_latency_sizes_dist[size_idx][lat_idx].inc();
}
}