Compare commits


35 Commits

Author SHA1 Message Date
Changwei Ge 0233c42d45 cargo: bump package version to 1.1.2
Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 15:40:39 +08:00
Peng Tao bb6236ec34
Merge pull request #278 from changweige/impr-nydusify
Improve chunk-dict processing of nydusify
2022-01-18 14:26:23 +08:00
Peng Tao e4295f4119
Merge pull request #277 from changweige/fix-continuity-check
blobcache: check chunks continuity by their compressed size
2022-01-18 14:25:48 +08:00
Peng Tao 9fc27c6019
Merge pull request #276 from changweige/impr-image-inspect
Slightly improve nydus-image inspect
2022-01-18 14:25:19 +08:00
Peng Tao c10030142e
Merge pull request #275 from changweige/rename-metrics
metrics: rename metric read_latency_hits_dist
2022-01-18 14:24:41 +08:00
Peng Tao 1bd83773f3
Merge pull request #274 from changweige/fix-root-permission
rafs: fix up access API root mode
2022-01-18 14:23:51 +08:00
henry.hj 9a2515cc91 nydusify: update examples for chunk-dict
Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
henry.hj 01d202d740 nydusify: add reference blob layers to manifests
For the registry backend only, we include a new type of cache layer:
    blob layers without SourceTrainID that are referenced from the chunk dict

For example:
Chunk-dict1 layers:
    c-layer1 -- c-layer2
Original layers:
    blob-layer1 -- blob-layer2 -- blob-layer3 -- bootstrap
With chunk-dict:
    c-layer1 -- c-layer2 -- blob-layer1' -- blob-layer2' -- blob-layer3'
    -- bootstrap

Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
henry.hj af9d8cb881 nydusify e2e smoke: add chunk-dict testcases
Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
henry.hj 57219e799e nydusify: fix wrong blobs list on manifests
Problem:
    After we change the chunk-dict, we lose the old chunk-dict info on
manifests, even though it is still used in the bootstrap.

Cause:
    When build-cache is enabled, we use parent bootstraps that were built
with the old chunk-dict from the remote cache. But we only combine the new
chunk-dict blobs and layer blobs into the final blobs-list that is set in
the manifest annotations. Unfortunately, we lose the old chunk-dict blobs
which are still referenced by parent layers.

Solution:
    Record reference blobs on build-cache records with the key
"containerd.io/snapshot/nydus-reference-blob-ids".
    For the final blobs-list, we append each layer's blobs and its referenced blobs together.

Note:
    We should use a new build-cache-version to clear old build-cache records
when we first use versions of nydusify that are built based on this patch.

TODO:
    Make build-cache aware of the chunk-dict version. Automatically invalidate
the build-cache if the chunk-dict changes.
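A minimal sketch of the blobs-list merge described in the Solution (hypothetical helper; nydusify itself is written in Go, this Rust sketch only illustrates the logic):

```rust
use std::collections::HashSet;

// Hypothetical illustration of the fix: the final blobs-list is the union of
// each layer's own blobs and the blobs it references via the
// "containerd.io/snapshot/nydus-reference-blob-ids" record, de-duplicated
// while keeping a stable order.
fn merge_blobs_list(layer_blobs: &[String], reference_blobs: &[String]) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut merged = Vec::new();
    for blob_id in layer_blobs.iter().chain(reference_blobs.iter()) {
        if seen.insert(blob_id.clone()) {
            merged.push(blob_id.clone());
        }
    }
    merged
}
```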

Signed-off-by: henry.hj <henry.hj@antgroup.com>
2022-01-18 11:06:15 +08:00
Changwei Ge 81e86f13d7 blobcache: check chunks continuity by their compressed size
When building a nydus image with the --chunk-aligned option set,
decompressed_offset + decompressed_size == next_decompressed_offset cannot
be guaranteed. So we check continuity by the compressed part now.
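A sketch of the check under that assumption (hypothetical struct and field names, not the actual blobcache types):

```rust
// Hypothetical chunk descriptor: with --chunk-aligned images only the
// compressed layout is guaranteed to be contiguous, so adjacency is tested
// on compressed offsets/sizes rather than decompressed ones.
struct ChunkInfo {
    compressed_offset: u64,
    compressed_size: u32,
}

fn is_continuous(prior: &ChunkInfo, next: &ChunkInfo) -> bool {
    prior.compressed_offset + prior.compressed_size as u64 == next.compressed_offset
}
```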

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 11:03:39 +08:00
Changwei Ge 9ca6cc5486 nydus-image/inspect: trim white spaces before parsing
Otherwise, the string parsing may fail, resulting in an inspector
error.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 10:48:03 +08:00
Changwei Ge 55e3c0da85 nydus-image/inspect: print sizes info of chunks
The `chunk` subcommand now prints the compressed and decompressed sizes
of chunks. This helps analyze the rafs layout.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 10:46:32 +08:00
Changwei Ge ef5f362e10 metrics: rename metric read_latency_hits_dist
This makes the name more descriptive.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2022-01-18 10:44:14 +08:00
Peng Tao 2c24d49b9b rafs: fix up access API root mode
Make sure the root inode mode is always 0755.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-01-18 10:42:02 +08:00
imeoer 250aad442a
Merge pull request #247 from changweige/pick-stable-tokio-threads
cache: set the number of worker threads of tokio threads pool
2021-12-28 10:01:15 +08:00
Changwei Ge 952daab44a cache: set the number of worker threads of tokio threads pool
It previously used the default runtime builder, which creates a
thread for each CPU core. Nydusd on a server equipped with many
CPU sockets and cores would start many threads, most of which are
idle.

In addition, use `spawn_blocking` instead, which is more reasonable
in the blobcache scenario.
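A minimal sketch of capping the pool with the tokio builder (the thread count here is illustrative, not the value nydusd uses):

```rust
use tokio::runtime::Builder;

// Build a multi-threaded runtime with a fixed, small worker pool instead of
// the default one-thread-per-CPU-core behavior; blocking blobcache I/O is
// then offloaded with `tokio::task::spawn_blocking`.
fn build_runtime() -> std::io::Result<tokio::runtime::Runtime> {
    Builder::new_multi_thread()
        .worker_threads(4) // illustrative fixed pool size
        .enable_all()
        .build()
}
```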

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-12-27 15:14:25 +08:00
imeoer 560878e373
Merge pull request #228 from changweige/release-upstream-v1.1.1
cargo: bump package version to 1.1.1
2021-11-26 13:58:53 +08:00
Changwei Ge 393f1f2611 cargo: bump package version to 1.1.1
Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-26 11:49:07 +08:00
Peng Tao 484a8cbd28
Merge pull request #227 from changweige/add-ci
action/ci: add stable-1.x branch to CI target branches list
2021-11-26 11:10:48 +08:00
Changwei Ge a4966b3ccf action/ci: add stable-1.x branch to CI target branches list
Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-26 09:52:17 +08:00
Changwei Ge 45e44d8e43
Merge pull request #225 from bergwolf/upstream/update-1.x
backport master commits for 1.1.1 release
2021-11-26 09:31:20 +08:00
dependabot[bot] a749894b43 build(deps): bump github.com/containerd/containerd
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.5.7 to 1.5.8.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.5.7...v1.5.8)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
dependabot[bot] c0962858eb build(deps): bump github.com/containerd/containerd in /contrib/nydusify
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.4.11 to 1.4.12.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.4.11...v1.4.12)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
dependabot[bot] 5ad4e0def1 build(deps): bump github.com/containerd/containerd
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.4.11 to 1.4.12.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.4.11...v1.4.12)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
dependabot[bot] 5e2f549e49 build(deps): bump github.com/opencontainers/image-spec
Bumps [github.com/opencontainers/image-spec](https://github.com/opencontainers/image-spec) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/opencontainers/image-spec/releases)
- [Changelog](https://github.com/opencontainers/image-spec/blob/main/RELEASES.md)
- [Commits](https://github.com/opencontainers/image-spec/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/image-spec
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 15:39:55 +08:00
Peng Tao 9ca8e82300 release: include an example nydusd config in the release tarball
So that users can just use it in normal use cases.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:39:55 +08:00
Peng Tao 1b85064186 release: package all static binaries when tagging releases
Right now we have a few binaries and we should pack them all in the
release tarball.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:34:08 +08:00
Peng Tao 5109590561 makefile: static-release is missing virtiofs target
We should build both fusedev and virtiofs targets for static releases.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:33:54 +08:00
Peng Tao ebb495e272 cargo: use event-manager from crates.io
Instead of getting it from GitHub.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:33:49 +08:00
Peng Tao dacc27446e vendor: update fuse-backend-rs dependency
To get the latest features and improvements.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-25 15:33:36 +08:00
Changwei Ge d12c480213 snapshotter: don't touch original nydusd auth if no auth in labels
Nydusd must pull data from the registry with auth if the repo is private.
The snapshotter only fetches auth from labels, and if there is no auth in
the labels, it will replace the original auth with an empty string.

This causes nydusd to lose auth to access the registry.
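A sketch of the fixed behavior (hypothetical function, illustrating the rule only):

```rust
// Only override the daemon's registry auth when the snapshot labels actually
// carry one; never clobber an existing auth with an empty string.
fn resolve_auth(original: Option<String>, label_auth: Option<String>) -> Option<String> {
    match label_auth {
        Some(auth) if !auth.is_empty() => Some(auth),
        _ => original,
    }
}
```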

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-25 15:33:29 +08:00
Changwei Ge 887267ceb4 makefile: an option to build golang components without docker
Currently, golang components like the snapshotter, ctr-remote and nydusify
are built within containers that bind-mount the host GOPATH. When the user
is root inside the container, it will change the owner of files in GOPATH to
root, so other users working on the same host can no longer access those
files in GOPATH.

In addition, the current golang build will cause dind (docker-in-docker) if
a customer's software build system runs on top of containers.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-25 15:33:19 +08:00
Peng Tao e9e81752b0
Merge pull request #219 from dragonflyoss/prepare-v1.1.1
snapshotter: don't touch original nydusd auth if no auth in labels
2021-11-24 17:33:50 +08:00
Changwei Ge 2421b06840 snapshotter: don't touch original nydusd auth if no auth in labels
Nydusd must pull data from the registry with auth if the repo is private.
The snapshotter only fetches auth from labels, and if there is no auth in
the labels, it will replace the original auth with an empty string.

This causes nydusd to lose auth to access the registry.

Signed-off-by: Changwei Ge <chge@linux.alibaba.com>
2021-11-23 17:22:23 +08:00
588 changed files with 75977 additions and 101172 deletions

.github/CODEOWNERS vendored

@@ -1,7 +0,0 @@
# A CODEOWNERS file uses a pattern that follows the same rules used in gitignore files.
# The pattern is followed by one or more GitHub usernames or team names using the
# standard @username or @org/team-name format. You can also refer to a user by an
# email address that has been added to their GitHub account, for example user@example.com
* @dragonflyoss/nydus-reviewers
.github @dragonflyoss/nydus-maintainers


@@ -1,44 +0,0 @@
## Additional Information
_The following information is very important in order to help us to help you. Omission of the following details may delay your support request or cause it to receive no attention at all._
### Version of nydus being used (nydusd --version)
<!-- Example:
Version: v2.2.0
Git Commit: a38f6b8d6257af90d59880265335dd55fab07668
Build Time: 2023-03-01T10:05:57.267573846Z
Profile: release
Rustc: rustc 1.66.1 (90743e729 2023-01-10)
-->
### Version of nydus-snapshotter being used (containerd-nydus-grpc --version)
<!-- Example:
Version: v0.5.1
Revision: a4b21d7e93481b713ed5c620694e77abac637abb
Go version: go1.18.6
Build time: 2023-01-28T06:05:42
-->
### Kernel information (uname -r)
_command result: uname -r_
### GNU/Linux Distribution, if applicable (cat /etc/os-release)
_command result: cat /etc/os-release_
### containerd-nydus-grpc command line used, if applicable (ps aux | grep containerd-nydus-grpc)
```
```
### client command line used, if applicable (such as: nerdctl, docker, kubectl, ctr)
```
```
### Screenshots (if applicable)
## Details about issue


@@ -1,21 +0,0 @@
## Relevant Issue (if applicable)
_If there are Issues related to this PullRequest, please list them._
## Details
_Please describe the details of PullRequest._
## Types of changes
_What types of changes does your PullRequest introduce? Put an `x` in all the boxes that apply:_
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation Update (if none of the other choices apply)
## Checklist
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have updated the documentation accordingly.
- [ ] I have added tests to cover my changes.

.github/codecov.yml vendored

@@ -1,23 +0,0 @@
coverage:
  status:
    project:
      default:
        enabled: yes
        target: auto # auto compares coverage to the previous base commit
        # adjust accordingly based on how flaky your tests are
        # this allows a 0.2% drop from the previous base commit coverage
        threshold: 0.2%
    patch: false
comment:
  layout: "reach, diff, flags, files"
  behavior: default
  require_changes: true # if true: only post the comment if coverage changes
codecov:
  require_ci_to_pass: false
  notify:
    wait_for_ci: true
# When modifying this file, please validate using
# curl -X POST --data-binary @codecov.yml https://codecov.io/validate


@@ -1,250 +0,0 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};
// Use custom error types for libraries
use thiserror::Error;
#[derive(Error, Debug)]
pub enum NydusError {
    #[error("Invalid arguments: {0}")]
    InvalidArguments(String),
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently
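A brief illustration of these logging conventions (the function and messages are illustrative only, not taken from the codebase):

```rust
use anyhow::{Context, Result};
use log::{info, warn};

// Illustrative helper showing level usage and `.with_context(...)`.
fn load_blob(path: &str) -> Result<Vec<u8>> {
    info!("loading blob from {}", path);
    let data = std::fs::read(path)
        .with_context(|| format!("failed to read blob file {}", path))?;
    if data.is_empty() {
        warn!("blob file {} is empty", path);
    }
    Ok(data)
}
```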
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations
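A sketch of a versioned, serde-backed configuration in that spirit (the fields and validation below are hypothetical, not the actual `ConfigV2` definition):

```rust
use serde::{Deserialize, Serialize};

// Hypothetical shape only: versioned, serde-serializable, validated at
// startup with a clear error message.
#[derive(Debug, Serialize, Deserialize)]
pub struct VersionedConfig {
    pub version: u32,
    pub backend: Option<String>,
}

impl VersionedConfig {
    pub fn validate(&self) -> Result<(), String> {
        if self.version != 2 {
            return Err(format!("unsupported config version {}", self.version));
        }
        Ok(())
    }
}
```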
## Development Guidelines
### Storage Backend Development
When implementing new storage backends (see the sketch after this list):
- Implement the `BlobBackend` trait
- Support timeout, retry, and connection management
- Add configuration in the backend config structure
- Consider proxy support for high availability
- Implement proper error handling and logging
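A hedged sketch of such a backend: the `BlobBackend` trait name comes from the guideline above, but the method signatures here are assumptions for illustration, not the storage crate's actual API:

```rust
use std::io::Result;

// Assumed shape for illustration; the real trait differs in details.
pub trait BlobBackendSketch {
    /// Read up to `buf.len()` bytes at `offset` from the blob `blob_id`.
    fn try_read(&self, blob_id: &str, buf: &mut [u8], offset: u64) -> Result<usize>;
    /// Total blob size, used for range validation and retry bookkeeping.
    fn blob_size(&self, blob_id: &str) -> Result<u64>;
}
```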
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases
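For example, a minimal in-file test module following these conventions (the function and tests are illustrative, not from the codebase):

```rust
// Round a byte count up to a whole number of chunks.
fn chunk_count(total: u64, chunk_size: u64) -> u64 {
    (total + chunk_size - 1) / chunk_size
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_chunk_count_rounds_up() {
        assert_eq!(chunk_count(0, 1024), 0);
        assert_eq!(chunk_count(1, 1024), 1);
        assert_eq!(chunk_count(2048, 1024), 2);
        assert_eq!(chunk_count(2049, 1024), 3);
    }
}
```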
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations
- Detect and prevent supply chain attacks
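As a sketch of digest validation on read (assumes the `sha2` and `hex` crates; the helper is illustrative, not the actual verification path):

```rust
use sha2::{Digest, Sha256};

// Recompute the chunk digest on read and compare it with the digest recorded
// in the image metadata; a mismatch means the chunk must be rejected.
fn verify_chunk(data: &[u8], expected_hex: &str) -> bool {
    let digest = Sha256::digest(data);
    hex::encode(digest) == expected_hex
}
```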
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading
let config = match config_path {
    Some(path) => ConfigV2::from_file(path)?,
    None => ConfigV2::default(),
};
// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
    config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);
// Event loop management
if DAEMON_CONTROLLER.is_active() {
    DAEMON_CONTROLLER.run_loop();
}
// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.


@@ -1,40 +0,0 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
    software-properties-common \
    build-essential \
    curl \
    git \
    libssl-dev \
    pkg-config \
    cmake \
    gcc-riscv64-linux-gnu \
    g++-riscv64-linux-gnu \
    && rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
    && apt-get update && apt-get install -y \
    gcc-14 \
    g++-14 \
    gcc-14-riscv64-linux-gnu \
    g++-14-riscv64-linux-gnu \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
    riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]


@@ -1,329 +0,0 @@
name: Benchmark
on:
  schedule:
    # Run at 03:00 clock UTC on Monday and Wednesday
    - cron: "0 03 * * 1,3"
  pull_request:
    paths:
      - '.github/workflows/benchmark.yml'
  workflow_dispatch:
env:
  CARGO_TERM_COLOR: always
jobs:
  contrib-build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Golang
        uses: actions/setup-go@v5
        with:
          go-version-file: 'go.work'
          cache-dependency-path: "**/*.sum"
      - name: Build Contrib
        run: |
          make -e DOCKER=false nydusify-release
      - name: Upload Nydusify
        uses: actions/upload-artifact@v4
        with:
          name: nydusify-artifact
          path: contrib/nydusify/cmd/nydusify
  nydus-build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
        with:
          cache-on-failure: true
          shared-key: Linux-cargo-amd64
      - uses: dsherret/rust-toolchain-file@v1
      - name: Build Nydus
        run: |
          make release
      - name: Upload Nydus Binaries
        uses: actions/upload-artifact@v4
        with:
          name: nydus-artifact
          path: |
            target/release/nydus-image
            target/release/nydusd
  benchmark-description:
    runs-on: ubuntu-latest
    steps:
      - name: Description
        run: |
          echo "## Benchmark Environment" > $GITHUB_STEP_SUMMARY
          echo "| operating system | cpu | memory " >> $GITHUB_STEP_SUMMARY
          echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY
          echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY
  benchmark-oci:
    runs-on: ubuntu-latest
    needs: [contrib-build, nydus-build]
    strategy:
      matrix:
        include:
          - image: wordpress
            tag: 6.1.1
          - image: node
            tag: 19.8
          - image: python
            tag: 3.10.7
          - image: golang
            tag: 1.19.3
          - image: ruby
            tag: 3.1.3
          - image: amazoncorretto
            tag: 8-al2022-jdk
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: target/release
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: contrib/nydusify/cmd
      - name: Prepare Environment
        run: |
          sudo bash misc/prepare.sh
      - name: BenchMark Test
        run: |
          export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
          export BENCHMARK_MODE=oci
          export BENCHMARK_METRIC_FILE=${{ matrix.image }}-oci.json
          export SNAPSHOTTER=overlayfs
          sudo -E make smoke-benchmark
      - name: Save BenchMark Result
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-oci-${{ matrix.image }}
          path: smoke/${{ matrix.image }}-oci.json
  benchmark-fsversion-v5:
    runs-on: ubuntu-latest
    needs: [contrib-build, nydus-build]
    strategy:
      matrix:
        include:
          - image: wordpress
            tag: 6.1.1
          - image: node
            tag: 19.8
          - image: python
            tag: 3.10.7
          - image: golang
            tag: 1.19.3
          - image: ruby
            tag: 3.1.3
          - image: amazoncorretto
            tag: 8-al2022-jdk
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: target/release
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: contrib/nydusify/cmd
      - name: Prepare Environment
        run: |
          sudo bash misc/prepare.sh
      - name: BenchMark Test
        run: |
          export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
          export BENCHMARK_MODE=fs-version-5
          export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json
          sudo -E make smoke-benchmark
      - name: Save BenchMark Result
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-fsversion-v5-${{ matrix.image }}
          path: smoke/${{ matrix.image }}-fsversion-v5.json
  benchmark-fsversion-v6:
    runs-on: ubuntu-latest
    needs: [contrib-build, nydus-build]
    strategy:
      matrix:
        include:
          - image: wordpress
            tag: 6.1.1
          - image: node
            tag: 19.8
          - image: python
            tag: 3.10.7
          - image: golang
            tag: 1.19.3
          - image: ruby
            tag: 3.1.3
          - image: amazoncorretto
            tag: 8-al2022-jdk
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: target/release
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: contrib/nydusify/cmd
      - name: Prepare Environment
        run: |
          sudo bash misc/prepare.sh
      - name: BenchMark Test
        run: |
          export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
          export BENCHMARK_MODE=fs-version-6
          export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json
          sudo -E make smoke-benchmark
      - name: Save BenchMark Result
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-fsversion-v6-${{ matrix.image }}
          path: smoke/${{ matrix.image }}-fsversion-v6.json
  benchmark-zran:
    runs-on: ubuntu-latest
    needs: [contrib-build, nydus-build]
    strategy:
      matrix:
        include:
          - image: wordpress
            tag: 6.1.1
          - image: node
            tag: 19.8
          - image: python
            tag: 3.10.7
          - image: golang
            tag: 1.19.3
          - image: ruby
            tag: 3.1.3
          - image: amazoncorretto
            tag: 8-al2022-jdk
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: target/release
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: contrib/nydusify/cmd
      - name: Prepare Environment
        run: |
          sudo bash misc/prepare.sh
      - name: BenchMark Test
        run: |
          export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
          export BENCHMARK_MODE=zran
          export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json
          sudo -E make smoke-benchmark
      - name: Save BenchMark Result
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-zran-${{ matrix.image }}
          path: smoke/${{ matrix.image }}-zran.json
  benchmark-result:
    runs-on: ubuntu-latest
    needs: [benchmark-oci, benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran]
    strategy:
      matrix:
        include:
          - image: wordpress
            tag: 6.1.1
          - image: node
            tag: 19.8
          - image: python
            tag: 3.10.7
          - image: golang
            tag: 1.19.3
          - image: ruby
            tag: 3.1.3
          - image: amazoncorretto
            tag: 8-al2022-jdk
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Download benchmark-oci
        uses: actions/download-artifact@v4
        with:
          name: benchmark-oci-${{ matrix.image }}
          path: benchmark-result
      - name: Download benchmark-fsversion-v5
        uses: actions/download-artifact@v4
        with:
          name: benchmark-fsversion-v5-${{ matrix.image }}
          path: benchmark-result
      - name: Download benchmark-fsversion-v6
        uses: actions/download-artifact@v4
        with:
          name: benchmark-fsversion-v6-${{ matrix.image }}
          path: benchmark-result
      - name: Download benchmark-zran
        uses: actions/download-artifact@v4
        with:
          name: benchmark-zran-${{ matrix.image }}
          path: benchmark-result
      - name: Benchmark Summary
        run: |
          case ${{matrix.image}} in
            "wordpress")
              echo "### workload: wait the 80 port response" > $GITHUB_STEP_SUMMARY
              ;;
            "node")
              echo "### workload: node index.js; wait the 80 port response" > $GITHUB_STEP_SUMMARY
              ;;
            "python")
              echo "### workload: python -c 'print(\"hello\")'" > $GITHUB_STEP_SUMMARY
              ;;
            "golang")
              echo "### workload: go run main.go" > $GITHUB_STEP_SUMMARY
              ;;
            "ruby")
              echo '### workload: ruby -e "puts \"hello\""' > $GITHUB_STEP_SUMMARY
              ;;
            "amazoncorretto")
              echo "### workload: javac Main.java; java Main" > $GITHUB_STEP_SUMMARY
              ;;
          esac
          cd benchmark-result
          metric_files=(
            "${{ matrix.image }}-oci.json"
            "${{ matrix.image }}-fsversion-v5.json"
            "${{ matrix.image }}-fsversion-v6.json"
            "${{ matrix.image }}-zran.json"
          )
          echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |convert-time(s)|" >> $GITHUB_STEP_SUMMARY
          echo "|:-------------|:-----------:|:----------:|:---------------:|:--------------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
          for file in "${metric_files[@]}"; do
            name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/')
            data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' "$file" | \
              awk '{ printf "%.2f | %.0f | %.2f | %.2f | %.2f", $1, $2, $3, $4, $5 }')
            echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY
          done

.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,30 @@
name: CI
on:
  push:
    branches: ["*"]
  pull_request:
    branches: [master, stable-1.x]
env:
  CARGO_TERM_COLOR: always
jobs:
  smoke:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Cache Nydus
        uses: Swatinem/rust-cache@v1
        with:
          target-dir: ./target-fusedev
          cache-on-failure: true
      - name: Cache Docker Layers
        uses: satackey/action-docker-layer-caching@v0.0.11
        # Ignore the failure of a step and avoid terminating the job.
        continue-on-error: true
      - name: Smoke Test
        run: |
          echo Cargo Home: $CARGO_HOME
          echo Running User: $(whoami)
          make docker-smoke


@@ -1,389 +0,0 @@
name: Convert & Check Images
on:
  schedule:
    # Do conversion every day at 00:03 clock UTC
    - cron: "3 0 * * *"
  workflow_dispatch:
env:
  CARGO_TERM_COLOR: always
  REGISTRY: ghcr.io
  ORGANIZATION: ${{ github.repository }}
  IMAGE_LIST_PATH: misc/top_images/image_list.txt
  FSCK_PATCH_PATH: misc/top_images/fsck.patch
jobs:
  nydusify-build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Golang
        uses: actions/setup-go@v5
        with:
          go-version-file: 'go.work'
          cache-dependency-path: "**/*.sum"
      - name: Build Contrib
        run: |
          curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0
          make -e DOCKER=false nydusify-release
      - name: Upload Nydusify
        uses: actions/upload-artifact@v4
        with:
          name: nydusify-artifact
          path: contrib/nydusify/cmd/nydusify
  nydus-build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
        with:
          cache-on-failure: true
          shared-key: Linux-cargo-amd64
      - uses: dsherret/rust-toolchain-file@v1
      - name: Build Nydus
        run: |
          make release
      - name: Upload Nydus Binaries
        uses: actions/upload-artifact@v4
        with:
          name: nydus-artifact
          path: |
            target/release/nydus-image
            target/release/nydusd
  fsck-erofs-build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Build fsck.erofs
        run: |
          sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev
          git clone https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
          cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
          sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/
      - name: Upload fsck.erofs
        uses: actions/upload-artifact@v4
        with:
          name: fsck-erofs-artifact
          path: |
            /usr/local/bin/fsck.erofs
  convert-zran:
    runs-on: ubuntu-latest
    needs: [nydusify-build, nydus-build, fsck-erofs-build]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Login ghcr registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: /usr/local/bin
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: /usr/local/bin
      - name: Download fsck.erofs
        uses: actions/download-artifact@v4
        with:
          name: fsck-erofs-artifact
          path: /usr/local/bin
      - name: Convert and check zran images
        run: |
          sudo chmod +x /usr/local/bin/nydus*
          sudo chmod +x /usr/local/bin/fsck.erofs
          sudo docker run -d --restart=always -p 5000:5000 registry
          sudo mkdir convert-zran
          for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
            echo "converting $I:latest to $I:nydus-nightly-oci-ref"
            ghcr_repo=${{ env.REGISTRY }}/${{ env.ORGANIZATION }}
            # push oci image to ghcr/local for zran reference
            sudo docker pull $I:latest
            sudo docker tag $I:latest $ghcr_repo/$I
            sudo docker tag $I:latest localhost:5000/$I
            sudo DOCKER_CONFIG=$HOME/.docker docker push $ghcr_repo/$I
            sudo docker push localhost:5000/$I
            # for pre-built images
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --oci-ref \
              --source $ghcr_repo/$I \
              --target $ghcr_repo/$I:nydus-nightly-oci-ref \
              --platform linux/amd64,linux/arm64
            # use local registry for speed
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --oci-ref \
              --source localhost:5000/$I \
              --target localhost:5000/$I:nydus-nightly-oci-ref \
              --platform linux/amd64,linux/arm64 \
              --output-json convert-zran/${I}.json
            # check zran image and referenced oci image
            sudo rm -rf ./tmp
            sudo DOCKER_CONFIG=$HOME/.docker nydusify check \
              --source localhost:5000/$I \
              --target localhost:5000/$I:nydus-nightly-oci-ref
            sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
            sudo rm -rf ./output
          done
      - name: Save Nydusify Metric
        uses: actions/upload-artifact@v4
        with:
          name: convert-zran-metric
          path: convert-zran
  convert-native-v5:
    runs-on: ubuntu-latest
    needs: [nydusify-build, nydus-build]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Login ghcr registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: /usr/local/bin
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: /usr/local/bin
      - name: Convert and check RAFS v5 images
        run: |
          sudo chmod +x /usr/local/bin/nydus*
          sudo docker run -d --restart=always -p 5000:5000 registry
          sudo mkdir convert-native-v5
          for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
            echo "converting $I:latest to $I:nydus-nightly-v5"
            # for pre-built images
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --source $I:latest \
              --target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v5 \
              --fs-version 5 \
              --platform linux/amd64,linux/arm64
            # use local registry for speed
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --source $I:latest \
              --target localhost:5000/$I:nydus-nightly-v5 \
              --fs-version 5 \
              --platform linux/amd64,linux/arm64 \
              --output-json convert-native-v5/${I}.json
            sudo rm -rf ./tmp
            sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
              --target localhost:5000/$I:nydus-nightly-v5
          done
      - name: Save Nydusify Metric
        uses: actions/upload-artifact@v4
        with:
          name: convert-native-v5-metric
          path: convert-native-v5
  convert-native-v6:
    runs-on: ubuntu-latest
    needs: [nydusify-build, nydus-build, fsck-erofs-build]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Login ghcr registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: /usr/local/bin
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: /usr/local/bin
      - name: Download fsck.erofs
        uses: actions/download-artifact@v4
        with:
          name: fsck-erofs-artifact
          path: /usr/local/bin
      - name: Convert and check RAFS v6 images
        run: |
          sudo chmod +x /usr/local/bin/nydus*
          sudo chmod +x /usr/local/bin/fsck.erofs
          sudo docker run -d --restart=always -p 5000:5000 registry
          sudo mkdir convert-native-v6
          for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
            echo "converting $I:latest to $I:nydus-nightly-v6"
            # for pre-built images
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --source $I:latest \
              --target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6 \
              --fs-version 6 \
              --platform linux/amd64,linux/arm64
            # use local registry for speed
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --source $I:latest \
              --target localhost:5000/$I:nydus-nightly-v6 \
              --fs-version 6 \
              --platform linux/amd64,linux/arm64 \
              --output-json convert-native-v6/${I}.json
            sudo rm -rf ./tmp
            sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
              --target localhost:5000/$I:nydus-nightly-v6
            sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
            sudo rm -rf ./output
          done
      - name: Save Nydusify Metric
        uses: actions/upload-artifact@v4
        with:
          name: convert-native-v6-metric
          path: convert-native-v6
  convert-native-v6-batch:
    runs-on: ubuntu-latest
    needs: [nydusify-build, nydus-build, fsck-erofs-build]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Login ghcr registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Download Nydus
        uses: actions/download-artifact@v4
        with:
          name: nydus-artifact
          path: /usr/local/bin
      - name: Download Nydusify
        uses: actions/download-artifact@v4
        with:
          name: nydusify-artifact
          path: /usr/local/bin
      - name: Download fsck.erofs
        uses: actions/download-artifact@v4
        with:
          name: fsck-erofs-artifact
          path: /usr/local/bin
      - name: Convert and check RAFS v6 batch images
        run: |
          sudo chmod +x /usr/local/bin/nydus*
          sudo chmod +x /usr/local/bin/fsck.erofs
          sudo docker run -d --restart=always -p 5000:5000 registry
          sudo mkdir convert-native-v6-batch
          for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
            echo "converting $I:latest to $I:nydus-nightly-v6-batch"
            # for pre-built images
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --source $I:latest \
              --target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6-batch \
              --fs-version 6 \
              --batch-size 0x100000 \
              --platform linux/amd64,linux/arm64
            # use local registry for speed
            sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
              --source $I:latest \
              --target localhost:5000/$I:nydus-nightly-v6-batch \
              --fs-version 6 \
              --batch-size 0x100000 \
              --platform linux/amd64,linux/arm64 \
              --output-json convert-native-v6-batch/${I}.json
            sudo rm -rf ./tmp
            sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
              --target localhost:5000/$I:nydus-nightly-v6-batch
            sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
            sudo rm -rf ./output
          done
      - name: Save Nydusify Metric
        uses: actions/upload-artifact@v4
        with:
          name: convert-native-v6-batch-metric
          path: convert-native-v6-batch
  convert-metric:
    runs-on: ubuntu-latest
    needs: [convert-zran, convert-native-v5, convert-native-v6, convert-native-v6-batch]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Download Zran Metric
        uses: actions/download-artifact@v4
        with:
          name: convert-zran-metric
          path: convert-zran
      - name: Download V5 Metric
        uses: actions/download-artifact@v4
        with:
          name: convert-native-v5-metric
          path: convert-native-v5
      - name: Download V6 Metric
        uses: actions/download-artifact@v4
        with:
          name: convert-native-v6-metric
          path: convert-native-v6
      - name: Download V6 Batch Metric
        uses: actions/download-artifact@v4
        with:
          name: convert-native-v6-batch-metric
          path: convert-native-v6-batch
      - name: Summary
        run: |
          echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY
          echo "> Compare the size of OCI image and Nydus image."
          echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|oci/nydus-batch|" >> $GITHUB_STEP_SUMMARY
          echo "|:--------:|:------------:|:----------:|:----------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
          for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
            zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")")
            zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")")
            v5SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v5/${I}.json) / 1048576")")
            v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")")
            v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")")
            v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")")
            batchSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
            batchTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
            echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|${batchSourceImageSize}/${batchTargetImageSize}|" >> $GITHUB_STEP_SUMMARY
          done
          echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY
          echo "> Time elapsed to convert OCI image to Nydus image."
          echo "|image name|nydus-zran|nydus-v5|nydus-v6|nydus-batch|" >> $GITHUB_STEP_SUMMARY
          echo "|:---:|:--:|:-------:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
          for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
            zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")")
            v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")")
            v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")")
            batchConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6-batch/${I}.json) / 1000000")")
            echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|${batchConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
          done
      - uses: geekyeggo/delete-artifact@v2
        with:
          name: '*'


@@ -1,45 +0,0 @@
name: Miri Test
on:
  push:
    branches: ["**", "stable/**"]
    paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
  pull_request:
    branches: ["**", "stable/**"]
    paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
  schedule:
    # Run daily sanity check at 03:00 clock UTC
    - cron: "0 03 * * *"
  workflow_dispatch:
env:
  CARGO_TERM_COLOR: always
jobs:
  nydus-unit-test-with-miri:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
        with:
          cache-on-failure: true
          shared-key: Linux-cargo-amd64
          save-if: ${{ github.ref == 'refs/heads/master' }}
      - name: Install cargo nextest
        uses: taiki-e/install-action@nextest
      - name: Fscache Setup
        run: sudo bash misc/fscache/setup.sh
      - name: Install Miri
        run: |
          rustup toolchain install nightly --component miri
          rustup override set nightly
          cargo miri setup
      - name: Unit Test with Miri
        run: |
          CARGO_HOME=${HOME}/.cargo
          CARGO_BIN=$(which cargo)
          RUSTUP_BIN=$(which rustup)
          sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
          grep -C 2 'Undefined Behavior' miri-ut.log


@@ -1,325 +1,98 @@
name: Release
name: release
on:
push:
tags:
- v[0-9]+.[0-9]+.[0-9]+*
schedule:
# Run daily sanity check at 22:08 clock UTC
- cron: "8 22 * * *"
workflow_dispatch:
- v[0-9]+.[0-9]+.[0-9]+
env:
CARGO_TERM_COLOR: always
jobs:
nydus-linux:
build-nydus-rs:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: actions/cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build nydus-rs Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus-rs RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl .
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-linux-${{ matrix.arch }}
path: |
nydusd
~/.cargo/registry
~/.cargo/git
target-fusedev
target-virtiofs
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo
- name: Build nydus-rs
run: |
make docker-static
sudo mv target-fusedev/x86_64-unknown-linux-musl/release/nydusd nydusd-fusedev
sudo mv target-fusedev/x86_64-unknown-linux-musl/release/nydus-image .
sudo mv target-fusedev/x86_64-unknown-linux-musl/release/nydusctl .
sudo mv target-virtiofs/x86_64-unknown-linux-musl/release/nydusd nydusd-virtiofs
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) .
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: nydus-artifacts
path: |
nydusd-fusedev
nydusd-virtiofs
nydus-image
nydusctl
configs
nydus-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-darwin-${{ matrix.arch }}
path: |
nydusctl
nydusd
nydus-image
configs
contrib-linux:
build-contrib:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
env:
DOCKER: false
steps:
- uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
- uses: actions/checkout@v2
- name: cache go mod
uses: actions/cache@v2
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydus-snapshotter/go.sum', '**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/docker-nydus-graphdriver/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
- name: build contrib go components
run: |
make -e GOARCH=${{ matrix.arch }} contrib-release
make all-contrib-static-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/docker-nydus-graphdriver/bin/nydus_graphdriver .
sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
sudo mv contrib/nydus-snapshotter/bin/containerd-nydus-grpc .
- name: store-artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: nydus-artifacts-linux-${{ matrix.arch }}-contrib
name: nydus-artifacts
path: |
ctr-remote
nydus_graphdriver
nydusify
nydus-overlayfs
containerd-nydus-grpc
prepare-tarball-linux:
upload-artifacts:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
os: [linux]
needs: [nydus-linux, contrib-linux]
needs: [build-nydus-rs, build-contrib]
steps:
- uses: actions/checkout@v2
- name: install hub
run: |
HUB_VER=$(curl -s "https://api.github.com/repos/github/hub/releases/latest" | jq -r .tag_name | sed 's/^v//')
wget -q -O- https://github.com/github/hub/releases/download/v$HUB_VER/hub-linux-amd64-$HUB_VER.tgz | \
tar xz --strip-components=2 --wildcards '*/bin/hub'
sudo mv hub /usr/local/bin/hub
- name: download artifacts
uses: actions/download-artifact@v4
uses: actions/download-artifact@v2
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
name: nydus-artifacts
path: nydus-static
- name: prepare release tarball
- name: upload artifacts
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="nydus-static-$tag-${{ matrix.os }}-${{ matrix.arch }}.tgz"
tarball="nydus-static-$tag-x86_64.tgz"
chmod +x nydus-static/*
tar cf - nydus-static | gzip > ${tarball}
echo "tarball=${tarball}" >> $GITHUB_ENV
shasum="$tarball.sha256sum"
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
# use a separate job for darwin because the github action if: condition cannot handle && properly.
prepare-tarball-darwin:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64]
os: [darwin]
needs: [nydus-macos]
steps:
- name: download artifacts
uses: actions/download-artifact@v4
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static
- name: prepare release tarball
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="nydus-static-$tag-${{ matrix.os }}-${{ matrix.arch }}.tgz"
chmod +x nydus-static/*
tar cf - nydus-static | gzip > ${tarball}
echo "tarball=${tarball}" >> $GITHUB_ENV
shasum="$tarball.sha256sum"
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
create-release:
runs-on: ubuntu-latest
needs: [prepare-tarball-linux, prepare-tarball-darwin]
steps:
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-release-tarball-*
merge-multiple: true
path: nydus-tarball
- name: prepare release env
run: |
echo "tarballs<<EOF" >> $GITHUB_ENV
for I in $(ls nydus-tarball);do echo "nydus-tarball/${I}" >> $GITHUB_ENV; done
echo "EOF" >> $GITHUB_ENV
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
echo "tag=${tag}" >> $GITHUB_ENV
cat $GITHUB_ENV
- name: push release
if: github.event_name == 'push'
uses: softprops/action-gh-release@v1
with:
name: "Nydus Image Service ${{ env.tag }}"
body: |
Binaries download mirror (sync within a few hours): https://registry.npmmirror.com/binary.html?path=nydus/${{ env.tag }}/
generate_release_notes: true
files: |
${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true
echo "uploading ${tarball} for tag $tag ..."
GITHUB_TOKEN=${{ secrets.HUB_UPLOAD_TOKEN }} hub release create -m "Nydus Image Service $tag" -m "Nydus Image Service $tag release" -a "${tarball}" "$tag"


@@ -1,386 +0,0 @@
name: Smoke Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
make -e DOCKER=false GOARCH=${{ matrix.arch }} contrib-release
- name: Upload Nydusify
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
contrib-lint:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- path: contrib/nydusify
- path: contrib/nydus-overlayfs
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache: false
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
nydus-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
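# Extract the channel value (e.g. "1.75.0") from rust-toolchain.toml; the PCRE
# lookbehind keeps only the quoted version string.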
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build Nydus Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
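# Map Docker-style arch names to Rust target triples; `cross` then drives the
# matching musl/gnu cross toolchain inside a container.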
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
nydus-image
nydusd
nydusd-build-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
nydus-integration-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Docker Cache
uses: jpribyl/action-docker-layer-caching@v0.1.0
continue-on-error: true
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: |
target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Older Binaries
id: prepare-binaries
run: |
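# Fetch the oldest supported release (v0.1.0) and the latest stable tag so the
# integration suite can exercise cross-version compatibility.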
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
versions=(v0.1.0 ${NYDUS_STABLE_VERSION})
version_archs=(v0.1.0-x86_64 ${NYDUS_STABLE_VERSION}-linux-amd64)
for i in ${!versions[@]}; do
version=${versions[$i]}
version_arch=${version_archs[$i]}
wget -q https://github.com/dragonflyoss/nydus/releases/download/$version/nydus-static-$version_arch.tgz
sudo mkdir nydus-$version /usr/bin/nydus-$version
sudo tar xzf nydus-static-$version_arch.tgz -C nydus-$version
sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/
done
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Free Disk Space
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Integration Test
run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest
sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest
sudo bash misc/prepare.sh
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}"
versions=(v0.1.0 ${NYDUS_STABLE_VERSION} latest)
version_exports=(v0_1_0 ${NYDUS_STABLE_VERSION_EXPORT} latest)
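# Expose per-version binary paths as env vars (e.g. NYDUS_BUILDER_v0_1_0,
# NYDUS_NYDUSD_latest) which the smoke suite looks up by name.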
for i in ${!version_exports[@]}; do
version=${versions[$i]}
version_export=${version_exports[$i]}
export NYDUS_BUILDER_$version_export=/usr/bin/nydus-$version/nydus-image
export NYDUS_NYDUSD_$version_export=/usr/bin/nydus-$version/nydusd
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
sudo -E make smoke-only
nydus-unit-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Unit Test
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Unit Test
run: |
make -e DOCKER=false contrib-test
- name: Upload contrib coverage file
uses: actions/upload-artifact@v4
with:
name: contrib-test-coverage-artifact
path: |
contrib/nydusify/coverage.txt
nydus-unit-test-coverage:
runs-on: ubuntu-latest
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Generate code coverage
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
- name: Upload nydus coverage file
uses: actions/upload-artifact@v4
with:
name: nydus-test-coverage-artifact
path: |
codecov.json
upload-coverage-to-codecov:
runs-on: ubuntu-latest
needs: [contrib-unit-test-coverage, nydus-unit-test-coverage]
steps:
- uses: actions/checkout@v4
- name: Download nydus coverage file
uses: actions/download-artifact@v4
with:
name: nydus-test-coverage-artifact
- name: Download contrib coverage file
uses: actions/download-artifact@v4
with:
name: contrib-test-coverage-artifact
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
files: ./codecov.json,./coverage.txt
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
fail_ci_if_error: true
nydus-cargo-deny:
name: cargo-deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: EmbarkStudios/cargo-deny-action@v2
performance-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- mode: fs-version-5
- mode: fs-version-6
- mode: zran
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh
- name: Performance Test
run: |
export PERFORMANCE_TEST_MODE=${{ matrix.mode }}
sudo -E make smoke-performance
takeover-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh takeover_test
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover


@ -1,31 +0,0 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore

@ -1,14 +1,5 @@
**/target*
/target*
**/*.rs.bk
**/.vscode
/.vscode
.idea
.cargo
**/.pyc
__pycache__
.DS_Store
go.work.sum
dist/
nydus-static/
.goreleaser.yml
metadata.db
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a


@ -1,16 +0,0 @@
## CNCF Dragonfly Nydus Adopters
A non-exhaustive list of Nydus adopters is provided below.
Please kindly share your experience about Nydus with us and help us to improve Nydus ❤️.
**_[Alibaba Cloud](https://www.alibabacloud.com)_** - Aliyun serverless image pull time drops from 20 seconds to 0.8 seconds.
**_[Ant Group](https://www.antgroup.com)_** - Serving large-scale clusters with millions of container creations each day.
**_[ByteDance](https://www.bytedance.com)_** - Serving container image acceleration in Technical Infrastructure of ByteDance.
**_[KuaiShou](https://www.kuaishou.com)_** - Starting to deploy millions of containers with Dragonfly and Nydus.
**_[Yue Miao](https://www.laiyuemiao.com)_** - The startup time of microservices has been greatly improved, and network consumption has been reduced.
**_[CoreWeave](https://coreweave.com/)_** - Dramatically reduces the pull time of container images that embed machine learning models.

Cargo.lock

File diff suppressed because it is too large


@ -1,130 +1,70 @@
[package]
name = "nydus-rs"
# will be overridden by real git tag during cargo build
version = "0.0.0-git"
description = "Nydus Image Service"
version = "1.1.2"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
exclude = ["contrib/", "smoke/", "tests/"]
edition = "2021"
resolver = "2"
build = "build.rs"
edition = "2018"
[profile.release]
panic = "abort"
[[bin]]
name = "nydusctl"
path = "src/bin/nydusctl/main.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[[bin]]
name = "nydusd"
path = "src/bin/nydusd/main.rs"
[[bin]]
name = "nydus-image"
path = "src/bin/nydus-image/main.rs"
[lib]
name = "nydus"
path = "src/lib.rs"
[dependencies]
anyhow = "1"
clap = { version = "4.0.18", features = ["derive", "cargo"] }
flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.12.0"
hex = "0.4.3"
hyper = "0.14.11"
hyperlocal = "0.8.0"
lazy_static = "1"
libc = "0.2"
rlimit = "0.3.0"
log = "0.4.8"
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
nix = "0.24.0"
rlimit = "0.9.0"
rusqlite = { version = "0.30.0", features = ["bundled"] }
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
epoll = ">=4.0.1"
libc = "0.2"
vmm-sys-util = ">=0.8.0"
clap = "2.33"
flexi_logger = { version = "0.17" }
serde = { version = ">=1.0.27", features = ["serde_derive", "rc"] }
serde_json = "1.0.51"
tar = "0.4.40"
tokio = { version = "1.35.1", features = ["macros"] }
serde_with = { version = "1.6.0", features = ["macros"] }
sha2 = "0.9.1"
lazy_static = "1.4.0"
xattr = "0.2.2"
nix = "0.17"
anyhow = "1.0.35"
base64 = { version = ">=0.12.0" }
rust-fsm = "0.6.0"
chrono = "0.4.19"
openssl = { version = "0.10.35", features = ["vendored"] }
hyperlocal = "0.8.0"
tokio = { version = "1.9.0", features = ["macros"] }
hyper = "0.14.11"
# Build statically linked openssl library
openssl = { version = '0.10.72', features = ["vendored"] }
event-manager = "0.2.1"
vm-memory = { version = "0.6.0", features = ["backend-mmap"], optional = true }
fuse-backend-rs = { git = "https://github.com/cloud-hypervisor/fuse-backend-rs.git", rev = "afc7b69", optional = true }
vhost = { version = "0.2.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { git = "https://github.com/rust-vmm/vhost-user-backend", rev = "3242b37", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { git = "https://github.com/rust-vmm/vm-virtio", rev = "6013dd9", optional = true }
nydus-api = { version = "0.4.0", path = "api", features = [
"error-backtrace",
"handler",
] }
nydus-builder = { version = "0.2.0", path = "builder" }
nydus-rafs = { version = "0.4.0", path = "rafs" }
nydus-service = { version = "0.4.0", path = "service", features = [
"block-device",
] }
nydus-storage = { version = "0.7.0", path = "storage", features = [
"prefetch-rate-limit",
] }
nydus-utils = { version = "0.5.0", path = "utils" }
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
vhost-user-backend = { version = "0.15.0", optional = true }
virtio-bindings = { version = "0.1", features = [
"virtio-v5_0_0",
], optional = true }
virtio-queue = { version = "0.12.0", optional = true }
vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
nydus-api = { path = "api" }
nydus-app = { path = "app" }
nydus-error = "0.1"
nydus-utils = { path = "utils" }
rafs = { path = "rafs", features = ["backend-registry", "backend-oss"] }
storage = { path = "storage" }
[dev-dependencies]
xattr = "1.0.1"
vmm-sys-util = "0.12.1"
sendfd = "0.3.3"
vmm-sys-util = ">=0.8.0"
env_logger = "0.8.2"
[features]
default = [
"fuse-backend-rs/fusedev",
"backend-registry",
"backend-oss",
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
"vhost",
"vhost-user-backend",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vmm-sys-util",
]
block-nbd = ["nydus-service/block-nbd"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = [
"nydus-storage/backend-localdisk",
"nydus-storage/backend-localdisk-gpt",
]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
dedup = ["nydus-storage/dedup"]
fusedev = ["nydus-utils/fusedev", "fuse-backend-rs/fusedev"]
virtiofs = ["fuse-backend-rs/vhost-user-fs", "vm-memory", "vhost", "vhost-user-backend", "virtio-queue", "virtio-bindings"]
[workspace]
members = [
"api",
"builder",
"clib",
"rafs",
"storage",
"service",
"upgrade",
"utils",
]
members = ["api", "app", "error", "rafs", "storage", "utils"]


@ -1,2 +0,0 @@
[build]
pre-build = ["apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y cmake"]


@ -1,27 +0,0 @@
Copyright 2022 The Nydus Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,15 +0,0 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->

Makefile

@ -1,191 +1,188 @@
all: release
all-build: build contrib-build
all-release: release contrib-release
all-static-release: static-release docker-static contrib-release
all-install: install contrib-install
all-clean: clean contrib-clean
all: build
TEST_WORKDIR_PREFIX ?= "/tmp"
INSTALL_DIR_PREFIX ?= "/usr/local/bin"
DOCKER ?= "true"
CARGO ?= $(shell which cargo)
RUSTUP ?= $(shell which rustup)
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
SUDO = $(shell which sudo)
CARGO_COMMON ?=
EXCLUDE_PACKAGES =
UNAME_M := $(shell uname -m)
UNAME_S := $(shell uname -s)
STATIC_TARGET = $(UNAME_M)-unknown-linux-musl
ifeq ($(UNAME_S),Linux)
CARGO_COMMON += --features=virtiofs
ifeq ($(UNAME_M),ppc64le)
STATIC_TARGET = powerpc64le-unknown-linux-gnu
endif
ifeq ($(UNAME_M),riscv64)
STATIC_TARGET = riscv64gc-unknown-linux-gnu
endif
endif
ifeq ($(UNAME_S),Darwin)
EXCLUDE_PACKAGES += --exclude nydus-blobfs
ifeq ($(UNAME_M),amd64)
STATIC_TARGET = x86_64-apple-darwin
endif
ifeq ($(UNAME_M),arm64)
STATIC_TARGET = aarch64-apple-darwin
endif
endif
RUST_TARGET_STATIC ?= $(STATIC_TARGET)
NYDUSIFY_PATH = contrib/nydusify
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
ARCH := $(shell uname -p)
env_go_path := $(shell go env GOPATH 2> /dev/null)
go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
# Set the env DIND_CACHE_DIR to specify a cache directory for
# docker-in-docker container, used to cache data for docker pull,
# then mitigate the impact of docker hub rate limit, for example:
# env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
FUSEDEV_COMMON = --target-dir target-fusedev --features=fusedev --release
VIRIOFS_COMMON = --target-dir target-virtiofs --features=virtiofs --release
# Functions
# Func: build golang target in docker
# Args:
# $(1): The path where go build a golang project
# $(2): How to build the golang project
define build_golang
echo "Building target $@ by invoking: $(2)"
if [ $(DOCKER) = "true" ]; then \
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.21 $(2) ;\
else \
$(2) -C $(1); \
if [ $(DOCKER) = "true" ]; then
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.15 $(2)
else
$(2) -C $(1)
fi
endef
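# For example, $(call build_golang,contrib/nydusify,make release) runs `make release`
# for nydusify either inside a golang container (DOCKER=true) or natively.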
.PHONY: .release_version .format .musl_target .clean_libz_sys \
all all-build all-release all-static-release build release static-release
# Build nydus respecting different features
# $(1) is the specified feature. [fusedev, virtiofs]
define build_nydus
cargo build --features=$(1) --target-dir target-$(1) $(CARGO_BUILD_FLAGS)
endef
define static_check
# Cargo will skip checking if it is already checked
cargo clippy --features=$(1) --workspace --bins --tests --target-dir target-$(1) -- -Dclippy::all
endef
.PHONY: all .release_version .format .musl_target build release static-release fusedev-release virtiofs-release virtiofs fusedev
.release_version:
$(eval CARGO_BUILD_FLAGS += --release)
.format:
${CARGO} fmt -- --check
cargo fmt -- --check
.musl_target:
$(eval CARGO_BUILD_FLAGS += --target ${RUST_TARGET_STATIC})
# Workaround to clean up stale cache for libz-sys
.clean_libz_sys:
@${CARGO} clean --target ${RUST_TARGET_STATIC} -p libz-sys
@${CARGO} clean --target ${RUST_TARGET_STATIC} --release -p libz-sys
$(eval CARGO_BUILD_FLAGS += --target ${ARCH}-unknown-linux-musl)
# Targets that are exposed to developers and users.
build: .format
${CARGO} build $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# Cargo will skip checking if it is already checked
${CARGO} clippy --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --bins --tests -- -Dwarnings --allow clippy::unnecessary_cast --allow clippy::needless_borrow
build: .format fusedev virtiofs
release: .format .release_version fusedev virtiofs
static-release: .musl_target .format .release_version fusedev virtiofs
fusedev-release: .format .release_version fusedev
virtiofs-release: .format .release_version virtiofs
release: .format .release_version build
virtiofs:
# TODO: switch to --out-dir when it moves to stable
# For now we build with separate target directories
$(call build_nydus,$@,$@)
$(call static_check,$@,target-$@)
static-release: .clean_libz_sys .musl_target .format .release_version build
fusedev:
$(call build_nydus,$@,$@)
$(call static_check,$@,target-$@)
clean:
${CARGO} clean
PACKAGES = rafs storage
install: release
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
@sudo install -m 755 target/release/nydusd $(INSTALL_DIR_PREFIX)/nydusd
@sudo install -m 755 target/release/nydus-image $(INSTALL_DIR_PREFIX)/nydus-image
@sudo install -m 755 target/release/nydusctl $(INSTALL_DIR_PREFIX)/nydusctl
# If virtiofs test must be performed, only run binary part
# Use the same target to avoid re-compiling for different targets like gnu and musl
ut:
echo "Testing packages: ${PACKAGES}"
$(foreach var,$(PACKAGES),cargo test $(FUSEDEV_COMMON) -p $(var);)
RUST_BACKTRACE=1 cargo test $(FUSEDEV_COMMON) --bins -- --nocapture --test-threads=8
ifdef NYDUS_TEST_VIRTIOFS
# If virtiofs test must be performed, only run binary part since other packages are not affected by the virtiofs feature
# Use the same target to avoid re-compiling for different targets like gnu and musl
RUST_BACKTRACE=1 cargo test $(VIRIOFS_COMMON) --bin nydusd -- --nocapture --test-threads=8
endif
# unit test
ut: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --no-fail-fast --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
# you need to install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
ut-nextest: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
CARGO = $(shell which cargo)
SUDO = $(shell which sudo)
# install miri first from https://github.com/rust-lang/miri/
miri-ut-nextest: .release_version
MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
docker-static:
docker build -t nydus-rs-static --build-arg ARCH=${ARCH} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static
# install test dependencies
pre-coverage:
${CARGO} +stable install cargo-llvm-cov --locked
${RUSTUP} component add llvm-tools-preview
# Run smoke test including general integration tests and unit tests in container.
# Nydus binaries should already be prepared.
smoke: ut
# No need to involve `clippy check` here as build from target `virtiofs` or `fusedev` always does so.
# TODO: Put each test function into separated rs file.
$(SUDO) TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) $(CARGO) test --test '*' $(FUSEDEV_COMMON) -- --nocapture --test-threads=8
# print unit test coverage to console
coverage: pre-coverage
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
docker-nydus-smoke:
docker build -t nydus-smoke --build-arg ARCH=${ARCH} misc/nydus-smoke
docker run --rm --privileged ${CARGO_BUILD_GEARS} \
-e TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) \
-v $(TEST_WORKDIR_PREFIX) \
-v ${current_dir}:/nydus-rs \
nydus-smoke
# write unit test coverage to codecov.json, used for GitHub CI
coverage-codecov:
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
NYDUSIFY_PATH = contrib/nydusify
# TODO: Nydusify smoke will remain time-consuming for a while since it relies on musl nydusd and nydus-image,
# so musl compilation must be involved.
# And docker-in-docker deployment involves image building?
docker-nydusify-smoke: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestSmoke
smoke-only:
make -C smoke test
docker-nydusify-image-test: docker-static
$(call build_golang,make -C contrib/nydusify build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestDockerHubImage
smoke-performance:
make -C smoke test-performance
smoke-benchmark:
make -C smoke test-benchmark
smoke-takeover:
make -C smoke test-takeover
smoke: release smoke-only
contrib-build: nydusify nydus-overlayfs
contrib-release: nydusify-release nydus-overlayfs-release
contrib-test: nydusify-test nydus-overlayfs-test
contrib-lint: nydusify-lint nydus-overlayfs-lint
contrib-clean: nydusify-clean nydus-overlayfs-clean
contrib-install:
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
@sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify
docker-smoke: docker-nydus-smoke docker-nydusify-smoke nydus-snapshotter
nydusify:
$(call build_golang,${NYDUSIFY_PATH},make)
$(call build_golang,${NYDUSIFY_PATH},make build-smoke)
nydusify-release:
$(call build_golang,${NYDUSIFY_PATH},make release)
nydusify-static:
$(call build_golang,${NYDUSIFY_PATH},make static-release)
nydusify-test:
$(call build_golang,${NYDUSIFY_PATH},make test)
SNAPSHOTTER_PATH = contrib/nydus-snapshotter
nydus-snapshotter:
$(call build_golang,${SNAPSHOTTER_PATH},make static-release build test)
nydusify-clean:
$(call build_golang,${NYDUSIFY_PATH},make clean)
nydus-snapshotter-static:
$(call build_golang,${SNAPSHOTTER_PATH},make static-release)
nydusify-lint:
$(call build_golang,${NYDUSIFY_PATH},make lint)
CTR-REMOTE_PATH = contrib/ctr-remote
ctr-remote:
$(call build_golang,${CTR-REMOTE_PATH},make)
ctr-remote-static:
$(call build_golang,${CTR-REMOTE_PATH},make static-release)
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
nydus-overlayfs-release:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make release)
nydus-overlayfs-static:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make static-release)
nydus-overlayfs-test:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make test)
DOCKER-GRAPHDRIVER_PATH = contrib/docker-nydus-graphdriver
docker-nydus-graphdriver:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make)
nydus-overlayfs-clean:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean)
docker-nydus-graphdriver-static:
$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make static-release)
nydus-overlayfs-lint:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make lint)
# Run integration smoke test in docker-in-docker container. It requires some special settings,
# refer to `misc/example/README.md` for details.
all-static-release: docker-static all-contrib-static-release
docker-static:
docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static
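# The musl container mounts the source tree plus the host's cargo caches and ssh key
# (CARGO_BUILD_GEARS) so repeated static builds stay fast.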
all-contrib-static-release: nydusify-static nydus-snapshotter-static ctr-remote-static \
nydus-overlayfs-static docker-nydus-graphdriver-static
# https://www.gnu.org/software/make/manual/html_node/One-Shell.html
.ONESHELL:
docker-example: all-static-release
cp ${current_dir}/target-fusedev/${ARCH}-unknown-linux-musl/release/nydusd misc/example
cp ${current_dir}/target-fusedev/${ARCH}-unknown-linux-musl/release/nydus-image misc/example
cp contrib/nydusify/cmd/nydusify misc/example
cp contrib/nydus-snapshotter/bin/containerd-nydus-grpc misc/example
docker build -t nydus-rs-example misc/example
@cid=$(shell docker run --rm -t -d --privileged $(dind_cache_mount) nydus-rs-example)
@docker exec $$cid /run.sh
@EXIT_CODE=$$?
@docker rm -f $$cid
@exit $$EXIT_CODE

README.md

@ -1,82 +1,49 @@
[**[⬇️ Download]**](https://github.com/dragonflyoss/nydus/releases)
[**[📖 Website]**](https://nydus.dev/)
[**[☸ Quick Start (Kubernetes)]**](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md)
[**[🤓 Quick Start (nerdctl)]**](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md)
[**[❓ FAQs & Troubleshooting]**](https://github.com/dragonflyoss/nydus/wiki/FAQ)
# Nydus: Dragonfly Container Image Service
<p><img src="misc/logo.svg" width="170"></p>
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/nydus?style=flat)](https://github.com/dragonflyoss/nydus/releases)
[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
The nydus project implements a user space filesystem on top of a container image format that improves over the current OCI image specification, in terms of container launching speed, image space, and network bandwidth efficiency, as well as data integrity.
[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
[![Release Test Daily](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml?query=event%3Aschedule)
[![Benchmark](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml?query=event%3Aschedule)
[![Coverage](https://codecov.io/gh/dragonflyoss/nydus/branch/master/graph/badge.svg)](https://codecov.io/gh/dragonflyoss/nydus)
## Introduction
Nydus implements a content-addressable file system on the RAFS format, which enhances the current OCI image specification by improving container launch speed, image space and network bandwidth efficiency, and data integrity.
The following benchmarking results demonstrate that Nydus images significantly outperform OCI images in terms of container cold-startup elapsed time on Containerd, particularly as the OCI image size increases.
The following benchmarking result shows the performance improvement compared with the OCI image for the container cold-startup elapsed time on containerd. As the OCI image size increases, the container startup time when using a Nydus image remains very short.
![Container Cold Startup](./misc/perf.jpg)
## Principles
Nydus' key features include:
***Provide Fast, Secure And Easy Access to Data Distribution***
- Container images may be downloaded on demand in chunks to boost container startup
- Chunk level data de-duplication among layers in a single repository to reduce storage, transport and memory cost
- Flatten image metadata and data to remove all intermediate layers
- Deleted (whiteout) files in a certain layer aren't packed into the nydus image, therefore image size may be reduced
- E2E image data integrity check, so security issues like "Supply Chain Attack" can be detected and avoided at runtime
- Compatible with the OCI artifacts spec and distribution spec, so nydus image can be stored in a regular container registry
- Integrated with CNCF incubating project Dragonfly to distribute container images in P2P fashion and mitigate the pressure on container registries
- Different container image storage backends are supported. For example, Registry, NAS, Aliyun/OSS.
- Capable of prefetching data blocks before user IO hits them, thus reducing read latency
- Readonly FUSE file system with Linux overlayfs to provide full POSIX compatibility
- Records file access patterns at runtime, gathering access traces/logs by which abnormal user behaviors are easily caught
- Access-trace-based prefetch table
- User IO amplification to reduce the number of small requests to the storage backend.
- **Performance**: Second-level container startup speed, millisecond-level function computation code package loading speed.
- **Low Cost**: Written in the memory-safe language `Rust`; numerous optimizations help reduce memory, CPU, and network consumption.
- **Flexible**: Supports container runtimes such as [runC](https://github.com/opencontainers/runc) and [Kata](https://github.com/kata-containers), and provides [Confidential Containers](https://github.com/confidential-containers) and vulnerability scanning capabilities.
- **Security**: End-to-end data integrity check; Supply Chain Attacks can be detected and avoided at runtime.
Currently the repository includes following tools:
## Key features
| Tool | Description |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| nydusd                   | Linux FUSE user-space daemon, it processes all fuse messages from the host/guest kernel and parses the nydus container image to fulfill those requests |
| nydus-image | Convert a single layer of OCI format container image into a nydus format container image generating meta part file and data part file respectively |
| nydusify                 | It pulls an OCI image down and unpacks it, invokes `nydus-image` to convert the image, and then pushes the converted image back to the registry and data storage |
| containerd-nydus-grpc | Works as a `containerd` remote snapshotter to help setup container rootfs with nydus images |
| nydusctl                 | Nydusd CLI client, queries the daemon's working status/metrics and configures it |
| ctr-remote               | An enhanced `containerd` CLI tool enabling nydus support with `containerd` ctr |
| nydus-docker-graphdriver | Works as a `docker` remote graph driver to control how images and containers are stored and managed |
- **On-demand Load**: Container images/packages are downloaded on-demand in chunk unit to boost startup.
- **Chunk Deduplication**: Chunk level data de-duplication cross-layer or cross-image to reduce storage, transport, and memory cost.
- **Compatible with Ecosystem**: Storage backend support with Registry, OSS, NAS, Shared Disk, and [P2P service](https://d7y.io/). Compatible with the [OCI images](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-zran.md), and provide native [eStargz images](https://github.com/containerd/stargz-snapshotter) support.
- **Data Analyzability**: Record accesses, data layout optimization, prefetch, IO amplification, abnormal behavior detection.
- **POSIX Compatibility**: In-Kernel EROFS or FUSE filesystems together with overlayfs provide full POSIX compatibility
- **I/O optimization**: Use merged filesystem tree, data prefetching and User I/O amplification to reduce read latency and improve user I/O performance.
To try nydus image service:
## Ecosystem
### Nydus tools
1. Convert an original OCI image to nydus image and store it somewhere like Docker/Registry, NAS or Aliyun/OSS. This can be directly done by `nydusify`. Normal users don't have to get involved with `nydus-image`.
2. Get `nydus-snapshotter`(`containerd-nydus-grpc`) installed locally and configured properly. Or install `nydus-docker-graphdriver` plugin.
3. Operate containers in the usual approaches, for example `docker`, `nerdctl`, `CRI` and `ctr`.
| Tool | Description |
| ---------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [nydusd](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusd.md) | Nydus user-space daemon, it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Convert a single layer of OCI format container image into a nydus format container image generating meta part file and data part file respectively |
| [nydusify](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusify.md) | It pulls an OCI image down and unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), queries the daemon's working status/metrics and configures it |
| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |
| [nydus-overlayfs](https://github.com/dragonflyoss/nydus/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount while tweaking mount options a bit, so nydus prerequisites can be passed to VM-based runtimes |
| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve a local directory as a blob backend for nydusd |
## Build Binary
### Supported platforms
| Type | Platform | Description | Status |
| ------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ |
| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, Github GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage service | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support acceleration image conversion based on kinds of accelerator like Nydus and eStargz etc | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | ✅ |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
| Runtime | [Kubernetes](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md) | Run Nydus image using CRI interface | ✅ |
| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Nydus Snapshotter, a containerd remote plugin to run Nydus image | ✅ |
| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
| Runtime | [KataContainers](https://github.com/kata-containers/kata-containers/blob/main/docs/design/kata-nydus-design.md) | Run Nydus image in KataContainers as a native solution | ✅ |
| Runtime | [EROFS](https://www.kernel.org/doc/html/latest/filesystems/erofs.html) | Run Nydus image directly in-kernel EROFS for even greater performance improvement | ✅ |
## Build
### Build Binary
```shell
# build debug binary
make
@ -86,89 +53,54 @@ make release
make docker-static
```
### Build Nydus Image
## Build Nydus Image
Convert OCIv1 image to Nydus image: [Nydusify](./docs/nydusify.md), [Acceld](https://github.com/goharbor/acceleration-service) or [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md#build-nydus-image-using-nerdctl-image-convert).
Build Nydus image from directory source: [Nydus Image Builder](./docs/nydus-image.md).
Build Nydus image from Dockerfile directly: [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md).
Convert OCI image to Nydus image: [Nydusify](./docs/nydusify.md).
Build Nydus layer from various sources: [Nydus Image Builder](./docs/nydus-image.md).
## Nydus Snapshotter
#### Image prefetch optimization
To further reduce container startup time, a nydus image with a prefetch list can be built using the NRI plugin (containerd >=1.7): [Container Image Optimizer](https://github.com/containerd/nydus-snapshotter/blob/main/docs/optimize_nydus_image.md)
Nydus supports `containerd`. To run containers with nydus images and `containerd`, please build and install the nydus snapshotter. It is a `containerd` remote snapshotter and handles the nydus image format when necessary. When running without nydus images, it is identical to containerd's builtin overlayfs snapshotter.
## Run
### Quick Start
To build and setup nydus-snapshotter for containerd, please refer to [Nydus Snapshotter](./contrib/nydus-snapshotter/README.md)
For more details on how to lazily start a container with `nydus-snapshotter` and a nydus image on Kubernetes nodes, or locally with `nerdctl` rather than CRI, please refer to [Nydus Setup](./docs/containerd-env-setup.md)
## Run Nydusd Daemon
### Run Nydus Snapshotter
Nydus-snapshotter is a non-core sub-project of containerd.
Check out its code and tutorial from [Nydus-snapshotter repository](https://github.com/containerd/nydus-snapshotter).
It works as a `containerd` remote snapshotter to help set up container rootfs with nydus images, and handles the nydus image format when necessary. When running without nydus images, it is identical to containerd's builtin overlayfs snapshotter.
### Run Nydusd Daemon
Normally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` when a container rootfs is prepared.
Normally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` or `nydus-docker-graphdriver` when a container rootfs is prepared.
Run Nydusd Daemon to serve Nydus image: [Nydusd](./docs/nydusd.md).
### Run Nydus with in-kernel EROFS filesystem
## Docker graph driver support
In-kernel EROFS has been fully compatible with RAFS v6 image format since Linux 5.16. In other words, uncompressed RAFS v6 images can be mounted over block devices since then.
A Docker graph driver is also provided; it helps start containers from nydus images. For detailed instructions, please refer to
Since [Linux 5.19](https://lwn.net/Articles/896140), EROFS has added a new file-based caching (fscache) backend. In this way, compressed RAFS v6 images can be mounted directly with the fscache subsystem, even when such images are only partially available. `estargz` can be converted on the fly and mounted in this way too.
- [Nydus Graph Driver](./contrib/docker-nydus-graphdriver/README.md)
- [使用 docker 启动容器](./docs/chinese_docker_graph_driver_guide.md)
Guide to running Nydus with fscache: [Nydus-fscache](./docs/nydus-fscache.md)
## Learn Concepts and Commands
### Run Nydus with Dragonfly P2P system
Browse the documentation to learn more. Here are some topics you may be interested in:
Nydus is deeply integrated with the [Dragonfly](https://d7y.io/) P2P system, which can greatly reduce network latency and the single-point pressure on the registry server. Benchmarking results in the production environment demonstrate that using Dragonfly can reduce network latency by more than 80%. To understand the performance results and integration steps, please refer to the [nydus integration](https://d7y.io/docs/setup/integration/nydus).
If you want to deploy Dragonfly and Nydus at the same time through Helm, please refer to the **[Quick Start](https://github.com/dragonflyoss/helm-charts/blob/main/INSTALL.md)**.
### Run OCI image directly with Nydus
Nydus is able to generate a tiny artifact called a `nydus zran` from an existing OCI image in a short time. This artifact can be used to accelerate the container boot time without the need for a full image conversion. For more information, please see the [documentation](./docs/nydus-zran.md).
### Run with Docker(Moby)
Nydus provides a variety of methods to support running on docker(Moby), please refer to [Nydus Setup for Docker(Moby) Environment](./docs/docker-env-setup.md)
### Run with macOS
Nydus can also run with macfuse(a.k.a osxfuse). For more details please read [nydus with macOS](./docs/nydus_with_macos.md).
### Run eStargz image (with lazy pulling)
The containerd remote snapshotter plugin [nydus-snapshotter](https://github.com/containerd/nydus-snapshotter) can be used to run nydus images, or to run [eStargz](https://github.com/containerd/stargz-snapshotter) images directly by appending the `--enable-stargz` command line option.
In the future, `zstd::chunked` can work in this way as well.
### Run Nydus Service
To use the key features of nydus natively in your project without deliberately preparing and invoking `nydusd`, [nydus-service](./service/README.md) helps to reuse the core services of nydus.
## Documentation
Please visit [**Wiki**](https://github.com/dragonflyoss/nydus/wiki), or [**docs**](./docs)
There is also a very nice [Devin](https://devin.ai/)-generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
- [A Nydus Tutorial for Beginners](./docs/tutorial.md)
- Our talk on Open Infra Summit 2020: [Toward Next Generation Container Image](https://drive.google.com/file/d/1LRfLUkNxShxxWU7SKjc_50U0N9ZnGIdV/view)
- [Nydus Design Doc](./docs/nydus-design.md)
## Community
Nydus aims to form a **vendor-neutral open-source** image distribution solution for all communities.
Questions, bug reports, technical discussions, feature requests, and contributions are always welcome!
You are welcome to share your use cases and contribute to the Nydus project.
You can reach the community via Dingtalk and Slack.
We're always pleased to hear about your use cases.
Feel free to reach us via Slack or Dingtalk.
Bug reports, feature requests, technical discussions, and cooperation are welcome and expected!
- **Slack:** [Nydus Workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)
- Slack
- **Twitter:** [@dragonfly_oss](https://twitter.com/dragonfly_oss)
Join our Slack [workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)
- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11)
- Dingtalk
Join the nydus-devel group by clicking the [URL](https://qr.dingtalk.com/action/joingroup?code=v1,k1,YfGzhaTOnpm10Bf+/ohz4WcuDEIe9nTIjo+MPuIgRGQ=&_dt_no_comment=1&origin=11) from your phone.
You can also find our chat group by searching for the number _34971767_ or scanning the QR code below.
<img src="./misc/dingtalk.jpg" width="250" height="300"/>


@ -1,31 +1,20 @@
[package]
name = "nydus-api"
version = "0.4.0"
description = "APIs for Nydus Image Service"
version = "0.1.0"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
libc = "0.2"
lazy_static = "1.4.0"
log = "0.4.8"
serde_json = "1.0.53"
toml = "0.5"
thiserror = "1.0.30"
backtrace = { version = "0.3", optional = true }
dbs-uhttp = { version = "0.3.0", optional = true }
http = { version = "0.2.1", optional = true }
lazy_static = { version = "1.4.0", optional = true }
mio = { version = "0.8", features = ["os-poll", "os-ext"], optional = true }
serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
url = { version = "2.1.1", optional = true }
[dev-dependencies]
vmm-sys-util = { version = "0.12.1" }
[features]
error-backtrace = ["backtrace"]
handler = ["dbs-uhttp", "http", "lazy_static", "mio", "url"]
epoll = ">=4.0.1"
micro_http = { git = "https://github.com/cloud-hypervisor/micro-http.git", branch = "master" }
serde = { version = ">=1.0.27", features = ["rc"] }
serde_derive = ">=1.0.27"
serde_json = ">=1.0.9"
vmm-sys-util = ">=0.8.0"
url = "2.1.1"
http = "0.2.1"
nydus-utils = { path = "../utils" }


@ -1 +0,0 @@
../LICENSE-APACHE


@ -1 +0,0 @@
../LICENSE-BSD-3-Clause


@ -1,17 +0,0 @@
# nydus-api
The `nydus-api` crate defines the Nydus Image Service APIs and related data structures.
## Support
**Platforms**:
- x86_64
- aarch64
**Operating Systems**:
- Linux
- Darwin
## License
This code is licensed under [Apache-2.0](LICENSE-APACHE) or [BSD-3-Clause](LICENSE-BSD-3-Clause).


@ -1,160 +0,0 @@
openapi: "3.0.2"
info:
title: Nydus Service and Management APIs, version 2.
description:
This is the second version of RESTful Nydus service and management APIs to manage the global daemon and
individual services.
license:
name: Apache 2.0
url: http://www.apache.org/licenses/LICENSE-2.0.html
version: "0.1"
servers:
- url: https://localhost/v2
paths:
/daemon:
summary: Returns general information about the nydus daemon
get:
operationId: describeDaemon
responses:
"200":
description: Daemon information
content:
application/json:
schema:
$ref: "#/components/schemas/DaemonInfo"
"500":
description: Internal Server Error
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
put:
operationId: configureDaemon
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/DaemonConf"
responses:
"204":
description: "Successfully configure the daemon!"
"500":
description: "Can't configure the daemon!"
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
/blobs:
summary: Manage cached blob objects
####################################################################
get:
operationId: getBlobObject
responses:
"200":
description: Blob objects
content:
application/json:
schema:
$ref: "#/components/schemas/BlobObjectList"
"404":
description: "Blob object not found"
"500":
description: "Internal Server Error"
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
put:
operationId: createBlobObject
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/BlobObjectConf"
responses:
"204":
description: "Successfully created the blob object!"
"500":
description: "Can't create the blob object!"
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
delete:
operationId: deleteBlobObject
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/BlobObjectParam"
responses:
"204":
description: "Successfully deleted the blob object!"
"500":
description: "Can't delete the blob object!"
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
operationId: deleteBlobFile
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/BlobId"
responses:
"204":
description: "Successfully deleted the blob file!"
"500":
description: "Can't delete the blob file!"
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
################################################################
components:
schemas:
DaemonInfo:
type: object
properties:
version:
type: object
properties:
package_ver:
type: string
git_commit:
type: string
build_time:
type: string
profile:
type: string
rustc:
type: string
id:
type: string
supervisor:
type: string
state:
type: string
enum:
- INIT
- RUNNING
- UPGRADING
- INTERRUPTED
- STOPPED
- UNKNOWN
DaemonConf:
type: object
properties:
log_level:
type: string
enum: [trace, debug, info, warn, error]
ErrorMsg:
type: object
properties:
code:
description: Nydus defined error code indicating certain error type
type: string
message:
description: Details about the error
type: string


@ -348,8 +348,10 @@ components:
description: usually to be the metadata source
type: string
prefetch_files:
description: local file path which recorded files/directories to be prefetched and separated by newlines
type: string
description: files that need to be prefetched
type: array
items:
type: string
config:
description: inline request, use to configure fs backend.
type: string
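# Illustrative only (not part of the spec): one side of this hunk models
# prefetch_files as an inline array, e.g.
#   prefetch_files: ["/etc/hosts", "/usr/bin/bash"]
# while the other side points at a local file holding a newline-separated
# list of the paths to prefetch.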

File diff suppressed because it is too large


@ -1,252 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fmt::Debug;
/// Display error messages with line number, file path and optional backtrace.
pub fn make_error(
err: std::io::Error,
_raw: impl Debug,
_file: &str,
_line: u32,
) -> std::io::Error {
#[cfg(feature = "error-backtrace")]
{
if let Ok(val) = std::env::var("RUST_BACKTRACE") {
if val.trim() != "0" {
error!("Stack:\n{:?}", backtrace::Backtrace::new());
error!("Error:\n\t{:?}\n\tat {}:{}", _raw, _file, _line);
return err;
}
}
error!(
"Error:\n\t{:?}\n\tat {}:{}\n\tnote: enable `RUST_BACKTRACE=1` env to display a backtrace",
_raw, _file, _line
);
}
err
}
/// Define error macros like `x!()` or `x!(err)`.
/// Note: The `x!()` macro will convert any origin error (Os, Simple, Custom) to Custom error.
macro_rules! define_error_macro {
($fn:ident, $err:expr) => {
#[macro_export]
macro_rules! $fn {
() => {
std::io::Error::new($err.kind(), format!("{}: {}:{}", $err, file!(), line!()))
};
($raw:expr) => {
$crate::error::make_error($err, &$raw, file!(), line!())
};
}
};
}
/// Define error macro for libc error codes
macro_rules! define_libc_error_macro {
($fn:ident, $code:ident) => {
define_error_macro!($fn, std::io::Error::from_raw_os_error(libc::$code));
};
}
// TODO: Add format string support
// Add more libc error macro here if necessary
define_libc_error_macro!(einval, EINVAL);
define_libc_error_macro!(enoent, ENOENT);
define_libc_error_macro!(ebadf, EBADF);
define_libc_error_macro!(eacces, EACCES);
define_libc_error_macro!(enotdir, ENOTDIR);
define_libc_error_macro!(eisdir, EISDIR);
define_libc_error_macro!(ealready, EALREADY);
define_libc_error_macro!(enosys, ENOSYS);
define_libc_error_macro!(epipe, EPIPE);
define_libc_error_macro!(eio, EIO);
/// Return EINVAL error with formatted error message.
#[macro_export]
macro_rules! bail_einval {
($($arg:tt)*) => {{
return Err(einval!(format!($($arg)*)))
}}
}
/// Return EIO error with formatted error message.
#[macro_export]
macro_rules! bail_eio {
($($arg:tt)*) => {{
return Err(eio!(format!($($arg)*)))
}}
}
// Add more custom error macro here if necessary
define_error_macro!(last_error, std::io::Error::last_os_error());
define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
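// Illustrative sketch (not part of this file): how a caller might use the
// macros above; `open_bootstrap` is a hypothetical helper name.
#[allow(dead_code)]
fn open_bootstrap(path: &str) -> std::io::Result<std::fs::File> {
    if path.is_empty() {
        // `einval!(msg)` routes through `make_error`, logging file/line info.
        return Err(einval!("empty bootstrap path"));
    }
    // `eio!(e)` logs the original error with location context and returns EIO.
    std::fs::File::open(path).map_err(|e| eio!(e))
}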
#[cfg(test)]
mod tests {
use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 {
return Err(einval!());
}
Ok(())
}
#[test]
fn test_einval() {
assert_eq!(
check_size(0x2000).unwrap_err().kind(),
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
);
}
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
}


@@ -1,271 +1,262 @@
// Copyright 2022 Alibaba Cloud. All rights reserved.
// Copyright 2020 Ant Group. All rights reserved.
// Copyright © 2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
use std::io;
use std::sync::mpsc::{RecvError, SendError};
use std::collections::HashMap;
use std::io::Result;
use std::path::PathBuf;
use std::sync::mpsc::{Receiver, Sender};
use std::thread;
use std::time::SystemTime;
use serde::Deserialize;
use serde_json::Error as SerdeError;
use thiserror::Error;
use std::os::unix::io::AsRawFd;
use crate::BlobCacheEntry;
use http::uri::Uri;
use url::Url;
/// Errors related to Metrics.
#[derive(Error, Debug)]
pub enum MetricsError {
#[error("no counter found for the metric")]
NoCounter,
#[error("failed to serialize metric: {0:?}")]
Serialize(#[source] SerdeError),
use micro_http::{HttpServer, MediaType, Request, Response, StatusCode};
use vmm_sys_util::eventfd::EventFd;
use crate::http_endpoint::{
error_response, ApiError, ApiRequest, ApiResponse, EventsHandler, ExitHandler, FsBackendInfo,
HttpError, HttpResult, InfoHandler, MetricsBackendHandler, MetricsBlobcacheHandler,
MetricsFilesHandler, MetricsHandler, MetricsInflightHandler, MetricsPatternHandler,
MountHandler, SendFuseFdHandler, TakeoverHandler,
};
const HTTP_ROOT: &str = "/api/v1";
/// An HTTP endpoint handler interface
pub trait EndpointHandler: Sync + Send {
/// Handles an HTTP request.
/// After parsing the request, the handler may send an associated API request down to
/// the Nydusd API server, e.g. to get the current working status. The handler then
/// blocks waiting for an answer from the API server and translates it into an
/// HTTP response.
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult;
}
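// Illustrative sketch (not part of this file): a hypothetical handler showing
// the kicker round trip; `PingHandler` is an assumption for illustration. Real
// handlers match on method/body and go through `convert_to_response` for
// accurate status codes (see the handlers in http_endpoint.rs).
struct PingHandler {}

impl EndpointHandler for PingHandler {
    fn handle_request(
        &self,
        _req: &Request,
        kicker: &dyn Fn(ApiRequest) -> ApiResponse,
    ) -> HttpResult {
        // Forward a daemon-info request to the API server and map any failure
        // onto an HTTP error kind.
        kicker(ApiRequest::DaemonInfo)
            .map(|_| Response::new(micro_http::Version::Http11, StatusCode::OK))
            .map_err(HttpError::Info)
    }
}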
/// Mount a filesystem.
#[derive(Clone, Deserialize, Debug)]
pub struct ApiMountCmd {
/// Path to source of the filesystem.
pub source: String,
/// Type of filesystem.
#[serde(default)]
pub fs_type: String,
/// Configuration for the filesystem.
pub config: String,
/// List of files to prefetch.
#[serde(default)]
pub prefetch_files: Option<Vec<String>>,
/// An HTTP routes structure.
pub struct HttpRoutes {
/// routes is a hash table mapping endpoint URIs to their endpoint handlers.
pub routes: HashMap<String, Box<dyn EndpointHandler + Sync + Send>>,
}
/// Umount a mounted filesystem.
#[derive(Clone, Deserialize, Debug)]
pub struct ApiUmountCmd {
/// Path of mountpoint.
pub mountpoint: String,
macro_rules! endpoint {
($path:expr) => {
format!("{}{}", HTTP_ROOT, $path)
};
}
/// Set/update daemon configuration.
#[derive(Clone, Deserialize, Debug)]
pub struct DaemonConf {
/// Logging level: Off, Error, Warn, Info, Debug, Trace.
pub log_level: String,
lazy_static! {
/// HTTP_ROUTES contain all the cloud-hypervisor HTTP routes.
pub static ref HTTP_ROUTES: HttpRoutes = {
let mut r = HttpRoutes {
routes: HashMap::new(),
};
r.routes.insert(endpoint!("/daemon"), Box::new(InfoHandler{}));
r.routes.insert(endpoint!("/daemon/events"), Box::new(EventsHandler{}));
r.routes.insert(endpoint!("/daemon/backend"), Box::new(FsBackendInfo{}));
r.routes.insert(endpoint!("/daemon/exit"), Box::new(ExitHandler{}));
r.routes.insert(endpoint!("/daemon/fuse/sendfd"), Box::new(SendFuseFdHandler{}));
r.routes.insert(endpoint!("/daemon/fuse/takeover"), Box::new(TakeoverHandler{}));
r.routes.insert(endpoint!("/mount"), Box::new(MountHandler{}));
r.routes.insert(endpoint!("/metrics"), Box::new(MetricsHandler{}));
r.routes.insert(endpoint!("/metrics/files"), Box::new(MetricsFilesHandler{}));
r.routes.insert(endpoint!("/metrics/pattern"), Box::new(MetricsPatternHandler{}));
r.routes.insert(endpoint!("/metrics/backend"), Box::new(MetricsBackendHandler{}));
r.routes.insert(endpoint!("/metrics/blobcache"), Box::new(MetricsBlobcacheHandler{}));
r.routes.insert(endpoint!("/metrics/inflight"), Box::new(MetricsInflightHandler{}));
r
};
}
/// Identifier for cached blob objects.
///
/// Domains are used to control the blob sharing scope. All blobs associated with the same domain
/// will be shared/reused, but blobs associated with different domains are isolated.
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
pub struct BlobCacheObjectId {
/// Domain identifier for the object.
#[serde(default)]
pub domain_id: String,
/// Blob identifier for the object.
#[serde(default)]
pub blob_id: String,
fn kick_api_server(
api_evt: &EventFd,
to_api: &Sender<ApiRequest>,
from_api: &Receiver<ApiResponse>,
request: ApiRequest,
) -> ApiResponse {
to_api.send(request).map_err(ApiError::RequestSend)?;
api_evt.write(1).map_err(ApiError::EventFdWrite)?;
from_api.recv().map_err(ApiError::ResponseRecv)?
}
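// Note the handshake order above: the request is queued first, then the
// eventfd is written so the API server wakes up with work already pending,
// and only then does the HTTP thread block waiting on the response channel.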
#[derive(Debug)]
pub enum ApiRequest {
/// Set daemon configuration.
ConfigureDaemon(DaemonConf),
/// Get daemon information.
GetDaemonInfo,
/// Get daemon global events.
GetEvents,
/// Stop the daemon.
Exit,
/// Start the daemon.
Start,
/// Send fuse fd to new daemon.
SendFuseFd,
/// Take over fuse fd from old daemon instance.
TakeoverFuseFd,
// Example:
// <-- GET /
// --> GET / 200 835ms 746b
//
// Filesystem Related
/// Mount a filesystem.
Mount(String, ApiMountCmd),
/// Remount a filesystem.
Remount(String, ApiMountCmd),
/// Unmount a filesystem.
Umount(String),
/// Get storage backend metrics.
ExportBackendMetrics(Option<String>),
/// Get blob cache metrics.
ExportBlobcacheMetrics(Option<String>),
// Nydus API v1 requests
/// Get filesystem global metrics.
ExportFsGlobalMetrics(Option<String>),
/// Get filesystem access pattern log.
ExportFsAccessPatterns(Option<String>),
/// Get filesystem backend information.
ExportFsBackendInfo(String),
/// Get filesystem file metrics.
ExportFsFilesMetrics(Option<String>, bool),
/// Get information about filesystem inflight requests.
ExportFsInflightMetrics,
// Nydus API v2
/// Get daemon information excluding filesystem backends.
GetDaemonInfoV2,
/// Create a blob cache entry
CreateBlobObject(Box<BlobCacheEntry>),
/// Get information about blob cache entries
GetBlobObject(BlobCacheObjectId),
/// Delete a blob cache entry
DeleteBlobObject(BlobCacheObjectId),
/// Delete a blob cache file
DeleteBlobFile(String),
fn trace_api_begin(request: &micro_http::Request) {
info!("<--- {:?} {:?}", request.method(), request.uri());
}
fn trace_api_end(
response: &micro_http::Response,
method: micro_http::Method,
recv_time: SystemTime,
) {
let elapse = SystemTime::now().duration_since(recv_time);
info!(
"---> {:?} Status Code: {:?}, Elapse: {:?}, Body Size: {:?}",
method,
response.status(),
elapse,
response.content_length()
);
}
/// Kinds for daemon related error messages.
#[derive(Debug)]
pub enum DaemonErrorKind {
/// Service not ready yet.
NotReady,
/// Generic errors.
Other(String),
/// Message serialization/deserialization related errors.
Serde(SerdeError),
/// Unexpected event type.
UnexpectedEvent(String),
/// Can't upgrade the daemon.
UpgradeManager(String),
/// Unsupported requests.
Unsupported,
fn handle_http_request(
request: &Request,
api_notifier: &EventFd,
to_api: &Sender<ApiRequest>,
from_api: &Receiver<ApiResponse>,
) -> Response {
trace_api_begin(request);
let begin_time = SystemTime::now();
// Micro-http should ensure that the request path is legal.
let uri_parsed = request.uri().get_abs_path().parse::<Uri>();
let mut response = match uri_parsed {
Ok(uri) => match HTTP_ROUTES.routes.get(uri.path()) {
Some(route) => route
.handle_request(&request, &|r| {
kick_api_server(api_notifier, to_api, from_api, r)
})
.unwrap_or_else(|err| error_response(err, StatusCode::BadRequest)),
None => error_response(HttpError::NoRoute, StatusCode::NotFound),
},
Err(e) => {
error!("URI can't be parsed, {}", e);
error_response(HttpError::BadRequest, StatusCode::BadRequest)
}
};
response.set_server("Nydus API");
response.set_content_type(MediaType::ApplicationJson);
trace_api_end(&response, request.method(), begin_time);
response
}
/// Kinds for metrics related error messages.
#[derive(Debug)]
pub enum MetricsErrorKind {
/// Generic daemon related errors.
Daemon(DaemonErrorKind),
/// Errors related to metrics implementation.
Stats(MetricsError),
}
#[derive(Error, Debug)]
#[allow(clippy::large_enum_variant)]
pub enum ApiError {
#[error("daemon internal error: {0:?}")]
DaemonAbnormal(DaemonErrorKind),
#[error("daemon events error: {0}")]
Events(String),
#[error("metrics error: {0:?}")]
Metrics(MetricsErrorKind),
#[error("failed to mount filesystem: {0:?}")]
MountFilesystem(DaemonErrorKind),
#[error("failed to send request to the API service: {0:?}")]
RequestSend(#[from] SendError<Option<ApiRequest>>),
#[error("failed to parse response payload type")]
ResponsePayloadType,
#[error("failed to receive response from the API service: {0:?}")]
ResponseRecv(#[from] RecvError),
#[error("failed to wake up the daemon: {0:?}")]
Wakeup(#[source] io::Error),
}
/// Specialized `std::result::Result` for API replies.
pub type ApiResult<T> = std::result::Result<T, ApiError>;
#[derive(Serialize)]
pub enum ApiResponsePayload {
/// Filesystem backend metrics.
BackendMetrics(String),
/// Blobcache metrics.
BlobcacheMetrics(String),
/// Daemon version, configuration and status information in json.
DaemonInfo(String),
/// No data is sent on the channel.
Empty,
/// Global error events.
Events(String),
/// Filesystem global metrics, v1.
FsGlobalMetrics(String),
/// Filesystem per-file metrics, v1.
FsFilesMetrics(String),
/// Filesystem access pattern trace log, v1.
FsFilesPatterns(String),
// Filesystem Backend Information, v1.
FsBackendInfo(String),
// Filesystem Inflight Requests, v1.
FsInflightMetrics(String),
/// List of blob objects, v2
BlobObjectList(String),
}
/// Specialized version of [`std::result::Result`] for value returned by backend services.
pub type ApiResponse = std::result::Result<ApiResponsePayload, ApiError>;
/// HTTP error messages sent back to the clients.
///
/// The `HttpError` object will be sent back to client with `format!("{:?}", http_error)`.
/// So unfortunately it implicitly becomes part of the API; please keep it stable.
#[derive(Debug)]
pub enum HttpError {
// Daemon common related errors
/// Invalid HTTP request
BadRequest,
/// Failed to configure the daemon.
Configure(ApiError),
/// Failed to query information about daemon.
DaemonInfo(ApiError),
/// Failed to query global events.
Events(ApiError),
/// No handler registered for HTTP request URI
NoRoute,
/// Failed to parse HTTP request message body
ParseBody(SerdeError),
/// Query parameter is missing from the HTTP request.
QueryString(String),
/// Failed to mount filesystem.
Mount(ApiError),
/// Failed to remount filesystem.
Upgrade(ApiError),
// Metrics related errors
/// Failed to get backend metrics.
BackendMetrics(ApiError),
/// Failed to get blobcache metrics.
BlobcacheMetrics(ApiError),
// Filesystem related errors (v1)
/// Failed to get filesystem backend information
FsBackendInfo(ApiError),
/// Failed to get filesystem per-file metrics.
FsFilesMetrics(ApiError),
/// Failed to get global metrics.
GlobalMetrics(ApiError),
/// Failed to get information about inflight request
InflightMetrics(ApiError),
/// Failed to get filesystem file access trace.
Pattern(ApiError),
// Blob cache management related errors (v2)
/// Failed to create blob object
CreateBlobObject(ApiError),
/// Failed to delete blob object
DeleteBlobObject(ApiError),
/// Failed to delete blob file
DeleteBlobFile(ApiError),
/// Failed to list existing blob objects
GetBlobObjects(ApiError),
}
#[derive(Serialize, Debug)]
pub(crate) struct ErrorMessage {
pub code: String,
pub message: String,
}
impl From<ErrorMessage> for Vec<u8> {
fn from(msg: ErrorMessage) -> Self {
// Safe to unwrap since `ErrorMessage` must succeed in serialization
serde_json::to_vec(&msg).unwrap()
pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// Splicing req.uri with an "http:" prefix might look weird, but we depend on the
// `Url` crate to generate the query_pairs map, and it works on Url rather than Uri.
// It would be better to add query-part support to Micro-http in the future, but
// for now this is an easy way to obtain query parts from the uri.
let http_prefix: String = String::from("http:");
let url = Url::parse(&(http_prefix + req.uri().get_abs_path()))
.map_err(|e| {
error!("Can't parse request {:?}", e);
e
})
.ok()?;
for (k, v) in url.query_pairs() {
    if k == key {
        trace!("Got query part {:?}", (k, &v));
        return Some(v.into_owned());
    }
}
None
}
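// Usage sketch: for a request URI like `/api/v1/metrics?id=foo`,
// `extract_query_part(req, "id")` returns `Some("foo".to_string())`.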
const EVENT_UNIX_SOCKET: u64 = 1;
const EVENT_HTTP_DIE: u64 = 2;
/// Start an HTTP server that parses HTTP requests and forwards concrete requests to
/// the nydus API server, either to operate nydus or to fetch its working status.
/// The HTTP server sends requests over the `to_api` channel and waits for responses
/// on the `from_api` channel; `api_notifier` notifies an execution context to fetch
/// a pending request and handle it.
/// We can't forward signals to a native rust thread, so we rely on `exit_evtfd` to
/// notify the server to exit: the unix domain socket fd receiving HTTP requests and
/// the exit event fd are both registered with a global epoll fd, and writing the
/// event fd later wakes the server thread so it can exit.
pub fn start_http_thread(
path: &str,
api_notifier: EventFd,
to_api: Sender<ApiRequest>,
from_api: Receiver<ApiResponse>,
exit_evtfd: EventFd,
) -> Result<thread::JoinHandle<Result<()>>> {
// Try to remove an existing unix domain socket
std::fs::remove_file(path).unwrap_or_default();
let socket_path = PathBuf::from(path);
let thread = thread::Builder::new()
.name("http-server".to_string())
.spawn(move || {
let epoll_fd = epoll::create(true).unwrap();
let mut server = HttpServer::new(socket_path).unwrap();
// Must start the server successfully or just die by panic
server.start_server().unwrap();
epoll::ctl(
epoll_fd,
epoll::ControlOptions::EPOLL_CTL_ADD,
server.epoll().as_raw_fd(),
epoll::Event::new(epoll::Events::EPOLLIN, EVENT_UNIX_SOCKET),
)?;
epoll::ctl(
epoll_fd,
epoll::ControlOptions::EPOLL_CTL_ADD,
exit_evtfd.as_raw_fd(),
epoll::Event::new(epoll::Events::EPOLLIN, EVENT_HTTP_DIE),
)?;
let mut events = vec![epoll::Event::new(epoll::Events::empty(), 0); 100];
info!("http server started");
'wait: loop {
let num = epoll::wait(epoll_fd, -1, events.as_mut_slice()).map_err(|e| {
error!("Wait event error. {:?}", e);
e
})?;
for event in &events[..num] {
match event.data {
EVENT_UNIX_SOCKET => match server.requests() {
Ok(request_vec) => {
for server_request in request_vec {
// Ignore error when sending response
server
.respond(server_request.process(|request| {
handle_http_request(
request,
&api_notifier,
&to_api,
&from_api,
)
}))
.unwrap_or_else(|e| {
error!("HTTP server error on response: {}", e)
});
}
}
Err(e) => {
error!(
"HTTP server error on retrieving incoming request. Error: {}",
e
);
}
},
EVENT_HTTP_DIE => break 'wait Ok(()),
_ => error!("Invalid event"),
}
}
}
})?;
Ok(thread)
}
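// Illustrative wiring sketch (not part of this file): how a caller might
// connect the HTTP thread to a backend loop. The socket path and the helper
// name `spawn_api_stack` are assumptions for illustration.
#[allow(dead_code)]
fn spawn_api_stack() -> Result<thread::JoinHandle<Result<()>>> {
    // `api_evt` wakes the backend when a request is queued; a clone of
    // `exit_evt` would normally be kept by the caller and written to on
    // shutdown to stop the server thread.
    let api_evt = EventFd::new(0)?;
    let exit_evt = EventFd::new(0)?;
    let (to_api, _backend_rx) = std::sync::mpsc::channel();
    let (_backend_tx, from_api) = std::sync::mpsc::channel();
    start_http_thread("/tmp/nydusd-api.sock", api_evt, to_api, from_api, exit_evt)
}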

api/src/http_endpoint.rs (new file, 486 lines)

@@ -0,0 +1,486 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright © 2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//! We keep the HTTP and endpoint handlers in a separate crate because nydusd is not
//! the only consumer of the API server. Other components, such as the `rafs` crate,
//! also rely on it to export runtime metrics, so it is easier to wrap the errors of
//! different crates into a common type here.
use std::fmt::Debug;
use std::io;
use std::sync::mpsc::{RecvError, SendError};
use micro_http::{Body, Method, Request, Response, StatusCode, Version};
use serde::Deserialize;
use serde_json::Error as SerdeError;
use crate::http::{extract_query_part, EndpointHandler};
use nydus_utils::metrics::IoStatsError;
#[derive(Debug)]
pub enum DaemonErrorKind {
NotReady,
UpgradeManager,
Unsupported,
Connect(io::Error),
SendFd,
RecvFd,
Disconnect(io::Error),
Channel,
Serde(SerdeError),
UnexpectedEvent(String),
Other(String),
}
#[derive(Debug)]
pub enum MetricsErrorKind {
// Errors about daemon working status
Daemon(DaemonErrorKind),
// Errors about metrics/stats recorders
Stats(IoStatsError),
}
/// API errors are sent back from the Nydusd API server through the ApiResponse.
#[derive(Debug)]
pub enum ApiError {
/// Cannot write to EventFd.
EventFdWrite(io::Error),
/// Cannot mount a resource
MountFailure(DaemonErrorKind),
/// API request send error
RequestSend(SendError<ApiRequest>),
/// Wrong response payload type
ResponsePayloadType,
/// API response receive error
ResponseRecv(RecvError),
DaemonAbnormal(DaemonErrorKind),
Events(String),
Metrics(MetricsErrorKind),
}
pub type ApiResult<T> = std::result::Result<T, ApiError>;
#[derive(Serialize)]
pub enum ApiResponsePayload {
/// No data is sent on the channel.
Empty,
/// Nydus daemon general working information.
DaemonInfo(String),
Events(String),
FsBackendInfo(String),
/// Nydus filesystem global metrics
FsGlobalMetrics(String),
/// Nydus filesystem per-file metrics
FsFilesMetrics(String),
FsFilesPatterns(String),
BackendMetrics(String),
BlobcacheMetrics(String),
InflightMetrics(String),
}
/// This is the response sent by the API server through the mpsc channel.
pub type ApiResponse = std::result::Result<ApiResponsePayload, ApiError>;
pub type HttpResult = std::result::Result<Response, HttpError>;
#[derive(Debug)]
pub enum ApiRequest {
DaemonInfo,
Events,
Mount(String, ApiMountCmd),
Remount(String, ApiMountCmd),
Umount(String),
ConfigureDaemon(DaemonConf),
ExportGlobalMetrics(Option<String>),
ExportFilesMetrics(Option<String>, bool),
ExportAccessPatterns(Option<String>),
ExportBackendMetrics(Option<String>),
ExportBlobcacheMetrics(Option<String>),
ExportInflightMetrics,
ExportFsBackendInfo(String),
SendFuseFd,
Takeover,
Exit,
}
#[derive(Clone, Deserialize, Debug)]
pub struct ApiMountCmd {
pub source: String,
#[serde(default)]
pub fs_type: String,
pub config: String,
#[serde(default)]
pub prefetch_files: Option<Vec<String>>,
}
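// Request-body sketch for `POST /api/v1/mount?mountpoint=/foo` (the values
// here are illustrative):
//   {"source": "/path/to/bootstrap", "fs_type": "rafs",
//    "config": "<inline backend config>", "prefetch_files": ["/bin/bash"]}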
#[derive(Clone, Deserialize, Debug)]
pub struct ApiUmountCmd {
pub mountpoint: String,
}
fn parse_body<'a, F: Deserialize<'a>>(b: &'a Body) -> Result<F, HttpError> {
serde_json::from_slice::<F>(b.raw()).map_err(HttpError::ParseBody)
}
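// Usage sketch: handlers deserialize typed commands from the request body,
// e.g. `let cmd: ApiMountCmd = parse_body(body)?;` as in `MountHandler` below.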
#[derive(Clone, Deserialize, Debug)]
pub struct DaemonConf {
pub log_level: String,
}
/// Errors associated with Nydus management
#[derive(Debug)]
pub enum HttpError {
NoRoute,
BadRequest,
QueryString(String),
/// API request receive error
SerdeJsonDeserialize(SerdeError),
SerdeJsonSerialize(SerdeError),
ParseBody(SerdeError),
/// Could not query daemon info
Info(ApiError),
Events(ApiError),
/// Could not mount resource
Mount(ApiError),
GlobalMetrics(ApiError),
FsFilesMetrics(ApiError),
Pattern(ApiError),
Configure(ApiError),
Upgrade(ApiError),
BlobcacheMetrics(ApiError),
BackendMetrics(ApiError),
FsBackendInfo(ApiError),
InflightMetrics(ApiError),
}
fn success_response(body: Option<String>) -> Response {
let status_code = if body.is_some() {
StatusCode::OK
} else {
StatusCode::NoContent
};
let mut r = Response::new(Version::Http11, status_code);
if let Some(b) = body {
r.set_body(Body::new(b));
}
r
}
#[derive(Serialize, Debug)]
struct ErrorMessage {
code: String,
message: String,
}
impl From<ErrorMessage> for Vec<u8> {
fn from(msg: ErrorMessage) -> Self {
// Safe to unwrap since `ErrorMessage` must succeed in serialization
serde_json::to_vec(&msg).unwrap()
}
}
pub fn error_response(error: HttpError, status: StatusCode) -> Response {
let mut response = Response::new(Version::Http11, status);
let err_msg = ErrorMessage {
code: "UNDEFINED".to_string(),
message: format!("{:?}", error),
};
response.set_body(Body::new(err_msg));
response
}
fn translate_status_code(e: &ApiError) -> StatusCode {
match e {
ApiError::DaemonAbnormal(kind) | ApiError::MountFailure(kind) => match kind {
DaemonErrorKind::NotReady => StatusCode::ServiceUnavailable,
DaemonErrorKind::Unsupported => StatusCode::NotImplemented,
DaemonErrorKind::UnexpectedEvent(_) => StatusCode::BadRequest,
_ => StatusCode::InternalServerError,
},
ApiError::Metrics(MetricsErrorKind::Stats(IoStatsError::NoCounter)) => StatusCode::NotFound,
_ => StatusCode::InternalServerError,
}
}
// The API server has successfully processed the request but cannot fulfill it, so an
// `error_response` is generated with a 4XX or 5XX status code. Even so, it is returned
// as Ok(error_response) to the HTTP request handling framework: the nydusd API server
// received the request and tried to handle it, even though it could not be fulfilled.
fn convert_to_response<O: FnOnce(ApiError) -> HttpError>(api_resp: ApiResponse, op: O) -> Response {
match api_resp {
Ok(r) => {
use ApiResponsePayload::*;
match r {
Empty => success_response(None),
DaemonInfo(d) => success_response(Some(d)),
Events(d) => success_response(Some(d)),
FsFilesMetrics(d) => success_response(Some(d)),
FsGlobalMetrics(d) => success_response(Some(d)),
FsFilesPatterns(d) => success_response(Some(d)),
BackendMetrics(d) => success_response(Some(d)),
BlobcacheMetrics(d) => success_response(Some(d)),
FsBackendInfo(d) => success_response(Some(d)),
InflightMetrics(d) => success_response(Some(d)),
}
}
Err(e) => {
let sc = translate_status_code(&e);
error_response(op(e), sc)
}
}
}
pub struct InfoHandler {}
impl EndpointHandler for InfoHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let r = kicker(ApiRequest::DaemonInfo);
Ok(convert_to_response(r, HttpError::Info))
}
(Method::Put, Some(body)) => {
let conf = parse_body(body)?;
let r = kicker(ApiRequest::ConfigureDaemon(conf));
Ok(convert_to_response(r, HttpError::Configure))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct EventsHandler {}
impl EndpointHandler for EventsHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let r = kicker(ApiRequest::Events);
Ok(convert_to_response(r, HttpError::Events))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct MountHandler {}
impl EndpointHandler for MountHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
let mountpoint = extract_query_part(req, "mountpoint").ok_or_else(|| {
HttpError::QueryString("'mountpoint' should be specified in query string".to_string())
})?;
match (req.method(), req.body.as_ref()) {
(Method::Post, Some(body)) => {
let cmd = parse_body(body)?;
let r = kicker(ApiRequest::Mount(mountpoint, cmd));
Ok(convert_to_response(r, HttpError::Mount))
}
(Method::Put, Some(body)) => {
let cmd = parse_body(body)?;
let r = kicker(ApiRequest::Remount(mountpoint, cmd));
Ok(convert_to_response(r, HttpError::Mount))
}
(Method::Delete, None) => {
let r = kicker(ApiRequest::Umount(mountpoint));
Ok(convert_to_response(r, HttpError::Mount))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct MetricsHandler {}
impl EndpointHandler for MetricsHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportGlobalMetrics(id));
Ok(convert_to_response(r, HttpError::GlobalMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct MetricsFilesHandler {}
impl EndpointHandler for MetricsFilesHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let latest_read_files = extract_query_part(req, "latest")
.map_or(false, |b| b.parse::<bool>().unwrap_or(false));
let r = kicker(ApiRequest::ExportFilesMetrics(id, latest_read_files));
Ok(convert_to_response(r, HttpError::FsFilesMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct MetricsPatternHandler {}
impl EndpointHandler for MetricsPatternHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportAccessPatterns(id));
Ok(convert_to_response(r, HttpError::Pattern))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct MetricsBackendHandler {}
impl EndpointHandler for MetricsBackendHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportBackendMetrics(id));
Ok(convert_to_response(r, HttpError::BackendMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct MetricsBlobcacheHandler {}
impl EndpointHandler for MetricsBlobcacheHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportBlobcacheMetrics(id));
Ok(convert_to_response(r, HttpError::BlobcacheMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct MetricsInflightHandler {}
impl EndpointHandler for MetricsInflightHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let r = kicker(ApiRequest::ExportInflightMetrics);
Ok(convert_to_response(r, HttpError::InflightMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct SendFuseFdHandler {}
impl EndpointHandler for SendFuseFdHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Put, None) => {
let r = kicker(ApiRequest::SendFuseFd);
Ok(convert_to_response(r, HttpError::Upgrade))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct TakeoverHandler {}
impl EndpointHandler for TakeoverHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Put, None) => {
let r = kicker(ApiRequest::Takeover);
Ok(convert_to_response(r, HttpError::Upgrade))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct ExitHandler {}
impl EndpointHandler for ExitHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Put, None) => {
let r = kicker(ApiRequest::Exit);
Ok(convert_to_response(r, HttpError::Upgrade))
}
_ => Err(HttpError::BadRequest),
}
}
}
pub struct FsBackendInfo {}
impl EndpointHandler for FsBackendInfo {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let mountpoint = extract_query_part(req, "mountpoint").ok_or_else(|| {
HttpError::QueryString(
"'mountpoint' should be specified in query string".to_string(),
)
})?;
let r = kicker(ApiRequest::ExportFsBackendInfo(mountpoint));
Ok(convert_to_response(r, HttpError::FsBackendInfo))
}
_ => Err(HttpError::BadRequest),
}
}
}


@@ -1,197 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright © 2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
use dbs_uhttp::{Method, Request, Response};
use crate::http::{ApiError, ApiRequest, ApiResponse, ApiResponsePayload, HttpError};
use crate::http_handler::{
error_response, extract_query_part, parse_body, success_response, translate_status_code,
EndpointHandler, HttpResult,
};
// Convert an ApiResponse to a HTTP response.
//
// The API server has successfully processed the request but cannot fulfill it, so an
// `error_response` is generated with a 4XX or 5XX status code. Even so, it is returned
// as Ok(error_response) to the HTTP request handling framework: the nydusd API server
// received the request and tried to handle it, even though it could not be fulfilled.
fn convert_to_response<O: FnOnce(ApiError) -> HttpError>(api_resp: ApiResponse, op: O) -> Response {
match api_resp {
Ok(r) => {
use ApiResponsePayload::*;
match r {
Empty => success_response(None),
Events(d) => success_response(Some(d)),
BackendMetrics(d) => success_response(Some(d)),
BlobcacheMetrics(d) => success_response(Some(d)),
_ => panic!("Unexpected response message from API service"),
}
}
Err(e) => {
let status_code = translate_status_code(&e);
error_response(op(e), status_code)
}
}
}
// Global daemon control requests.
/// Start the daemon.
pub struct StartHandler {}
impl EndpointHandler for StartHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Put, None) => {
let r = kicker(ApiRequest::Start);
Ok(convert_to_response(r, HttpError::Configure))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Stop the daemon.
pub struct ExitHandler {}
impl EndpointHandler for ExitHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Put, None) => {
let r = kicker(ApiRequest::Exit);
Ok(convert_to_response(r, HttpError::Configure))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Get daemon global events.
pub struct EventsHandler {}
impl EndpointHandler for EventsHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let r = kicker(ApiRequest::GetEvents);
Ok(convert_to_response(r, HttpError::Events))
}
_ => Err(HttpError::BadRequest),
}
}
}
// Metrics related requests.
/// Get storage backend metrics.
pub struct MetricsBackendHandler {}
impl EndpointHandler for MetricsBackendHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportBackendMetrics(id));
Ok(convert_to_response(r, HttpError::BackendMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Get blob cache metrics.
pub struct MetricsBlobcacheHandler {}
impl EndpointHandler for MetricsBlobcacheHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportBlobcacheMetrics(id));
Ok(convert_to_response(r, HttpError::BlobcacheMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Mount a filesystem.
pub struct MountHandler {}
impl EndpointHandler for MountHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
let mountpoint = extract_query_part(req, "mountpoint").ok_or_else(|| {
HttpError::QueryString("'mountpoint' should be specified in query string".to_string())
})?;
match (req.method(), req.body.as_ref()) {
(Method::Post, Some(body)) => {
let cmd = parse_body(body)?;
let r = kicker(ApiRequest::Mount(mountpoint, cmd));
Ok(convert_to_response(r, HttpError::Mount))
}
(Method::Put, Some(body)) => {
let cmd = parse_body(body)?;
let r = kicker(ApiRequest::Remount(mountpoint, cmd));
Ok(convert_to_response(r, HttpError::Mount))
}
(Method::Delete, None) => {
let r = kicker(ApiRequest::Umount(mountpoint));
Ok(convert_to_response(r, HttpError::Mount))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Send fuse fd to new daemon.
pub struct SendFuseFdHandler {}
impl EndpointHandler for SendFuseFdHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Put, None) => {
let r = kicker(ApiRequest::SendFuseFd);
Ok(convert_to_response(r, HttpError::Upgrade))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Take over fuse fd from old daemon instance.
pub struct TakeoverFuseFdHandler {}
impl EndpointHandler for TakeoverFuseFdHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Put, None) => {
let r = kicker(ApiRequest::TakeoverFuseFd);
Ok(convert_to_response(r, HttpError::Upgrade))
}
_ => Err(HttpError::BadRequest),
}
}
}


@@ -1,168 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright © 2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//! Nydus API v1.
use dbs_uhttp::{Method, Request, Response};
use crate::http::{ApiError, ApiRequest, ApiResponse, ApiResponsePayload, HttpError};
use crate::http_handler::{
error_response, extract_query_part, parse_body, success_response, translate_status_code,
EndpointHandler, HttpResult,
};
/// HTTP URI prefix for API v1.
pub const HTTP_ROOT_V1: &str = "/api/v1";
// Convert an ApiResponse to a HTTP response.
//
// The API server has successfully processed the request but cannot fulfill it, so an
// `error_response` is generated with a 4XX or 5XX status code. Even so, it is returned
// as Ok(error_response) to the HTTP request handling framework: the nydusd API server
// received the request and tried to handle it, even though it could not be fulfilled.
fn convert_to_response<O: FnOnce(ApiError) -> HttpError>(api_resp: ApiResponse, op: O) -> Response {
match api_resp {
Ok(r) => {
use ApiResponsePayload::*;
match r {
Empty => success_response(None),
DaemonInfo(d) => success_response(Some(d)),
FsGlobalMetrics(d) => success_response(Some(d)),
FsFilesMetrics(d) => success_response(Some(d)),
FsFilesPatterns(d) => success_response(Some(d)),
FsBackendInfo(d) => success_response(Some(d)),
FsInflightMetrics(d) => success_response(Some(d)),
_ => panic!("Unexpected response message from API service"),
}
}
Err(e) => {
let status_code = translate_status_code(&e);
error_response(op(e), status_code)
}
}
}
/// Get daemon information and set daemon configuration.
pub struct InfoHandler {}
impl EndpointHandler for InfoHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let r = kicker(ApiRequest::GetDaemonInfo);
Ok(convert_to_response(r, HttpError::DaemonInfo))
}
(Method::Put, Some(body)) => {
let conf = parse_body(body)?;
let r = kicker(ApiRequest::ConfigureDaemon(conf));
Ok(convert_to_response(r, HttpError::Configure))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Get filesystem backend information.
pub struct FsBackendInfo {}
impl EndpointHandler for FsBackendInfo {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let mountpoint = extract_query_part(req, "mountpoint").ok_or_else(|| {
HttpError::QueryString(
"'mountpoint' should be specified in query string".to_string(),
)
})?;
let r = kicker(ApiRequest::ExportFsBackendInfo(mountpoint));
Ok(convert_to_response(r, HttpError::FsBackendInfo))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Get filesystem global metrics.
pub struct MetricsFsGlobalHandler {}
impl EndpointHandler for MetricsFsGlobalHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportFsGlobalMetrics(id));
Ok(convert_to_response(r, HttpError::GlobalMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Get filesystem access pattern log.
pub struct MetricsFsAccessPatternHandler {}
impl EndpointHandler for MetricsFsAccessPatternHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let r = kicker(ApiRequest::ExportFsAccessPatterns(id));
Ok(convert_to_response(r, HttpError::Pattern))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Get filesystem file metrics.
pub struct MetricsFsFilesHandler {}
impl EndpointHandler for MetricsFsFilesHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let latest_read_files = extract_query_part(req, "latest")
.is_some_and(|b| b.parse::<bool>().unwrap_or(false));
let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
Ok(convert_to_response(r, HttpError::FsFilesMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// Get information about filesystem inflight requests.
pub struct MetricsFsInflightHandler {}
impl EndpointHandler for MetricsFsInflightHandler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let r = kicker(ApiRequest::ExportFsInflightMetrics);
Ok(convert_to_response(r, HttpError::InflightMetrics))
}
_ => Err(HttpError::BadRequest),
}
}
}


@@ -1,112 +0,0 @@
// Copyright 2022 Alibaba Cloud. All rights reserved.
// Copyright 2020 Ant Group. All rights reserved.
// Copyright © 2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//! Nydus API v2.
use crate::BlobCacheEntry;
use dbs_uhttp::{Method, Request, Response};
use crate::http::{
ApiError, ApiRequest, ApiResponse, ApiResponsePayload, BlobCacheObjectId, HttpError,
};
use crate::http_handler::{
error_response, extract_query_part, parse_body, success_response, translate_status_code,
EndpointHandler, HttpResult,
};
/// HTTP URI prefix for API v2.
pub const HTTP_ROOT_V2: &str = "/api/v2";
// Convert an ApiResponse to a HTTP response.
//
// The API server has successfully processed the request but cannot fulfill it, so an
// `error_response` is generated with a 4XX or 5XX status code. Even so, it is returned
// as Ok(error_response) to the HTTP request handling framework: the nydusd API server
// received the request and tried to handle it, even though it could not be fulfilled.
fn convert_to_response<O: FnOnce(ApiError) -> HttpError>(api_resp: ApiResponse, op: O) -> Response {
match api_resp {
Ok(r) => {
use ApiResponsePayload::*;
match r {
Empty => success_response(None),
DaemonInfo(d) => success_response(Some(d)),
BlobObjectList(d) => success_response(Some(d)),
_ => panic!("Unexpected response message from API service"),
}
}
Err(e) => {
let status_code = translate_status_code(&e);
error_response(op(e), status_code)
}
}
}
/// Get daemon information and set daemon configuration.
pub struct InfoV2Handler {}
impl EndpointHandler for InfoV2Handler {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
let r = kicker(ApiRequest::GetDaemonInfoV2);
Ok(convert_to_response(r, HttpError::DaemonInfo))
}
(Method::Put, Some(body)) => {
let conf = parse_body(body)?;
let r = kicker(ApiRequest::ConfigureDaemon(conf));
Ok(convert_to_response(r, HttpError::Configure))
}
_ => Err(HttpError::BadRequest),
}
}
}
/// List blob objects managed by the blob cache manager.
pub struct BlobObjectListHandlerV2 {}
impl EndpointHandler for BlobObjectListHandlerV2 {
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult {
match (req.method(), req.body.as_ref()) {
(Method::Get, None) => {
if let Some(domain_id) = extract_query_part(req, "domain_id") {
let blob_id = extract_query_part(req, "blob_id").unwrap_or_default();
let param = BlobCacheObjectId { domain_id, blob_id };
let r = kicker(ApiRequest::GetBlobObject(param));
return Ok(convert_to_response(r, HttpError::GetBlobObjects));
}
Err(HttpError::BadRequest)
}
(Method::Put, Some(body)) => {
let mut conf: Box<BlobCacheEntry> = parse_body(body)?;
if !conf.prepare_configuration_info() {
return Err(HttpError::BadRequest);
}
let r = kicker(ApiRequest::CreateBlobObject(conf));
Ok(convert_to_response(r, HttpError::CreateBlobObject))
}
(Method::Delete, None) => {
if let Some(domain_id) = extract_query_part(req, "domain_id") {
let blob_id = extract_query_part(req, "blob_id").unwrap_or_default();
let param = BlobCacheObjectId { domain_id, blob_id };
let r = kicker(ApiRequest::DeleteBlobObject(param));
return Ok(convert_to_response(r, HttpError::DeleteBlobObject));
}
if let Some(blob_id) = extract_query_part(req, "blob_id") {
let r = kicker(ApiRequest::DeleteBlobFile(blob_id));
return Ok(convert_to_response(r, HttpError::DeleteBlobFile));
}
Err(HttpError::BadRequest)
}
_ => Err(HttpError::BadRequest),
}
}
}
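For reference, here is a client-side sketch (not part of the source tree) of removing a cached blob through this v2 endpoint. As with the earlier example, the socket path is an assumption, and the domain_id/blob_id values are placeholders.

use std::io::{Read, Write};
use std::os::unix::net::UnixStream;

fn delete_blob() -> std::io::Result<String> {
    let mut stream = UnixStream::connect("/tmp/nydusd-api.sock")?;
    // Matches the `(Method::Delete, None)` arm above: a `domain_id` query part
    // selects DeleteBlobObject, while a bare `blob_id` selects DeleteBlobFile.
    stream.write_all(
        b"DELETE /api/v2/blobs?domain_id=shared&blob_id=blob1 HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n",
    )?;
    let mut reply = String::new();
    stream.read_to_string(&mut reply)?;
    Ok(reply)
}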


@@ -1,404 +0,0 @@
use std::collections::HashMap;
use std::io::{Error, ErrorKind, Result};
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
use std::sync::mpsc::{Receiver, Sender};
use std::sync::Arc;
use std::time::SystemTime;
use std::{fs, thread};
use dbs_uhttp::{Body, HttpServer, MediaType, Request, Response, ServerError, StatusCode, Version};
use http::uri::Uri;
use mio::unix::SourceFd;
use mio::{Events, Interest, Poll, Token, Waker};
use serde::Deserialize;
use url::Url;
use crate::http::{
ApiError, ApiRequest, ApiResponse, DaemonErrorKind, ErrorMessage, HttpError, MetricsError,
MetricsErrorKind,
};
use crate::http_endpoint_common::{
EventsHandler, ExitHandler, MetricsBackendHandler, MetricsBlobcacheHandler, MountHandler,
SendFuseFdHandler, StartHandler, TakeoverFuseFdHandler,
};
use crate::http_endpoint_v1::{
FsBackendInfo, InfoHandler, MetricsFsAccessPatternHandler, MetricsFsFilesHandler,
MetricsFsGlobalHandler, MetricsFsInflightHandler, HTTP_ROOT_V1,
};
use crate::http_endpoint_v2::{BlobObjectListHandlerV2, InfoV2Handler, HTTP_ROOT_V2};
const EXIT_TOKEN: Token = Token(usize::MAX);
const REQUEST_TOKEN: Token = Token(1);
/// Specialized version of [`std::result::Result`] for value returned by [`EndpointHandler`].
pub type HttpResult = std::result::Result<Response, HttpError>;
/// Get query parameter with `key` from the HTTP request.
pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// Splicing req.uri with an "http:" prefix might look weird, but we depend on the
// `Url` crate to generate the query_pairs map, and it works on Url rather than Uri.
// It would be better to add query-part support to Micro-http in the future, but
// for now this is an easy way to obtain query parts from the uri.
let http_prefix = format!("http:{}", req.uri().get_abs_path());
let url = Url::parse(&http_prefix)
.inspect_err(|e| {
error!("api: can't parse request {:?}", e);
})
.ok()?;
for (k, v) in url.query_pairs() {
if k == key {
trace!("api: got query param {}={}", k, v);
return Some(v.into_owned());
}
}
None
}
/// Parse HTTP request body.
pub(crate) fn parse_body<'a, F: Deserialize<'a>>(b: &'a Body) -> std::result::Result<F, HttpError> {
serde_json::from_slice::<F>(b.raw()).map_err(HttpError::ParseBody)
}
/// Translate ApiError message to HTTP status code.
pub(crate) fn translate_status_code(e: &ApiError) -> StatusCode {
match e {
ApiError::DaemonAbnormal(kind) | ApiError::MountFilesystem(kind) => match kind {
DaemonErrorKind::NotReady => StatusCode::ServiceUnavailable,
DaemonErrorKind::Unsupported => StatusCode::NotImplemented,
DaemonErrorKind::UnexpectedEvent(_) => StatusCode::BadRequest,
_ => StatusCode::InternalServerError,
},
ApiError::Metrics(MetricsErrorKind::Stats(MetricsError::NoCounter)) => StatusCode::NotFound,
_ => StatusCode::InternalServerError,
}
}
/// Generate a successful HTTP response message.
pub(crate) fn success_response(body: Option<String>) -> Response {
if let Some(body) = body {
let mut r = Response::new(Version::Http11, StatusCode::OK);
r.set_body(Body::new(body));
r
} else {
Response::new(Version::Http11, StatusCode::NoContent)
}
}
/// Generate an HTTP error response message with status code and error message.
pub(crate) fn error_response(error: HttpError, status: StatusCode) -> Response {
let mut response = Response::new(Version::Http11, status);
let err_msg = ErrorMessage {
code: "UNDEFINED".to_string(),
message: format!("{:?}", error),
};
response.set_body(Body::new(err_msg));
response
}
/// Trait for HTTP endpoints to handle HTTP requests.
pub trait EndpointHandler: Sync + Send {
/// Handles an HTTP request.
///
/// The main responsibilities of the handler include:
/// - parse and validate incoming request message
/// - send the request to subscriber
/// - wait response from the subscriber
/// - generate HTTP result
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult;
}
/// Struct to route HTTP requests to corresponding registered endpoint handlers.
pub struct HttpRoutes {
/// routes is a hash table mapping endpoint URIs to their endpoint handlers.
pub routes: HashMap<String, Box<dyn EndpointHandler + Sync + Send>>,
}
macro_rules! endpoint_v1 {
($path:expr) => {
format!("{}{}", HTTP_ROOT_V1, $path)
};
}
macro_rules! endpoint_v2 {
($path:expr) => {
format!("{}{}", HTTP_ROOT_V2, $path)
};
}
lazy_static! {
/// HTTP_ROUTES contain all the nydusd HTTP routes.
pub static ref HTTP_ROUTES: HttpRoutes = {
let mut r = HttpRoutes {
routes: HashMap::new(),
};
// Common
r.routes.insert(endpoint_v1!("/daemon/events"), Box::new(EventsHandler{}));
r.routes.insert(endpoint_v1!("/daemon/exit"), Box::new(ExitHandler{}));
r.routes.insert(endpoint_v1!("/daemon/start"), Box::new(StartHandler{}));
r.routes.insert(endpoint_v1!("/daemon/fuse/sendfd"), Box::new(SendFuseFdHandler{}));
r.routes.insert(endpoint_v1!("/daemon/fuse/takeover"), Box::new(TakeoverFuseFdHandler{}));
r.routes.insert(endpoint_v1!("/mount"), Box::new(MountHandler{}));
r.routes.insert(endpoint_v1!("/metrics/backend"), Box::new(MetricsBackendHandler{}));
r.routes.insert(endpoint_v1!("/metrics/blobcache"), Box::new(MetricsBlobcacheHandler{}));
// Nydus API, v1
r.routes.insert(endpoint_v1!("/daemon"), Box::new(InfoHandler{}));
r.routes.insert(endpoint_v1!("/daemon/backend"), Box::new(FsBackendInfo{}));
r.routes.insert(endpoint_v1!("/metrics"), Box::new(MetricsFsGlobalHandler{}));
r.routes.insert(endpoint_v1!("/metrics/files"), Box::new(MetricsFsFilesHandler{}));
r.routes.insert(endpoint_v1!("/metrics/inflight"), Box::new(MetricsFsInflightHandler{}));
r.routes.insert(endpoint_v1!("/metrics/pattern"), Box::new(MetricsFsAccessPatternHandler{}));
// Nydus API, v2
r.routes.insert(endpoint_v2!("/daemon"), Box::new(InfoV2Handler{}));
r.routes.insert(endpoint_v2!("/blobs"), Box::new(BlobObjectListHandlerV2{}));
r
};
}
fn kick_api_server(
to_api: &Sender<Option<ApiRequest>>,
from_api: &Receiver<ApiResponse>,
request: ApiRequest,
) -> ApiResponse {
to_api.send(Some(request)).map_err(ApiError::RequestSend)?;
from_api.recv().map_err(ApiError::ResponseRecv)?
}
// Example:
// <-- GET /
// --> GET / 200 835ms 746b
fn trace_api_begin(request: &dbs_uhttp::Request) {
debug!("<--- {:?} {:?}", request.method(), request.uri());
}
fn trace_api_end(response: &dbs_uhttp::Response, method: dbs_uhttp::Method, recv_time: SystemTime) {
let elapse = SystemTime::now().duration_since(recv_time);
debug!(
"---> {:?} Status Code: {:?}, Elapse: {:?}, Body Size: {:?}",
method,
response.status(),
elapse,
response.content_length()
);
}
fn exit_api_server(to_api: &Sender<Option<ApiRequest>>) {
if to_api.send(None).is_err() {
error!("failed to send stop request api server");
}
}
fn handle_http_request(
request: &Request,
to_api: &Sender<Option<ApiRequest>>,
from_api: &Receiver<ApiResponse>,
) -> Response {
let begin_time = SystemTime::now();
trace_api_begin(request);
// Micro-http should ensure that the request path is legal.
let uri_parsed = request.uri().get_abs_path().parse::<Uri>();
let mut response = match uri_parsed {
Ok(uri) => match HTTP_ROUTES.routes.get(uri.path()) {
Some(route) => route
.handle_request(request, &|r| kick_api_server(to_api, from_api, r))
.unwrap_or_else(|err| error_response(err, StatusCode::BadRequest)),
None => error_response(HttpError::NoRoute, StatusCode::NotFound),
},
Err(e) => {
error!("Failed parse URI, {}", e);
error_response(HttpError::BadRequest, StatusCode::BadRequest)
}
};
response.set_server("Nydus API");
response.set_content_type(MediaType::ApplicationJson);
trace_api_end(&response, request.method(), begin_time);
response
}
/// Start a HTTP server to serve API requests.
///
/// Start an HTTP server that parses HTTP requests and forwards concrete requests to
/// the nydus API server, either to operate nydus or to fetch its working status.
/// The HTTP server sends requests over the `to_api` channel and waits for responses
/// on the `from_api` channel.
pub fn start_http_thread(
path: &str,
to_api: Sender<Option<ApiRequest>>,
from_api: Receiver<ApiResponse>,
) -> Result<(thread::JoinHandle<Result<()>>, Arc<Waker>)> {
// Try to remove an existing unix domain socket
let _ = fs::remove_file(path);
let socket_path = PathBuf::from(path);
let mut poll = Poll::new()?;
let waker = Arc::new(Waker::new(poll.registry(), EXIT_TOKEN)?);
let waker2 = waker.clone();
let mut server = HttpServer::new(socket_path).map_err(|e| {
if let ServerError::IOError(e) = e {
e
} else {
Error::new(ErrorKind::Other, format!("{:?}", e))
}
})?;
poll.registry().register(
&mut SourceFd(&server.epoll().as_raw_fd()),
REQUEST_TOKEN,
Interest::READABLE,
)?;
let thread = thread::Builder::new()
.name("nydus-http-server".to_string())
.spawn(move || {
// Must start the server successfully or just die by panic
server.start_server().unwrap();
info!("http server started");
let mut events = Events::with_capacity(100);
let mut do_exit = false;
loop {
match poll.poll(&mut events, None) {
Err(e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
Err(e) => {
error!("http server poll events failed, {}", e);
exit_api_server(&to_api);
return Err(e);
}
Ok(_) => {}
}
for event in &events {
match event.token() {
EXIT_TOKEN => do_exit = true,
REQUEST_TOKEN => match server.requests() {
Ok(request_vec) => {
for server_request in request_vec {
let reply = server_request.process(|request| {
handle_http_request(request, &to_api, &from_api)
});
// Ignore error when sending response
server.respond(reply).unwrap_or_else(|e| {
error!("HTTP server error on response: {}", e)
});
}
}
Err(e) => {
error!("HTTP server error on retrieving incoming request: {}", e);
}
},
_ => unreachable!("unknown poll token."),
}
}
if do_exit {
exit_api_server(&to_api);
break;
}
}
info!("http-server thread exits");
// Keep the Waker alive to match the lifetime of the poll loop above
drop(waker2);
Ok(())
})?;
Ok((thread, waker))
}
#[cfg(test)]
mod tests {
use super::*;
use std::sync::mpsc::channel;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_http_api_routes_v1() {
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
assert!(HTTP_ROUTES
.routes
.contains_key("/api/v1/daemon/fuse/sendfd"));
assert!(HTTP_ROUTES
.routes
.contains_key("/api/v1/daemon/fuse/takeover"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
}
#[test]
fn test_http_api_routes_v2() {
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
}
#[test]
fn test_kick_api_server() {
let (to_api, from_route) = channel();
let (to_route, from_api) = channel();
let request = ApiRequest::GetDaemonInfo;
let thread = thread::spawn(move || match kick_api_server(&to_api, &from_api, request) {
Err(reply) => matches!(reply, ApiError::ResponsePayloadType),
Ok(_) => panic!("unexpected reply message"),
});
let req2 = from_route.recv().unwrap();
matches!(req2.as_ref().unwrap(), ApiRequest::GetDaemonInfo);
let reply: ApiResponse = Err(ApiError::ResponsePayloadType);
to_route.send(reply).unwrap();
thread.join().unwrap();
let (to_api, from_route) = channel();
let (to_route, from_api) = channel();
drop(to_route);
let request = ApiRequest::GetDaemonInfo;
assert!(kick_api_server(&to_api, &from_api, request).is_err());
drop(from_route);
let request = ApiRequest::GetDaemonInfo;
assert!(kick_api_server(&to_api, &from_api, request).is_err());
}
#[test]
fn test_extract_query_part() {
let req = Request::try_from(
b"GET http://localhost/api/v1/daemon?arg1=test HTTP/1.0\r\n\r\n",
None,
)
.unwrap();
let arg1 = extract_query_part(&req, "arg1").unwrap();
assert_eq!(arg1, "test");
assert!(extract_query_part(&req, "arg2").is_none());
}
#[test]
fn test_start_http_thread() {
let tmpdir = TempFile::new().unwrap();
let path = tmpdir.as_path().to_str().unwrap();
let (to_api, from_route) = channel();
let (_to_route, from_api) = channel();
let (thread, waker) = start_http_thread(path, to_api, from_api).unwrap();
waker.wake().unwrap();
let msg = from_route.recv().unwrap();
assert!(msg.is_none());
let _ = thread.join().unwrap();
}
}
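A minimal usage sketch (not part of the diff; the socket path and shutdown flow are illustrative) of how `start_http_thread` pairs with an API worker thread over the two channels:
```rust
use std::sync::mpsc::channel;
use std::thread;

fn run_api_stack() -> std::io::Result<()> {
    let (to_api, from_http) = channel();
    let (to_http, from_api) = channel();
    // The HTTP thread owns the Unix domain socket and forwards requests to `to_api`.
    let (http_thread, waker) = start_http_thread("/tmp/nydus-api.sock", to_api, from_api)?;

    // The API worker answers requests until the HTTP thread sends `None` at shutdown.
    let api_thread = thread::spawn(move || {
        while let Ok(Some(_request)) = from_http.recv() {
            // A real server would dispatch on the request; this sketch always errors.
            let response: ApiResponse = Err(ApiError::ResponsePayloadType);
            to_http.send(response).unwrap();
        }
    });

    waker.wake().unwrap(); // ask the HTTP thread to exit
    http_thread.join().unwrap()?;
    api_thread.join().unwrap();
    Ok(())
}
```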

api/src/lib.rs

@@ -2,46 +2,16 @@
//
// SPDX-License-Identifier: Apache-2.0
//! APIs for the Nydus Image Service
//!
//! The `nydus-api` crate defines API and related data structures for Nydus Image Service.
//! All data structures used by the API are encoded in JSON format.
#[cfg_attr(feature = "handler", macro_use)]
extern crate log;
#[macro_use]
extern crate log;
extern crate serde;
#[cfg(feature = "handler")]
#[macro_use]
extern crate serde_derive;
extern crate micro_http;
extern crate vmm_sys_util;
#[macro_use]
extern crate lazy_static;
extern crate url;
pub mod config;
pub use config::*;
#[macro_use]
pub mod error;
pub mod http;
pub use self::http::*;
#[cfg(feature = "handler")]
pub(crate) mod http_endpoint_common;
#[cfg(feature = "handler")]
pub(crate) mod http_endpoint_v1;
#[cfg(feature = "handler")]
pub(crate) mod http_endpoint_v2;
#[cfg(feature = "handler")]
pub(crate) mod http_handler;
#[cfg(feature = "handler")]
pub use http_handler::{
extract_query_part, start_http_thread, EndpointHandler, HttpResult, HttpRoutes, HTTP_ROUTES,
};
/// Application build and version information.
#[derive(Serialize, Clone)]
pub struct BuildTimeInfo {
pub package_ver: String,
pub git_commit: String,
pub build_time: String,
pub profile: String,
pub rustc: String,
}
pub mod http_endpoint;

app/CHANGELOG.md (new file)

@@ -0,0 +1,14 @@
# Changelog
## [Unreleased]
### Added
### Fixed
### Deprecated
## [v0.1.0]
### Added
- Initial release

app/CODEOWNERS (new file)

@@ -0,0 +1 @@
* @bergwolf @imeoer @jiangliu

app/Cargo.toml (new file)

@@ -0,0 +1,22 @@
[package]
name = "nydus-app"
version = "0.1.0"
authors = ["The Nydus Developers"]
description = "Application framework and utilities for Nydus"
readme = "README.md"
repository = "https://github.com/dragonflyoss/image-service"
license = "Apache-2.0"
edition = "2018"
build = "build.rs"
[build-dependencies]
built = { version = "=0.4.3", features = ["chrono", "git2"] }
[dependencies]
flexi_logger = { version = "0.17" }
libc = "0.2"
log = "0.4.8"
nix = "0.17"
serde = { version = ">=1.0.27", features = ["serde_derive"] }
nydus-error = "0.1"

app/README.md (new file)

@@ -0,0 +1,57 @@
# nydus-app
The `nydus-app` crate is a collection of utilities that help create applications for the [`Nydus Image Service`](https://github.com/dragonflyoss/image-service) project. It provides:
- `struct BuildTimeInfo`: application build and version information.
- `fn dump_program_info()`: dump program build and version information.
- `fn setup_logging()`: set up the logging infrastructure for an application.
## Support
**Platforms**:
- x86_64
- aarch64
**Operating Systems**:
- Linux
## Usage
Add `nydus-app` as a dependency in `Cargo.toml`
```toml
[dependencies]
nydus-app = "*"
```
Then add `extern crate nydus_app;` to your crate root if needed.
## Examples
- Set up the application infrastructure.
```rust
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use clap::App;
use std::io::Result;
use nydus_app::{BuildTimeInfo, setup_logging};
fn main() -> Result<()> {
let (bti_string, build_info) = BuildTimeInfo::dump(crate_version!());
let cmd = App::new("")
.version(bti_string.as_str())
.author(crate_authors!())
.get_matches();
// Fall back to "info" when no "log-level" argument is defined.
let level = cmd.value_of("log-level").unwrap_or("info").parse().unwrap();
setup_logging(None, level)?;
print!("{}", build_info);
Ok(())
}
```
## License
This code is licensed under [Apache-2.0](LICENSE).

app/build.rs (new file)

@@ -0,0 +1,8 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
fn main() {
println!("cargo:rerun-if-changed=../git/HEAD");
built::write_built_file().expect("Failed to acquire build-time information");
}
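The `built` crate writes a `built.rs` file into `OUT_DIR` at compile time. A sketch of how those generated constants are consumed (mirroring the `built_info` module in the app/src/lib.rs diff below; the exact constant set depends on the enabled `built` features):
```rust
pub mod built_info {
    // Pull in the constants generated by `built` at compile time.
    include!(concat!(env!("OUT_DIR"), "/built.rs"));
}

fn print_build_info() {
    // GIT_COMMIT_HASH is an Option<&str>; the other constants are &'static str.
    println!(
        "commit: {}, built: {}, rustc: {}",
        built_info::GIT_COMMIT_HASH.unwrap_or("unknown"),
        built_info::BUILT_TIME_UTC,
        built_info::RUSTC_VERSION,
    );
}
```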

app/src/lib.rs

@@ -3,15 +3,53 @@
//
// SPDX-License-Identifier: Apache-2.0
//! Application framework and utilities for Nydus.
//!
//! The `nydus-app` crate provides common helpers and utilities to support Nydus applications:
//! - Application Building Information: [`struct BuildTimeInfo`](struct.BuildTimeInfo.html) and
//! [`fn dump_program_info()`](fn.dump_program_info.html).
//! - Logging helpers: [`fn setup_logging()`](fn.setup_logging.html) and
//! [`fn log_level_to_verbosity()`](fn.log_level_to_verbosity.html).
//! - Signal handling: [`fn register_signal_handler()`](signal/fn.register_signal_handler.html).
//!
//! ```rust,ignore
//! #[macro_use(crate_authors, crate_version)]
//! extern crate clap;
//!
//! use clap::App;
//! use nydus_app::{BuildTimeInfo, setup_logging};
//! # use std::io::Result;
//!
//! fn main() -> Result<()> {
//! let (bti_string, build_info) = BuildTimeInfo::dump(crate_version!());
//! let cmd = App::new("")
//! .version(bti_string.as_str())
//! .author(crate_authors!())
//! .get_matches();
//! // Fall back to "info" when no "log-level" argument is defined.
//! let level = cmd.value_of("log-level").unwrap_or("info").parse().unwrap();
//!
//! setup_logging(None, level)?;
//! print!("{}", build_info);
//!
//! Ok(())
//! }
//! ```
#[macro_use]
extern crate log;
#[macro_use]
extern crate nydus_error;
#[macro_use]
extern crate serde;
use std::env::current_dir;
use std::io::Result;
use std::path::PathBuf;
use flexi_logger::{
self, style, Cleanup, Criterion, DeferredNow, FileSpec, Logger, Naming,
TS_DASHES_BLANK_COLONS_DOT_BLANK,
};
use log::{Level, LevelFilter, Record};
use flexi_logger::{self, colored_opt_format, opt_format, Logger};
use log::LevelFilter;
pub mod signal;
pub fn log_level_to_verbosity(level: log::LevelFilter) -> usize {
if level == log::LevelFilter::Off {
@@ -21,67 +59,52 @@ pub fn log_level_to_verbosity(level: log::LevelFilter) -> usize {
}
}
fn get_file_name<'a>(record: &'a Record) -> Option<&'a str> {
record.file().map(|v| match v.rfind("/src/") {
None => v,
Some(pos) => match v[..pos].rfind('/') {
None => &v[pos..],
Some(p) => &v[p..],
},
})
pub mod built_info {
include!(concat!(env!("OUT_DIR"), "/built.rs"));
}
fn opt_format(
w: &mut dyn std::io::Write,
now: &mut DeferredNow,
record: &Record,
) -> std::result::Result<(), std::io::Error> {
let level = record.level();
if level == Level::Info {
write!(
w,
"[{}] {} {}",
now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK),
record.level(),
&record.args()
)
} else {
write!(
w,
"[{}] {} [{}:{}] {}",
now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK),
record.level(),
get_file_name(record).unwrap_or("<unnamed>"),
record.line().unwrap_or(0),
&record.args()
)
}
/// Dump program build and version information.
pub fn dump_program_info(prog_version: &str) {
info!(
"Program Version: {}, Git Commit: {:?}, Build Time: {:?}, Profile: {:?}, Rustc Version: {:?}",
prog_version,
built_info::GIT_COMMIT_HASH.unwrap_or_default(),
built_info::BUILT_TIME_UTC,
built_info::PROFILE,
built_info::RUSTC_VERSION,
);
}
fn colored_opt_format(
w: &mut dyn std::io::Write,
now: &mut DeferredNow,
record: &Record,
) -> std::result::Result<(), std::io::Error> {
let level = record.level();
if level == Level::Info {
write!(
w,
"[{}] {} {}",
style(level).paint(now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK).to_string()),
style(level).paint(level.to_string()),
style(level).paint(record.args().to_string())
)
} else {
write!(
w,
"[{}] {} [{}:{}] {}",
style(level).paint(now.format(TS_DASHES_BLANK_COLONS_DOT_BLANK).to_string()),
style(level).paint(level.to_string()),
get_file_name(record).unwrap_or("<unnamed>"),
record.line().unwrap_or(0),
style(level).paint(record.args().to_string())
)
/// Application build and version information.
#[derive(Serialize, Clone)]
pub struct BuildTimeInfo {
pub package_ver: String,
pub git_commit: String,
build_time: String,
profile: String,
rustc: String,
}
impl BuildTimeInfo {
pub fn dump(package_ver: &str) -> (String, Self) {
let info_string = format!(
"\rVersion: \t{}\nGit Commit: \t{}\nBuild Time: \t{}\nProfile: \t{}\nRustc: \t\t{}\n",
package_ver,
built_info::GIT_COMMIT_HASH.unwrap_or_default(),
built_info::BUILT_TIME_UTC,
built_info::PROFILE,
built_info::RUSTC_VERSION,
);
let info = Self {
package_ver: package_ver.to_string(),
git_commit: built_info::GIT_COMMIT_HASH.unwrap_or_default().to_string(),
build_time: built_info::BUILT_TIME_UTC.to_string(),
profile: built_info::PROFILE.to_string(),
rustc: built_info::RUSTC_VERSION.to_string(),
};
(info_string, info)
}
}
@@ -92,14 +115,18 @@ fn colored_opt_format(
/// `flexi_logger` always appends a suffix (default ".log") to the log file name
/// unless one is set explicitly. So when the basename of `log_file_path` is "bar",
/// the newly created log file will be "bar.log".
pub fn setup_logging(
log_file_path: Option<PathBuf>,
level: LevelFilter,
rotation_size: u64,
) -> Result<()> {
pub fn setup_logging(log_file_path: Option<PathBuf>, level: LevelFilter) -> Result<()> {
if let Some(ref path) = log_file_path {
// Do not try to canonicalize the path since the file may not exist yet.
let mut spec = FileSpec::default().suppress_timestamp();
// We rely on the Rust `log` macros rather than `flexi_logger` to limit the current log
// level, so we set the `flexi_logger` level to "trace", which is high enough. Otherwise,
// we couldn't raise the log level above whatever is passed to `flexi_logger`.
let mut logger = Logger::with_env_or_str("trace")
.log_to_file()
.suppress_timestamp()
.append()
.format(opt_format);
// Parse the log file path to get the `basename` and `suffix` (extension), because
// `flexi_logger` will automatically add a `.log` suffix if we don't set one explicitly, see:
@@ -115,15 +142,15 @@
eprintln!("invalid file name input {:?}", path);
einval!()
})?;
spec = spec.basename(basename);
logger = logger.basename(basename);
// `flexi_logger` automatically adds a `.log` suffix if the file name has no extension.
if let Some(suffix) = path.extension() {
let suffix = suffix.to_str().ok_or_else(|| {
eprintln!("invalid file extension {:?}", suffix);
einval!()
})?;
spec = spec.suffix(suffix);
logger = logger.suffix(suffix);
}
// Set log directory
@@ -135,26 +162,7 @@
} else {
p.to_path_buf()
};
spec = spec.directory(dir);
}
// We rely on the Rust `log` macros rather than `flexi_logger` to limit the current log
// level, so we set the `flexi_logger` level to "trace", which is high enough. Otherwise,
// we couldn't raise the log level above whatever is passed to `flexi_logger`.
let mut logger = Logger::try_with_env_or_str("trace")
.map_err(|_e| enosys!())?
.log_to_file(spec)
.append()
.format(opt_format);
// Set log rotation
if rotation_size > 0 {
let log_rotation_size_byte: u64 = rotation_size * 1024 * 1024;
logger = logger.rotate(
Criterion::Size(log_rotation_size_byte),
Naming::Timestamps,
Cleanup::KeepCompressedFiles(10),
);
logger = logger.directory(dir);
}
logger.start().map_err(|e| {
@@ -165,20 +173,13 @@
// We rely on the Rust `log` macros rather than `flexi_logger` to limit the current log
// level, so we set the `flexi_logger` level to "trace", which is high enough. Otherwise,
// we couldn't raise the log level above whatever is passed to `flexi_logger`.
Logger::try_with_env_or_str("trace")
.map_err(|_e| enosys!())?
Logger::with_env_or_str("trace")
.format(colored_opt_format)
.start()
.map_err(|e| eother!(e))?;
}
log::set_max_level(level);
// Dump panic info and backtrace to logger.
log_panics::Config::new()
.backtrace_mode(log_panics::BacktraceMode::Resolved)
.install_panic_hook();
Ok(())
}
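A hedged usage sketch of `setup_logging` (assuming the newer three-argument signature above, where `rotation_size` is in megabytes and `0` disables rotation; the path is illustrative):
```rust
use log::LevelFilter;
use std::path::PathBuf;

fn init_logging() -> std::io::Result<()> {
    // Log to /tmp/nydusd.log (flexi_logger keeps the ".log" suffix), rotate at 100 MB.
    setup_logging(Some(PathBuf::from("/tmp/nydusd.log")), LevelFilter::Info, 100)?;
    log::info!("logging initialized");
    Ok(())
}
```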
@@ -192,13 +193,4 @@ mod tests {
assert_eq!(log_level_to_verbosity(log::LevelFilter::Error), 0);
assert_eq!(log_level_to_verbosity(log::LevelFilter::Warn), 1);
}
#[test]
fn test_log_rotation() {
let log_file = Some(PathBuf::from("test_log_rotation"));
let level = LevelFilter::Info;
let rotation_size = 1; // 1MB
assert!(setup_logging(log_file, level, rotation_size).is_ok());
}
}


@@ -1,64 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::ffi::OsString;
use std::process::Command;
use std::str::FromStr;
use std::{ffi, io};
fn get_version_from_cmd(executable: &ffi::OsStr) -> io::Result<String> {
let output = Command::new(executable).arg("-V").output()?;
let mut v = String::from_utf8(output.stdout).unwrap();
v.pop(); // remove newline
Ok(v)
}
fn get_git_commit_hash() -> String {
let commit = Command::new("git")
.arg("rev-parse")
.arg("--verify")
.arg("HEAD")
.output();
if let Ok(commit_output) = commit {
if let Some(commit) = String::from_utf8_lossy(&commit_output.stdout)
.lines()
.next()
{
return commit.to_string();
}
}
"unknown".to_string()
}
fn get_git_commit_version() -> String {
let tag = Command::new("git").args(["describe", "--tags"]).output();
if let Ok(tag) = tag {
if let Some(tag) = String::from_utf8_lossy(&tag.stdout).lines().next() {
return tag.to_string();
}
}
"unknown".to_string()
}
fn main() {
let rustc_ver = if let Ok(p) = std::env::var("RUSTC") {
let rustc = OsString::from_str(&p).unwrap();
get_version_from_cmd(&rustc).unwrap()
} else {
"<Unknown>".to_string()
};
let profile = std::env::var("PROFILE").unwrap_or_else(|_| "<Unknown>".to_string());
let build_time = time::OffsetDateTime::now_utc()
.format(&time::format_description::well_known::Iso8601::DEFAULT)
.unwrap();
let git_commit_hash = get_git_commit_hash();
let git_commit_version = get_git_commit_version();
println!("cargo:rerun-if-changed=../git/HEAD");
println!("cargo:rustc-env=RUSTC_VERSION={}", rustc_ver);
println!("cargo:rustc-env=PROFILE={}", profile);
println!("cargo:rustc-env=BUILT_TIME_UTC={}", build_time);
println!("cargo:rustc-env=GIT_COMMIT_HASH={}", git_commit_hash);
println!("cargo:rustc-env=GIT_COMMIT_VERSION={}", git_commit_version);
}

builder/Cargo.toml (deleted file)

@@ -1,35 +0,0 @@
[package]
name = "nydus-builder"
version = "0.2.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
indexmap = "2"
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
vmm-sys-util = "0.12.1"
xattr = "1.0.1"
parse-size = "1.1.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.5.0", path = "../utils" }
gix-attributes = "0.25.0"
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "aarch64-unknown-linux-gnu", "aarch64-apple-darwin"]

builder/src/attributes.rs (deleted file)

@@ -1,189 +0,0 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// process parent directory
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}
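A minimal sketch (assuming the `Attributes` type above; the file path is illustrative) of parsing and querying an attributes file:
```rust
fn query_attributes() -> anyhow::Result<()> {
    let attrs = Attributes::from("/path/to/.nydusattributes")?;
    if attrs.is_external("/foo") {
        // /foo is backed by an external blob; per-pattern CRCs are available too.
        println!("crcs: {:?}", attrs.get_crcs("/foo"));
    }
    Ok(())
}
```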

builder/src/chunkdict_generator.rs (deleted file)

@@ -1,283 +0,0 @@
// Copyright (C) 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate Chunkdict RAFS bootstrap.
//! -------------------------------------------------------------------------------------------------
//! Bug 1: Inconsistent Chunk Size Leading to Blob Size Less Than 4K (v6_block_size)
//! Description: The size of chunks is not consistent, which results in the possibility that a blob,
//! composed of a group of these chunks, may be less than 4K (v6_block_size) in size.
//! This inconsistency leads to a failure in passing the size check.
//! -------------------------------------------------------------------------------------------------
//! Bug 2: Incorrect Chunk Number Calculation Due to Premature Check Logic
//! Description: The current logic for calculating the chunk number is based on the formula size/chunk size.
//! However, this approach is flawed as it precedes the actual check which accounts for chunk statistics.
//! Consequently, this leads to inaccurate counting of chunk numbers.
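//! For example, with a 4K (v6_block_size) block, a blob whose chunks total only 3K
//! uncompressed fails the size check (Bug 1); and a chunk count computed as
//! size / chunk_size before the chunk-statistics check no longer matches the set of
//! chunks that survives validation (Bug 2).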
use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node;
use crate::NodeChunk;
use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest;
use std::mem::size_of;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ChunkdictChunkInfo {
pub image_reference: String,
pub version: String,
pub chunk_blob_id: String,
pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64,
pub chunk_uncompressed_offset: u64,
}
pub struct ChunkdictBlobInfo {
pub blob_id: String,
pub blob_compressed_size: u64,
pub blob_uncompressed_size: u64,
pub blob_compressor: String,
pub blob_meta_ci_compressed_size: u64,
pub blob_meta_ci_uncompressed_size: u64,
pub blob_meta_ci_offset: u64,
}
/// Struct to generate chunkdict RAFS bootstrap.
pub struct Generator {}
impl Generator {
// Generate chunkdict RAFS bootstrap.
pub fn generate(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
chunkdict_chunks_origin: Vec<ChunkdictChunkInfo>,
chunkdict_blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
// Validate and remove chunks whose owning blobs are smaller than a block.
let mut chunkdict_chunks = chunkdict_chunks_origin.to_vec();
Self::validate_and_remove_chunks(ctx, &mut chunkdict_chunks);
// Build root tree.
let mut tree = Self::build_root_tree(ctx)?;
// Build child tree.
let child = Self::build_child_tree(ctx, blob_mgr, &chunkdict_chunks, &chunkdict_blobs)?;
let result = vec![child];
tree.children = result;
Self::validate_tree(&tree)?;
// Build bootstrap.
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
/// Validate tree.
fn validate_tree(tree: &Tree) -> Result<()> {
let pre = &mut |t: &Tree| -> Result<()> {
let node = t.borrow_mut_node();
debug!("chunkdict tree: ");
debug!("inode: {}", node);
for chunk in &node.chunks {
debug!("\t chunk: {}", chunk);
}
Ok(())
};
tree.walk_dfs_pre(pre)?;
debug!("chunkdict tree is valid.");
Ok(())
}
/// Validates and removes chunks with a total uncompressed size smaller than the block size limit.
fn validate_and_remove_chunks(ctx: &mut BuildContext, chunkdict: &mut Vec<ChunkdictChunkInfo>) {
let mut chunk_sizes = std::collections::HashMap::new();
// Accumulate the uncompressed size for each chunk_blob_id.
for chunk in chunkdict.iter() {
*chunk_sizes.entry(chunk.chunk_blob_id.clone()).or_insert(0) +=
chunk.chunk_uncompressed_size as u64;
}
// Find all chunk_blob_ids whose total uncompressed size is smaller than v6_block_size.
let small_chunks: Vec<String> = chunk_sizes
.into_iter()
.filter(|&(_, size)| size < ctx.v6_block_size())
.inspect(|(id, _)| {
eprintln!(
"Warning: Blob with id '{}' is smaller than {} bytes.",
id,
ctx.v6_block_size()
)
})
.map(|(id, _)| id)
.collect();
// Retain only chunks whose chunk_blob_id has a total uncompressed size of at least v6_block_size.
chunkdict.retain(|chunk| !small_chunks.contains(&chunk.chunk_blob_id));
}
/// Build the root tree.
pub fn build_root_tree(ctx: &mut BuildContext) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(1);
inode.set_uid(1000);
inode.set_gid(1000);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFDIR as u32);
inode.set_nlink(3);
inode.set_name_size("/".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 0,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/"),
target: PathBuf::from("/"),
target_vec: vec![OsString::from("/")],
symlink: None,
xattrs: RafsXAttrs::default(),
v6_force_extended_inode: true,
};
let root_node = Node::new(inode, node_info, 0);
let tree = Tree::new(root_node);
Ok(tree)
}
/// Build the child tree.
fn build_child_tree(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(2);
inode.set_uid(0);
inode.set_gid(0);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFREG as u32);
inode.set_nlink(1);
inode.set_name_size("chunkdict".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 1,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/chunkdict"),
target: PathBuf::from("/chunkdict"),
target_vec: vec![OsString::from("/"), OsString::from("/chunkdict")],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: true,
};
let mut node = Node::new(inode, node_info, 0);
// Insert chunks.
Self::insert_chunks(ctx, blob_mgr, &mut node, chunkdict_chunks, chunkdict_blobs)?;
let node_size: u64 = node
.chunks
.iter()
.map(|chunk| chunk.inner.uncompressed_size() as u64)
.sum();
node.inode.set_size(node_size);
// Update child count.
node.inode.set_child_count(node.chunks.len() as u32);
let child = Tree::new(node);
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
Ok(child)
}
/// Insert chunks.
fn insert_chunks(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
node: &mut Node,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<()> {
for (index, chunk_info) in chunkdict_chunks.iter().enumerate() {
let chunk_size: u32 = chunk_info.chunk_compressed_size;
let file_offset = index as u64 * chunk_size as u64;
let mut chunk = ChunkWrapper::new(ctx.fs_version);
// Update blob context.
let (blob_index, blob_ctx) =
blob_mgr.get_or_cerate_blob_for_chunkdict(ctx, &chunk_info.chunk_blob_id)?;
let chunk_uncompressed_size = chunk_info.chunk_uncompressed_size;
let pre_d_offset = blob_ctx.current_uncompressed_offset;
blob_ctx.uncompressed_blob_size = pre_d_offset + chunk_uncompressed_size as u64;
blob_ctx.current_uncompressed_offset += chunk_uncompressed_size as u64;
blob_ctx.blob_meta_header.set_ci_uncompressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
blob_ctx.blob_meta_header.set_ci_compressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
let chunkdict_blob_info = chunkdict_blobs
.iter()
.find(|blob| blob.blob_id == chunk_info.chunk_blob_id)
.unwrap();
blob_ctx.blob_compressor =
Algorithm::from_str(chunkdict_blob_info.blob_compressor.as_str())?;
blob_ctx
.blob_meta_header
.set_ci_uncompressed_size(chunkdict_blob_info.blob_meta_ci_uncompressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_size(chunkdict_blob_info.blob_meta_ci_compressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_offset(chunkdict_blob_info.blob_meta_ci_offset);
blob_ctx.blob_meta_header.set_ci_compressor(Algorithm::Zstd);
// Update chunk context.
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
chunk.set_compressed_size(chunk_info.chunk_compressed_size);
chunk.set_compressed_offset(chunk_info.chunk_compressed_offset);
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk.clone()),
});
}
Ok(())
}
}
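The validation step above reduces to summing uncompressed sizes per blob and dropping every chunk whose blob total stays under the block size. A standalone sketch of the same filtering, with hypothetical blob IDs and a 4096-byte block:
```rust
use std::collections::HashMap;

fn filter_small_blobs(chunks: &mut Vec<(String, u64)>, block_size: u64) {
    // Accumulate the uncompressed size per blob ID.
    let mut totals: HashMap<String, u64> = HashMap::new();
    for (blob_id, size) in chunks.iter() {
        *totals.entry(blob_id.clone()).or_insert(0) += size;
    }
    // Keep only chunks whose blob reaches the block size.
    chunks.retain(|(blob_id, _)| totals[blob_id] >= block_size);
}

fn main() {
    let mut chunks = vec![
        ("blob-a".to_string(), 2048),
        ("blob-a".to_string(), 4096), // blob-a totals 6144 >= 4096, kept
        ("blob-b".to_string(), 1024), // blob-b totals 1024 < 4096, dropped
    ];
    filter_small_blobs(&mut chunks, 4096);
    assert_eq!(chunks.len(), 2);
}
```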

(File diff suppressed because it is too large.)

builder/src/core/blob.rs (deleted file)

@@ -1,364 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::borrow::Cow;
use std::slice;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{toc, BlobMetaChunkArray};
use nydus_utils::digest::{self, DigestHasher, RafsDigest};
use nydus_utils::{compress, crypt};
use sha2::digest::Digest;
use super::layout::BlobLayout;
use super::node::Node;
use crate::core::context::Artifact;
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
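// A valid blob ID is a hex-encoded SHA-256 digest, hence 64 characters.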
const VALID_BLOB_ID_LENGTH: usize = 64;
/// Generator for RAFS data blob.
pub(crate) struct Blob {}
impl Blob {
/// Dump blob file and generate chunks
pub(crate) fn dump(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
match ctx.conversion_type {
ConversionType::DirectoryToRafs => {
let mut chunk_data_buf = vec![0u8; RAFS_MAX_CHUNK_SIZE as usize];
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&ctx.prefetch)?;
for (idx, node) in inodes.iter().enumerate() {
let mut node = node.borrow_mut();
let size = node
.dump_node_data(ctx, blob_mgr, blob_writer, &mut chunk_data_buf)
.context("failed to dump blob chunks")?;
if idx < prefetch_entries {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
blob_ctx.blob_prefetch_size += size;
}
}
}
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToRafs
| ConversionType::TargzToRafs
| ConversionType::EStargzToRafs => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToTarfs
| ConversionType::TarToRef
| ConversionType::TargzToRef
| ConversionType::EStargzToRef => {
// Use `sha256(tarball)` as `blob_id` for ref-type conversions.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if ctx.conversion_type == ConversionType::TarToTarfs {
blob_ctx.uncompressed_blob_size = blob_ctx.compressed_blob_size;
}
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::EStargzIndexToRef => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToStargz
| ConversionType::DirectoryToTargz
| ConversionType::DirectoryToStargz
| ConversionType::TargzToStargz => {
unimplemented!()
}
}
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
blob_ctx.set_blob_prefetch_size(ctx);
}
Ok(())
}
pub fn finalize_blob_data(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump buffered batch chunk data if it exists.
if let Some(ref batch) = ctx.blob_batch_generator {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let mut batch = batch.lock().unwrap();
if !batch.chunk_data_buf_is_empty() {
let (_, compressed_size, _) = Node::write_chunk_data(
&ctx,
blob_ctx,
blob_writer,
batch.chunk_data_buf(),
)?;
batch.add_context(compressed_size);
batch.clear_chunk_data_buf();
}
}
}
if !ctx.blob_features.contains(BlobFeatures::SEPARATE)
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.external {
return Ok(());
}
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_RAW,
blob_ctx.compressed_blob_size,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
let blob_digest = RafsDigest {
data: blob_ctx.blob_hash.clone().finalize().into(),
};
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_RAW,
compress::Algorithm::None,
blob_digest,
blob_ctx.compressed_offset(),
blob_ctx.compressed_blob_size,
blob_ctx.uncompressed_blob_size,
)?;
}
}
}
// Check that all blob IDs are valid.
if blob_mgr.external {
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
bail!(
"invalid blob id:{}, length:{}, index:{}",
blob_ctx.blob_id,
blob_ctx.blob_id.len(),
index
);
}
}
}
Ok(())
}
fn get_compression_algorithm_for_meta(ctx: &BuildContext) -> compress::Algorithm {
if ctx.conversion_type.is_to_ref() {
compress::Algorithm::Zstd
} else {
ctx.compressor
}
}
pub(crate) fn dump_meta_data(
ctx: &BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump blob meta for v6 when it has chunks or bootstrap is to be inlined.
if !blob_ctx.blob_meta_info_enabled || blob_ctx.uncompressed_blob_size == 0 {
return Ok(());
}
// Prepare blob meta information data.
let encrypt = ctx.cipher != crypt::Algorithm::None;
let cipher_obj = &blob_ctx.cipher_object;
let cipher_ctx = &blob_ctx.cipher_ctx;
let blob_meta_info = &blob_ctx.blob_meta_info;
let mut ci_data = blob_meta_info.as_byte_slice();
let mut inflate_buf = Vec::new();
let mut header = blob_ctx.blob_meta_header;
if let Some(ref zran) = ctx.blob_zran_generator {
let (inflate_data, inflate_count) = zran.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_zran(true);
header.set_separate_blob(true);
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if let Some(ref batch) = ctx.blob_batch_generator {
let (inflate_data, inflate_count) = batch.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_batch(true);
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if ctx.blob_tar_reader.is_some() {
header.set_separate_blob(true);
};
let mut compressor = Self::get_compression_algorithm_for_meta(ctx);
let (compressed_data, compressed) = compress::compress(ci_data, compressor)
.with_context(|| "failed to compress blob chunk info array".to_string())?;
if !compressed {
compressor = compress::Algorithm::None;
}
let encrypted_ci_data =
crypt::encrypt_with_context(&compressed_data, cipher_obj, cipher_ctx, encrypt)?;
let compressed_offset = blob_writer.pos()?;
let compressed_size = encrypted_ci_data.len() as u64;
let uncompressed_size = ci_data.len() as u64;
header.set_ci_compressor(compressor);
header.set_ci_entries(blob_meta_info.len() as u32);
header.set_ci_compressed_offset(compressed_offset);
header.set_ci_compressed_size(compressed_size as u64);
header.set_ci_uncompressed_size(uncompressed_size as u64);
header.set_aligned(true);
match blob_meta_info {
BlobMetaChunkArray::V1(_) => header.set_chunk_info_v2(false),
BlobMetaChunkArray::V2(_) => header.set_chunk_info_v2(true),
}
if ctx.features.is_enabled(Feature::BlobToc) && blob_ctx.chunk_count > 0 {
header.set_inlined_chunk_digest(true);
}
blob_ctx.blob_meta_header = header;
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.write_blob_meta(ci_data, &header)?;
}
let encrypted_header =
crypt::encrypt_with_context(header.as_bytes(), cipher_obj, cipher_ctx, encrypt)?;
let header_size = encrypted_header.len();
// Write blob meta data and header
match encrypted_ci_data {
Cow::Owned(v) => blob_ctx.write_data(blob_writer, &v)?,
Cow::Borrowed(v) => {
let buf = v.to_vec();
blob_ctx.write_data(blob_writer, &buf)?;
}
}
blob_ctx.write_data(blob_writer, &encrypted_header)?;
// Write tar header for `blob.meta`.
if ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_META,
compressed_size + header_size as u64,
)?;
}
// Generate ToC entry for `blob.meta` and write chunk digest array.
if ctx.features.is_enabled(Feature::BlobToc) {
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let ci_data = if ctx.blob_features.contains(BlobFeatures::BATCH)
|| ctx.blob_features.contains(BlobFeatures::ZRAN)
{
inflate_buf.as_slice()
} else {
blob_ctx.blob_meta_info.as_byte_slice()
};
hasher.digest_update(ci_data);
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_META,
compressor,
hasher.digest_finalize(),
compressed_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
hasher.digest_update(header.as_bytes());
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_META_HEADER,
compress::Algorithm::None,
hasher.digest_finalize(),
compressed_offset + compressed_size,
header_size as u64,
header_size as u64,
)?;
let buf = unsafe {
slice::from_raw_parts(
blob_ctx.blob_chunk_digest.as_ptr() as *const u8,
blob_ctx.blob_chunk_digest.len() * 32,
)
};
assert!(!buf.is_empty());
// The chunk digest array is almost incompressible, no need for compression.
let digest = RafsDigest::from_buf(buf, digest::Algorithm::Sha256);
let compressed_offset = blob_writer.pos()?;
let size = buf.len() as u64;
blob_writer.write_all(buf)?;
blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_DIGEST, size)?;
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_DIGEST,
compress::Algorithm::None,
digest,
compressed_offset,
size,
size,
)?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_default_compression_algorithm_for_meta_ci() {
let mut ctx = BuildContext::default();
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//EStargzIndexToRef
ctx = BuildContext {
conversion_type: ConversionType::EStargzIndexToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TargzToRef
ctx = BuildContext {
conversion_type: ConversionType::TargzToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
}
}

builder/src/core/bootstrap.rs (deleted file)

@@ -1,214 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::{Context, Error, Result};
use nydus_utils::digest::{self, RafsDigest};
use std::ops::Deref;
use nydus_rafs::metadata::layout::{RafsBlobTable, RAFS_V5_ROOT_INODE};
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig, RafsSuperFlags};
use crate::{ArtifactStorage, BlobManager, BootstrapContext, BootstrapManager, BuildContext, Tree};
/// RAFS bootstrap/meta builder.
pub struct Bootstrap {
pub(crate) tree: Tree,
}
impl Bootstrap {
/// Create a new instance of [Bootstrap].
pub fn new(tree: Tree) -> Result<Self> {
Ok(Self { tree })
}
/// Build the final view of the RAFS filesystem meta from the hierarchy `tree`.
pub fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
) -> Result<()> {
// Special handling of the root inode
let mut root_node = self.tree.borrow_mut_node();
assert!(root_node.is_dir());
let index = bootstrap_ctx.generate_next_ino();
// 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE.
assert_eq!(index, RAFS_V5_ROOT_INODE);
root_node.index = index;
root_node.inode.set_ino(index);
ctx.prefetch.insert(&self.tree.node, root_node.deref());
bootstrap_ctx.inode_map.insert(
(
root_node.layer_idx,
root_node.info.src_ino,
root_node.info.src_dev,
),
vec![self.tree.node.clone()],
);
drop(root_node);
Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?;
if ctx.fs_version.is_v6() {
let root_offset = self.tree.node.borrow().v6_offset;
Self::v6_update_dirents(&self.tree, root_offset);
}
Ok(())
}
/// Dump the RAFS filesystem meta information to meta blob.
pub fn dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_storage: &mut Option<ArtifactStorage>,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsBlobTable,
) -> Result<()> {
match blob_table {
RafsBlobTable::V5(table) => self.v5_dump(ctx, bootstrap_ctx, table)?,
RafsBlobTable::V6(table) => self.v6_dump(ctx, bootstrap_ctx, table)?,
}
if let Some(ArtifactStorage::FileDir(p)) = bootstrap_storage {
let bootstrap_data = bootstrap_ctx.writer.as_bytes()?;
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?;
let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(())
} else {
bootstrap_ctx.writer.finalize(Some(String::default()))
}
}
/// Traverse the node tree, set inode index, ino, child_index, child_count etc. according
/// to the RAFS metadata format, then store the nodes into the nodes collection.
fn build_rafs(
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
tree: &mut Tree,
) -> Result<()> {
let parent_node = tree.node.clone();
let mut parent_node = parent_node.borrow_mut();
let parent_ino = parent_node.inode.ino();
let block_size = ctx.v6_block_size();
// In case of multi-layer building, it's possible that the parent node is not a directory.
if parent_node.is_dir() {
parent_node
.inode
.set_child_count(tree.children.len() as u32);
if ctx.fs_version.is_v5() {
parent_node
.inode
.set_child_index(bootstrap_ctx.get_next_ino() as u32);
} else if ctx.fs_version.is_v6() {
// Layout directory entries for v6.
let d_size = parent_node.v6_dirent_size(ctx, tree)?;
parent_node.v6_set_dir_offset(bootstrap_ctx, d_size, block_size)?;
}
}
let mut dirs: Vec<&mut Tree> = Vec::new();
for child in tree.children.iter_mut() {
let child_node = child.node.clone();
let mut child_node = child_node.borrow_mut();
let index = bootstrap_ctx.generate_next_ino();
child_node.index = index;
if ctx.fs_version.is_v5() {
child_node.inode.set_parent(parent_ino);
}
// Handle hardlinks.
// All hardlinked nodes' ino and nlink should be the same.
// We need to find the hardlink node index list in the layer where the node is located,
// because the real_ino may differ among layers.
let mut v6_hardlink_offset: Option<u64> = None;
let key = (
child_node.layer_idx,
child_node.info.src_ino,
child_node.info.src_dev,
);
if let Some(indexes) = bootstrap_ctx.inode_map.get_mut(&key) {
let nlink = indexes.len() as u32 + 1;
// Update nlink for previous hardlink inodes
for n in indexes.iter() {
n.borrow_mut().inode.set_nlink(nlink);
}
let (first_ino, first_offset) = {
let first_node = indexes[0].borrow_mut();
(first_node.inode.ino(), first_node.v6_offset)
};
// set offset for rafs v6 hardlinks
v6_hardlink_offset = Some(first_offset);
child_node.inode.set_nlink(nlink);
child_node.inode.set_ino(first_ino);
indexes.push(child.node.clone());
} else {
child_node.inode.set_ino(index);
child_node.inode.set_nlink(1);
// Store inode real ino
bootstrap_ctx
.inode_map
.insert(key, vec![child.node.clone()]);
}
// update bootstrap_ctx.offset for rafs v6 non-dir nodes.
if !child_node.is_dir() && ctx.fs_version.is_v6() {
child_node.v6_set_offset(bootstrap_ctx, v6_hardlink_offset, block_size)?;
}
ctx.prefetch.insert(&child.node, child_node.deref());
if child_node.is_dir() {
dirs.push(child);
}
}
// According to filesystem semantics, a parent directory should have nlink equal to
// the number of its child directories plus 2.
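// (An empty directory has nlink 2, for "." and its own entry in the parent;
// each child directory's ".." adds one more.)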
if parent_node.is_dir() {
parent_node.inode.set_nlink((2 + dirs.len()) as u32);
}
for dir in dirs {
Self::build_rafs(ctx, bootstrap_ctx, dir)?;
}
Ok(())
}
/// Load a parent RAFS bootstrap and return the `Tree` object representing the filesystem.
pub fn load_parent_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<Tree> {
let rs = if let Some(path) = bootstrap_mgr.f_parent_path.as_ref() {
RafsSuper::load_from_file(path, ctx.configuration.clone(), false).map(|(rs, _)| rs)?
} else {
return Err(Error::msg("bootstrap context's parent bootstrap is null"));
};
let config = RafsSuperConfig {
compressor: ctx.compressor,
digester: ctx.digester,
chunk_size: ctx.chunk_size,
batch_size: ctx.batch_size,
explicit_uidgid: ctx.explicit_uidgid,
version: ctx.fs_version,
is_tarfs_mode: rs.meta.flags.contains(RafsSuperFlags::TARTFS_MODE),
};
config.check_compatibility(&rs.meta)?;
// Reuse the lower layer blob table; we need to append
// the blob entries of the upper layer to the table.
blob_mgr.extend_from_blob_table(ctx, rs.superblock.get_blob_infos())?;
// Build the node tree of the lower layer from a bootstrap file, and add chunks
// of lower nodes to layered_chunk_dict for chunk deduplication in the next step.
Tree::from_bootstrap(&rs, &mut blob_mgr.layered_chunk_dict)
.context("failed to build tree from bootstrap")
}
}

builder/src/core/chunk_dict.rs (deleted file)

@@ -1,280 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::{BTreeMap, HashMap};
use std::mem::size_of;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{bail, Context, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::layout::v5::RafsV5ChunkInfo;
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig};
use nydus_storage::device::BlobInfo;
use nydus_utils::digest::{self, RafsDigest};
use crate::Tree;
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
/// Trait to manage chunk cache for chunk deduplication.
pub trait ChunkDict: Sync + Send + 'static {
/// Add a chunk into the cache.
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm);
/// Get a cached chunk from the cache.
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>>;
/// Get all `BlobInfo` objects referenced by cached chunks.
fn get_blobs(&self) -> Vec<Arc<BlobInfo>>;
/// Get the `BlobInfo` object with inner index `idx`.
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>>;
/// Associate an external index with the inner index.
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32);
/// Get the external index associated with an inner index.
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32>;
/// Get the digest algorithm used to generate chunk digest.
fn digester(&self) -> digest::Algorithm;
}
impl ChunkDict for () {
fn add_chunk(&mut self, _chunk: Arc<ChunkWrapper>, _digester: digest::Algorithm) {}
fn get_chunk(
&self,
_digest: &RafsDigest,
_uncompressed_size: u32,
) -> Option<&Arc<ChunkWrapper>> {
None
}
fn get_blobs(&self) -> Vec<Arc<BlobInfo>> {
Vec::new()
}
fn get_blob_by_inner_idx(&self, _idx: u32) -> Option<&Arc<BlobInfo>> {
None
}
fn set_real_blob_idx(&self, _inner_idx: u32, _out_idx: u32) {
panic!("()::set_real_blob_idx() should not be invoked");
}
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32> {
Some(inner_idx)
}
fn digester(&self) -> digest::Algorithm {
digest::Algorithm::Sha256
}
}
/// An implementation of [ChunkDict] based on [HashMap].
pub struct HashChunkDict {
m: HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)>,
blobs: Vec<Arc<BlobInfo>>,
blob_idx_m: Mutex<BTreeMap<u32, u32>>,
digester: digest::Algorithm,
}
impl ChunkDict for HashChunkDict {
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm) {
if self.digester == digester {
if let Some(e) = self.m.get(chunk.id()) {
e.1.fetch_add(1, Ordering::AcqRel);
} else {
self.m
.insert(chunk.id().to_owned(), (chunk, AtomicU32::new(1)));
}
}
}
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>> {
if let Some((chunk, _)) = self.m.get(digest) {
if chunk.uncompressed_size() == 0 || chunk.uncompressed_size() == uncompressed_size {
return Some(chunk);
}
}
None
}
fn get_blobs(&self) -> Vec<Arc<BlobInfo>> {
self.blobs.clone()
}
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>> {
self.blobs.get(idx as usize)
}
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32) {
self.blob_idx_m.lock().unwrap().insert(inner_idx, out_idx);
}
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32> {
self.blob_idx_m.lock().unwrap().get(&inner_idx).copied()
}
fn digester(&self) -> digest::Algorithm {
self.digester
}
}
impl HashChunkDict {
/// Create a new instance of [HashChunkDict].
pub fn new(digester: digest::Algorithm) -> Self {
HashChunkDict {
m: Default::default(),
blobs: vec![],
blob_idx_m: Mutex::new(Default::default()),
digester,
}
}
/// Get an immutable reference to the internal `HashMap`.
pub fn hashmap(&self) -> &HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)> {
&self.m
}
/// Parse commandline argument for chunk dictionary and load chunks into the dictionary.
pub fn from_commandline_arg(
arg: &str,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Arc<dyn ChunkDict>> {
let file_path = parse_chunk_dict_arg(arg)?;
HashChunkDict::from_bootstrap_file(&file_path, config, rafs_config)
.map(|d| Arc::new(d) as Arc<dyn ChunkDict>)
}
/// Load chunks from the RAFS filesystem into the chunk dictionary.
pub fn from_bootstrap_file(
path: &Path,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Self> {
let (rs, _) = RafsSuper::load_from_file(path, config, true)
.with_context(|| format!("failed to open bootstrap file {:?}", path))?;
let mut d = HashChunkDict {
m: HashMap::new(),
blobs: rs.superblock.get_blob_infos(),
blob_idx_m: Mutex::new(BTreeMap::new()),
digester: rafs_config.digester,
};
rafs_config.check_compatibility(&rs.meta)?;
if rs.meta.is_v5() || rs.meta.has_inlined_chunk_digest() {
Tree::from_bootstrap(&rs, &mut d).context("failed to build tree from bootstrap")?;
} else if rs.meta.is_v6() {
d.load_chunk_table(&rs)
.context("failed to load chunk table")?;
} else {
unimplemented!()
}
Ok(d)
}
fn load_chunk_table(&mut self, rs: &RafsSuper) -> Result<()> {
let size = rs.meta.chunk_table_size as usize;
if size == 0 || self.digester != rs.meta.get_digester() {
return Ok(());
}
let unit_size = size_of::<RafsV5ChunkInfo>();
if size % unit_size != 0 {
return Err(std::io::Error::from_raw_os_error(libc::EINVAL)).with_context(|| {
format!(
"load_chunk_table: invalid rafs v6 chunk table size {}",
size
)
});
}
for idx in 0..(size / unit_size) {
let chunk = rs.superblock.get_chunk_info(idx)?;
let chunk_info = Arc::new(ChunkWrapper::from_chunk_info(chunk));
self.add_chunk(chunk_info, self.digester);
}
Ok(())
}
}
/// Parse a chunk dictionary argument string.
///
/// # Argument
/// `arg` may be in the form of:
/// - type=path: the type of the external source and the corresponding path
/// - path: the type defaults to "bootstrap"
///
/// For example:
/// bootstrap=image.boot
/// image.boot
/// ~/image/image.boot
/// boltdb=/var/db/dict.db (not supported yet)
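///
/// A call such as `parse_chunk_dict_arg("bootstrap=image.boot")` or
/// `parse_chunk_dict_arg("image.boot")` therefore yields `PathBuf::from("image.boot")`.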
pub fn parse_chunk_dict_arg(arg: &str) -> Result<PathBuf> {
let (file_type, file_path) = match arg.find('=') {
None => ("bootstrap", arg),
Some(idx) => (&arg[0..idx], &arg[idx + 1..]),
};
debug!("parse chunk dict argument {}={}", file_type, file_path);
match file_type {
"bootstrap" => Ok(PathBuf::from(file_path)),
_ => bail!("invalid chunk dict type {}", file_type),
}
}
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_utils::{compress, digest};
use std::path::PathBuf;
#[test]
fn test_null_dict() {
let mut dict = Box::new(()) as Box<dyn ChunkDict>;
let chunk = Arc::new(ChunkWrapper::new(RafsVersion::V5));
dict.add_chunk(chunk.clone(), digest::Algorithm::Sha256);
assert!(dict.get_chunk(chunk.id(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 0);
assert_eq!(dict.get_real_blob_idx(5).unwrap(), 5);
}
#[test]
fn test_chunk_dict() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path = PathBuf::from(root_dir);
source_path.push("../tests/texture/bootstrap/rafs-v5.boot");
let path = source_path.to_str().unwrap();
let rafs_config = RafsSuperConfig {
version: RafsVersion::V5,
compressor: compress::Algorithm::Lz4Block,
digester: digest::Algorithm::Blake3,
chunk_size: 0x100000,
batch_size: 0,
explicit_uidgid: true,
is_tarfs_mode: false,
};
let dict =
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
.unwrap();
assert!(dict.get_chunk(&RafsDigest::default(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 18);
dict.set_real_blob_idx(0, 10);
assert_eq!(dict.get_real_blob_idx(0), Some(10));
assert_eq!(dict.get_real_blob_idx(1), None);
}
}
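A minimal sketch (assuming the types above are in scope; import paths are illustrative) of filling a `HashChunkDict` and querying it during deduplication:
```rust
use std::sync::Arc;
use nydus_rafs::metadata::{chunk::ChunkWrapper, RafsVersion};
use nydus_utils::digest;

fn dedup_sketch() {
    let mut dict = HashChunkDict::new(digest::Algorithm::Sha256);
    let chunk = Arc::new(ChunkWrapper::new(RafsVersion::V6));
    dict.add_chunk(chunk.clone(), digest::Algorithm::Sha256);
    // A later build can reuse a chunk with the same digest and a compatible
    // uncompressed size instead of dumping it again.
    assert!(dict.get_chunk(chunk.id(), chunk.uncompressed_size()).is_some());
}
```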

(File diff suppressed because it is too large.)

builder/src/core/feature.rs (deleted file)

@@ -1,94 +0,0 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashSet;
use std::convert::TryFrom;
use anyhow::{bail, Result};
const ERR_UNSUPPORTED_FEATURE: &str = "unsupported feature";
/// Feature flags to control behavior of RAFS filesystem builder.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum Feature {
/// Append a Table Of Content footer to RAFS v6 data blob, to help locate data sections.
BlobToc,
}
impl TryFrom<&str> for Feature {
type Error = anyhow::Error;
fn try_from(f: &str) -> Result<Self> {
match f {
"blob-toc" => Ok(Self::BlobToc),
_ => bail!(
"{} `{}`, please try upgrading to the latest nydus-image",
ERR_UNSUPPORTED_FEATURE,
f,
),
}
}
}
/// A set of enabled feature flags to control behavior of RAFS filesystem builder
#[derive(Clone, Debug)]
pub struct Features(HashSet<Feature>);
impl Default for Features {
fn default() -> Self {
Self::new()
}
}
impl Features {
/// Create a new instance of [Features].
pub fn new() -> Self {
Self(HashSet::new())
}
/// Check whether a feature is enabled or not.
pub fn is_enabled(&self, feature: Feature) -> bool {
self.0.contains(&feature)
}
}
impl TryFrom<&str> for Features {
type Error = anyhow::Error;
fn try_from(features: &str) -> Result<Self> {
let mut list = Features::new();
for feat in features.trim().split(',') {
if !feat.is_empty() {
let feature = Feature::try_from(feat.trim())?;
list.0.insert(feature);
}
}
Ok(list)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_feature() {
assert_eq!(Feature::try_from("blob-toc").unwrap(), Feature::BlobToc);
Feature::try_from("unknown-feature-bit").unwrap_err();
}
#[test]
fn test_features() {
let features = Features::try_from("blob-toc").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc,").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc, ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from(" blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
}
}


@@ -1,62 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::Result;
use std::ops::Deref;
use super::node::Node;
use crate::{Overlay, Prefetch, TreeNode};
#[derive(Clone)]
pub struct BlobLayout {}
impl BlobLayout {
pub fn layout_blob_simple(prefetch: &Prefetch) -> Result<(Vec<TreeNode>, usize)> {
let (pre, non_pre) = prefetch.get_file_nodes();
let mut inodes: Vec<TreeNode> = pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let mut non_prefetch_inodes: Vec<TreeNode> = non_pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let prefetch_entries = inodes.len();
inodes.append(&mut non_prefetch_inodes);
Ok((inodes, prefetch_entries))
}
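// For example, with three prefetch files and two other upper-layer nodes (a
// sketch, assuming all five pass the overlay check below), the returned
// vector is [p0, p1, p2, n0, n1] with prefetch_entries == 3, so prefetched
// data can be laid out at the head of the blob.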
#[inline]
fn should_dump_node(node: &Node) -> bool {
node.overlay == Overlay::UpperAddition || node.overlay == Overlay::UpperModification
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{core::node::NodeInfo, Tree};
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
#[test]
fn test_layout_blob_simple() {
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let mut node1 = Node::new(inode.clone(), NodeInfo::default(), 1);
node1.overlay = Overlay::UpperAddition;
let tree = Tree::new(node1);
let mut prefetch = Prefetch::default();
prefetch.insert(&tree.node, tree.node.borrow().deref());
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap();
assert_eq!(inodes.len(), 1);
assert_eq!(prefetch_entries, 0);
}
}


@@ -1,16 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
pub(crate) mod blob;
pub(crate) mod bootstrap;
pub(crate) mod chunk_dict;
pub(crate) mod context;
pub(crate) mod feature;
pub(crate) mod layout;
pub(crate) mod node;
pub(crate) mod overlay;
pub(crate) mod prefetch;
pub(crate) mod tree;
pub(crate) mod v5;
pub(crate) mod v6;

File diff suppressed because it is too large


@@ -1,361 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021-2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Execute file/directory whiteout rules when merging multiple RAFS filesystems
//! according to the OCI or Overlayfs specifications.
use std::ffi::{OsStr, OsString};
use std::fmt::{self, Display, Formatter};
use std::os::unix::ffi::OsStrExt;
use std::str::FromStr;
use anyhow::{anyhow, Error, Result};
use super::node::Node;
/// Prefix for OCI whiteout file.
pub const OCISPEC_WHITEOUT_PREFIX: &str = ".wh.";
/// Prefix for OCI whiteout opaque.
pub const OCISPEC_WHITEOUT_OPAQUE: &str = ".wh..wh..opq";
/// Extended attribute key for Overlayfs whiteout opaque.
pub const OVERLAYFS_WHITEOUT_OPAQUE: &str = "trusted.overlay.opaque";
/// RAFS filesystem overlay specifications.
///
/// When merging multiple RAFS filesystems into one, special rules are needed to white out
/// files/directories in lower/parent filesystems. The whiteout specification defined by the
/// OCI image specification and Linux Overlayfs are widely adopted, so both of them are supported
/// by RAFS filesystem.
///
/// # Overlayfs Whiteout
///
/// In order to support rm and rmdir without changing the lower filesystem, an overlay filesystem
/// needs to record in the upper filesystem that files have been removed. This is done using
/// whiteouts and opaque directories (non-directories are always opaque).
///
/// A whiteout is created as a character device with 0/0 device number. When a whiteout is found
/// in the upper level of a merged directory, any matching name in the lower level is ignored,
/// and the whiteout itself is also hidden.
///
/// A directory is made opaque by setting the xattr “trusted.overlay.opaque” to “y”. Where the upper
/// filesystem contains an opaque directory, any directory in the lower filesystem with the same
/// name is ignored.
///
/// # OCI Image Whiteout
/// - A whiteout file is an empty file with a special filename that signifies a path should be
/// deleted.
/// - A whiteout filename consists of the prefix .wh. plus the basename of the path to be deleted.
/// - As files prefixed with .wh. are special whiteout markers, it is not possible to create a
/// filesystem which has a file or directory with a name beginning with .wh..
/// - Once a whiteout is applied, the whiteout itself MUST also be hidden.
/// - Whiteout files MUST only apply to resources in lower/parent layers.
/// - Files that are present in the same layer as a whiteout file can only be hidden by whiteout
/// files in subsequent layers.
/// - In addition to expressing that a single entry should be removed from a lower layer, layers
/// may remove all of the children using an opaque whiteout entry.
/// - An opaque whiteout entry is a file with the name .wh..wh..opq indicating that all siblings
/// are hidden in the lower layer.
#[derive(Clone, Copy, PartialEq)]
pub enum WhiteoutSpec {
/// Overlay whiteout rules according to the OCI image specification.
///
/// https://github.com/opencontainers/image-spec/blob/master/layer.md#whiteouts
Oci,
/// Overlay whiteout rules according to the Linux Overlayfs specification.
///
/// "whiteouts and opaque directories" in https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
Overlayfs,
/// No whiteout, keep all content from lower/parent filesystems.
None,
}
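// A minimal sketch of the OCI whiteout naming rules described above, using
// the constants defined in this file; the file names are hypothetical:
//
//     /// Return the sibling path an OCI whiteout file deletes, if any.
//     fn oci_whiteout_target(name: &str) -> Option<&str> {
//         if name == OCISPEC_WHITEOUT_OPAQUE {
//             None // opaque marker: hides all siblings, not a single path
//         } else {
//             name.strip_prefix(OCISPEC_WHITEOUT_PREFIX)
//         }
//     }
//
//     assert_eq!(oci_whiteout_target(".wh.data"), Some("data"));
//     assert_eq!(oci_whiteout_target(".wh..wh..opq"), None);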
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec {
fn default() -> Self {
Self::Oci
}
}
impl FromStr for WhiteoutSpec {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.to_lowercase().as_str() {
"oci" => Ok(Self::Oci),
"overlayfs" => Ok(Self::Overlayfs),
"none" => Ok(Self::None),
_ => Err(anyhow!("invalid whiteout spec")),
}
}
}
/// RAFS filesystem overlay operation types.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WhiteoutType {
OciOpaque,
OciRemoval,
OverlayFsOpaque,
OverlayFsRemoval,
}
impl WhiteoutType {
pub fn is_removal(&self) -> bool {
*self == WhiteoutType::OciRemoval || *self == WhiteoutType::OverlayFsRemoval
}
}
/// RAFS filesystem node overlay state.
#[allow(dead_code)]
#[derive(Clone, Debug, PartialEq)]
pub enum Overlay {
Lower,
UpperAddition,
UpperModification,
}
impl Overlay {
pub fn is_lower_layer(&self) -> bool {
self == &Overlay::Lower
}
}
impl Display for Overlay {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
match self {
Overlay::Lower => write!(f, "LOWER"),
Overlay::UpperAddition => write!(f, "ADDED"),
Overlay::UpperModification => write!(f, "MODIFIED"),
}
}
}
impl Node {
/// Check whether the inode is a special overlayfs whiteout file.
pub fn is_overlayfs_whiteout(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs {
return false;
}
self.inode.is_chrdev()
&& nydus_utils::compact::major_dev(self.info.rdev) == 0
&& nydus_utils::compact::minor_dev(self.info.rdev) == 0
}
/// Check whether the inode (directory) is an overlayfs whiteout opaque.
pub fn is_overlayfs_opaque(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs || !self.is_dir() {
return false;
}
// A directory is made opaque by setting the xattr "trusted.overlay.opaque" to "y".
if let Some(v) = self
.info
.xattrs
.get(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE))
{
if let Ok(v) = std::str::from_utf8(v.as_slice()) {
return v == "y";
}
}
false
}
/// Get whiteout type to process the inode.
pub fn whiteout_type(&self, spec: WhiteoutSpec) -> Option<WhiteoutType> {
if self.overlay == Overlay::Lower {
return None;
}
match spec {
WhiteoutSpec::Oci => {
if let Some(name) = self.name().to_str() {
if name == OCISPEC_WHITEOUT_OPAQUE {
return Some(WhiteoutType::OciOpaque);
} else if name.starts_with(OCISPEC_WHITEOUT_PREFIX) {
return Some(WhiteoutType::OciRemoval);
}
}
}
WhiteoutSpec::Overlayfs => {
if self.is_overlayfs_whiteout(spec) {
return Some(WhiteoutType::OverlayFsRemoval);
} else if self.is_overlayfs_opaque(spec) {
return Some(WhiteoutType::OverlayFsOpaque);
}
}
WhiteoutSpec::None => {
return None;
}
}
None
}
/// Get original filename from a whiteout filename.
pub fn origin_name(&self, t: WhiteoutType) -> Option<&OsStr> {
if let Some(name) = self.name().to_str() {
if t == WhiteoutType::OciRemoval {
// the whiteout filename prefixes the basename of the path to be deleted with ".wh.".
return Some(OsStr::from_bytes(
name[OCISPEC_WHITEOUT_PREFIX.len()..].as_bytes(),
));
} else if t == WhiteoutType::OverlayFsRemoval {
// the whiteout file has the same name as the file to be deleted.
return Some(name.as_ref());
}
}
None
}
}
#[cfg(test)]
mod tests {
use nydus_rafs::metadata::{inode::InodeWrapper, layout::v5::RafsV5Inode};
use crate::core::node::NodeInfo;
use super::*;
#[test]
fn test_white_spec_from_str() {
let spec = WhiteoutSpec::default();
assert!(matches!(spec, WhiteoutSpec::Oci));
assert!(WhiteoutSpec::from_str("oci").is_ok());
assert!(WhiteoutSpec::from_str("overlayfs").is_ok());
assert!(WhiteoutSpec::from_str("none").is_ok());
assert!(WhiteoutSpec::from_str("foo").is_err());
}
#[test]
fn test_white_type_removal_check() {
let t1 = WhiteoutType::OciOpaque;
let t2 = WhiteoutType::OciRemoval;
let t3 = WhiteoutType::OverlayFsOpaque;
let t4 = WhiteoutType::OverlayFsRemoval;
assert!(!t1.is_removal());
assert!(t2.is_removal());
assert!(!t3.is_removal());
assert!(t4.is_removal());
}
#[test]
fn test_overlay_low_layer_check() {
let t1 = Overlay::Lower;
let t2 = Overlay::UpperAddition;
let t3 = Overlay::UpperModification;
assert!(t1.is_lower_layer());
assert!(!t2.is_lower_layer());
assert!(!t3.is_lower_layer());
}
#[test]
fn test_node() {
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, NodeInfo::default(), 0);
assert!(!node.is_overlayfs_whiteout(WhiteoutSpec::None));
assert!(node.is_overlayfs_whiteout(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsRemoval
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info: NodeInfo = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsOpaque
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let inode = InodeWrapper::V5(RafsV5Inode::default());
let info = NodeInfo::default();
let mut node = Node::new(inode, info, 0);
assert_eq!(node.whiteout_type(WhiteoutSpec::None), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Oci), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
node.overlay = Overlay::Lower;
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
let name = OCISPEC_WHITEOUT_PREFIX.to_string() + "foo";
info.target_vec.push(name.clone().into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciRemoval
);
assert_eq!(node.origin_name(WhiteoutType::OciRemoval).unwrap(), "foo");
assert_eq!(node.origin_name(WhiteoutType::OciOpaque), None);
assert_eq!(
node.origin_name(WhiteoutType::OverlayFsRemoval).unwrap(),
OsStr::new(&name)
);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
info.target_vec.push(OCISPEC_WHITEOUT_OPAQUE.into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciOpaque
);
}
}


@@ -1,391 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::path::PathBuf;
use std::str::FromStr;
use anyhow::{anyhow, Context, Error, Result};
use indexmap::IndexMap;
use nydus_rafs::metadata::layout::v5::RafsV5PrefetchTable;
use nydus_rafs::metadata::layout::v6::{calculate_nid, RafsV6PrefetchTable};
use super::node::Node;
use crate::core::tree::TreeNode;
/// Filesystem data prefetch policy.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum PrefetchPolicy {
None,
/// Prefetch will be issued from the Fs layer, which leverages inode/chunkinfo to prefetch
/// data from the blob no matter where it resides (OSS/Localfs), and is willing to cache the
/// data into the blobcache (if one exists). It's more nimble. With this policy applied, the
/// image builder currently puts prefetch files' data into a continuous region within the
/// blob, which behaves very similarly to the `Blob` policy.
Fs,
/// Prefetch will be issued directly from backend/blob layer
Blob,
}
impl Default for PrefetchPolicy {
fn default() -> Self {
Self::None
}
}
impl FromStr for PrefetchPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"none" => Ok(Self::None),
"fs" => Ok(Self::Fs),
"blob" => Ok(Self::Blob),
_ => Err(anyhow!("invalid prefetch policy")),
}
}
}
/// Gather prefetch patterns from STDIN line by line.
///
/// Input format:
/// printf "/relative/path/to/rootfs/1\n/relative/path/to/rootfs/2"
///
/// It does not guarantee that a specified path exists in the local filesystem, because the
/// path may only exist in a parent image/layer.
fn get_patterns() -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let stdin = std::io::stdin();
let mut patterns = Vec::new();
loop {
let mut file = String::new();
let size = stdin
.read_line(&mut file)
.context("failed to read prefetch pattern")?;
if size == 0 {
return generate_patterns(patterns);
}
patterns.push(file);
}
}
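// Patterns are typically piped into the builder; a hypothetical invocation,
// assuming the CLI reads them from STDIN when a prefetch policy is set:
//
//     printf '/usr/bin\n/etc/passwd\n' | nydus-image create --prefetch-policy fs <src-dir>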
fn generate_patterns(input: Vec<String>) -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let mut patterns = IndexMap::new();
for file in &input {
let file_trimmed: PathBuf = file.trim().into();
// Sanity check for the list format.
if !file_trimmed.is_absolute() {
warn!(
"Illegal file path {} specified, should be absolute path",
file
);
continue;
}
let mut current_path = file_trimmed.clone();
let mut skip = patterns.contains_key(&current_path);
while !skip && current_path.pop() {
if patterns.contains_key(&current_path) {
skip = true;
break;
}
}
if skip {
warn!(
"prefetch pattern {} is covered by previous pattern and thus omitted",
file
);
} else {
debug!(
"prefetch pattern: {}, trimmed file name {:?}",
file, file_trimmed
);
patterns.insert(file_trimmed, None);
}
}
Ok(patterns)
}
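// For instance, the input ["/a/b", "/a/b/c", "/f"] yields the patterns
// {"/a/b", "/f"}: "/a/b/c" is covered by its ancestor "/a/b" and omitted.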
/// Manage filesystem data prefetch configuration and state for builder.
#[derive(Default, Clone)]
pub struct Prefetch {
pub policy: PrefetchPolicy,
pub disabled: bool,
// Patterns to generate prefetch inode array, which will be put into the prefetch array
// in the RAFS bootstrap. It may access directory or file inodes.
patterns: IndexMap<PathBuf, Option<TreeNode>>,
// File list to help optimizing layout of data blobs.
// Files from this list may be put at the head of the data blob for better prefetch
// performance. The index of the matched prefetch pattern is stored in the `usize`,
// which helps sort the prefetch files in the final layout.
// It only stores regular files.
files_prefetch: Vec<(TreeNode, usize)>,
// It stores all non-prefetch files that are not stored in `files_prefetch`,
// including regular files, dirs, symlinks, etc.,
// in the same order as a BFS traversal of the file tree.
files_non_prefetch: Vec<TreeNode>,
}
impl Prefetch {
/// Create a new instance of [Prefetch].
pub fn new(policy: PrefetchPolicy) -> Result<Self> {
let patterns = if policy != PrefetchPolicy::None {
get_patterns().context("failed to get prefetch patterns")?
} else {
IndexMap::new()
};
Ok(Self {
policy,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10000),
files_non_prefetch: Vec::with_capacity(10000),
})
}
/// Insert a node into the prefetch vector if it matches a prefetch rule, recording the
/// index of the matched pattern; otherwise insert it into the non-prefetch vector.
pub fn insert(&mut self, obj: &TreeNode, node: &Node) {
// Newly created root inode of this rafs has zero size
if self.policy == PrefetchPolicy::None
|| self.disabled
|| (node.inode.is_reg() && node.inode.size() == 0)
{
self.files_non_prefetch.push(obj.clone());
return;
}
let mut path = node.target().clone();
let mut exact_match = true;
loop {
if let Some((idx, _, v)) = self.patterns.get_full_mut(&path) {
if exact_match {
*v = Some(obj.clone());
}
if node.is_reg() {
self.files_prefetch.push((obj.clone(), idx));
} else {
self.files_non_prefetch.push(obj.clone());
}
return;
}
// If no exact match, try to match parent dir until root.
if !path.pop() {
self.files_non_prefetch.push(obj.clone());
return;
}
exact_match = false;
}
}
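// For example, with patterns {"/a/b", "/f"}, a regular file "/a/b/c/log"
// walks up its parents, matches "/a/b" (pattern index 0) and is pushed to
// `files_prefetch`, while "/z/cfg" matches nothing and ends up in
// `files_non_prefetch`.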
/// Get node Vector of files in the prefetch list and non-prefetch list.
/// The order of prefetch files is the same as the order of prefetch patterns.
/// The order of non-prefetch files is the same as the order of BFS traversal of file tree.
pub fn get_file_nodes(&self) -> (Vec<TreeNode>, Vec<TreeNode>) {
let mut p_files = self.files_prefetch.clone();
p_files.sort_by_key(|k| k.1);
let p_files = p_files.into_iter().map(|(s, _)| s).collect();
(p_files, self.files_non_prefetch.clone())
}
/// Get the number of `valid` prefetch rules.
pub fn fs_prefetch_rule_count(&self) -> u32 {
if self.policy == PrefetchPolicy::Fs {
self.patterns.values().filter(|v| v.is_some()).count() as u32
} else {
0
}
}
/// Generate filesystem layer prefetch list for RAFS v5.
pub fn get_v5_prefetch_table(&mut self) -> Option<RafsV5PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV5PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
assert!(node.inode.ino() < u32::MAX as u64);
prefetch_table.add_entry(node.inode.ino() as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Generate filesystem layer prefetch list for RAFS v6.
pub fn get_v6_prefetch_table(&mut self, meta_addr: u64) -> Option<RafsV6PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV6PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
let ino = node.inode.ino();
debug_assert!(ino > 0);
let nid = calculate_nid(node.v6_offset, meta_addr);
// A 32-bit nid addresses 2^32 32-byte inode slots, i.e. a 128GB bootstrap,
// which is large enough, so the cast below is safe.
assert!(nid < u32::MAX as u64);
trace!(
"v6 prefetch table: map node index {} to offset {} nid {} path {:?} name {:?}",
ino,
node.v6_offset,
nid,
node.path(),
node.name()
);
prefetch_table.add_entry(nid as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Disable filesystem data prefetch.
pub fn disable(&mut self) {
self.disabled = true;
}
/// Reset to initialization state.
pub fn clear(&mut self) {
self.disabled = false;
self.patterns.clear();
self.files_prefetch.clear();
self.files_non_prefetch.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::node::NodeInfo;
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
use std::cell::RefCell;
#[test]
fn test_generate_pattern() {
let input = vec![
"/a/b".to_string(),
"/a/b/c".to_string(),
"/a/b/d".to_string(),
"/a/b/d/e".to_string(),
"/f".to_string(),
"/h/i".to_string(),
];
let patterns = generate_patterns(input).unwrap();
assert_eq!(patterns.len(), 3);
assert!(patterns.contains_key(&PathBuf::from("/a/b")));
assert!(patterns.contains_key(&PathBuf::from("/f")));
assert!(patterns.contains_key(&PathBuf::from("/h/i")));
assert!(!patterns.contains_key(&PathBuf::from("/")));
assert!(!patterns.contains_key(&PathBuf::from("/a")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/c")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d/e")));
assert!(!patterns.contains_key(&PathBuf::from("/k")));
}
#[test]
fn test_prefetch_policy() {
let policy = PrefetchPolicy::from_str("fs").unwrap();
assert_eq!(policy, PrefetchPolicy::Fs);
let policy = PrefetchPolicy::from_str("blob").unwrap();
assert_eq!(policy, PrefetchPolicy::Blob);
let policy = PrefetchPolicy::from_str("none").unwrap();
assert_eq!(policy, PrefetchPolicy::None);
PrefetchPolicy::from_str("").unwrap_err();
PrefetchPolicy::from_str("invalid").unwrap_err();
}
#[test]
fn test_prefetch() {
let input = vec![
"/a/b".to_string(),
"/f".to_string(),
"/h/i".to_string(),
"/k".to_string(),
];
let patterns = generate_patterns(input).unwrap();
let mut prefetch = Prefetch {
policy: PrefetchPolicy::Fs,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10),
files_non_prefetch: Vec::with_capacity(10),
};
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let info = NodeInfo::default();
let mut info1 = info.clone();
info1.target = PathBuf::from("/f");
let node1 = Node::new(inode.clone(), info1, 1);
let node1 = TreeNode::new(RefCell::from(node1));
prefetch.insert(&node1, &node1.borrow());
let inode2 = inode.clone();
let mut info2 = info.clone();
info2.target = PathBuf::from("/a/b");
let node2 = Node::new(inode2, info2, 1);
let node2 = TreeNode::new(RefCell::from(node2));
prefetch.insert(&node2, &node2.borrow());
let inode3 = inode.clone();
let mut info3 = info.clone();
info3.target = PathBuf::from("/h/i/j");
let node3 = Node::new(inode3, info3, 1);
let node3 = TreeNode::new(RefCell::from(node3));
prefetch.insert(&node3, &node3.borrow());
let inode4 = inode.clone();
let mut info4 = info.clone();
info4.target = PathBuf::from("/z");
let node4 = Node::new(inode4, info4, 1);
let node4 = TreeNode::new(RefCell::from(node4));
prefetch.insert(&node4, &node4.borrow());
let inode5 = inode.clone();
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_size(0);
let mut info5 = info;
info5.target = PathBuf::from("/a/b/d");
let node5 = Node::new(inode5, info5, 1);
let node5 = TreeNode::new(RefCell::from(node5));
prefetch.insert(&node5, &node5.borrow());
// node1, node2
assert_eq!(prefetch.fs_prefetch_rule_count(), 2);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 4);
assert_eq!(non_pre.len(), 1);
let pre_str: Vec<String> = pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(pre_str, vec!["/a/b", "/a/b/d", "/f", "/h/i/j"]);
let non_pre_str: Vec<String> = non_pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(non_pre_str, vec!["/z"]);
prefetch.clear();
assert_eq!(prefetch.fs_prefetch_rule_count(), 0);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 0);
assert_eq!(non_pre.len(), 0);
}
}


@@ -1,533 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! An in-memory tree structure to maintain information for filesystem metadata.
//!
//! Steps to build the first layer for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Traverse the upper tree (FileSystemTree) to dump bootstrap and data blobs.
//!
//! Steps to build the second and following layers for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Load the lower tree (MetadataTree) from a metadata blob.
//! - Merge the final tree (OverlayTree) by applying the upper tree (FileSystemTree) to the
//! lower tree (MetadataTree).
//! - Traverse the merged tree (OverlayTree) to dump bootstrap and data blobs.
use std::cell::{RefCell, RefMut};
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use anyhow::{bail, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::{bytes_to_os_str, RafsXAttrs};
use nydus_rafs::metadata::{Inode, RafsInodeExt, RafsSuper};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer};
use super::node::{ChunkSource, Node, NodeChunk, NodeInfo};
use super::overlay::{Overlay, WhiteoutType};
use crate::core::overlay::OVERLAYFS_WHITEOUT_OPAQUE;
use crate::{BuildContext, ChunkDict};
/// Type alias for tree internal node.
pub type TreeNode = Rc<RefCell<Node>>;
/// An in-memory tree structure to maintain information and topology of filesystem nodes.
#[derive(Clone)]
pub struct Tree {
/// Filesystem node.
pub node: TreeNode,
/// Cached base name.
name: Vec<u8>,
/// Children tree nodes.
pub children: Vec<Tree>,
}
impl Tree {
/// Create a new instance of `Tree` from a filesystem node.
pub fn new(node: Node) -> Self {
let name = node.name().as_bytes().to_vec();
Tree {
node: Rc::new(RefCell::new(node)),
name,
children: Vec::new(),
}
}
/// Load a `Tree` from a bootstrap file, optionally caching chunk information in `chunk_dict`.
pub fn from_bootstrap<T: ChunkDict>(rs: &RafsSuper, chunk_dict: &mut T) -> Result<Self> {
let tree_builder = MetadataTreeBuilder::new(rs);
let root_ino = rs.superblock.root_ino();
let root_inode = rs.get_extended_inode(root_ino, true)?;
let root_node = MetadataTreeBuilder::parse_node(rs, root_inode, PathBuf::from("/"))?;
let mut tree = Tree::new(root_node);
tree.children = timing_tracer!(
{ tree_builder.load_children(root_ino, Option::<PathBuf>::None, chunk_dict, true,) },
"load_tree_from_bootstrap"
)?;
Ok(tree)
}
/// Get name of the tree node.
pub fn name(&self) -> &[u8] {
&self.name
}
/// Set `Node` associated with the tree node.
pub fn set_node(&mut self, node: Node) {
self.node.replace(node);
}
/// Get mutably borrowed value to access the associated `Node` object.
pub fn borrow_mut_node(&self) -> RefMut<'_, Node> {
self.node.as_ref().borrow_mut()
}
/// Walk all nodes in DFS mode.
pub fn walk_dfs<F1, F2>(&self, pre: &mut F1, post: &mut F2) -> Result<()>
where
F1: FnMut(&Tree) -> Result<()>,
F2: FnMut(&Tree) -> Result<()>,
{
pre(self)?;
for child in &self.children {
child.walk_dfs(pre, post)?;
}
post(self)?;
Ok(())
}
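// For a tree r -> {a, b}, `pre` fires in the order r, a, b while `post`
// fires in the order a, b, r.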
/// Walk all nodes in pre DFS mode.
pub fn walk_dfs_pre<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(cb, &mut |_t| Ok(()))
}
/// Walk all nodes in post DFS mode.
pub fn walk_dfs_post<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(&mut |_t| Ok(()), cb)
}
/// Walk the tree in BFS mode.
pub fn walk_bfs<F>(&self, handle_self: bool, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
if handle_self {
cb(self)?;
}
let mut dirs = Vec::with_capacity(32);
for child in &self.children {
cb(child)?;
if child.borrow_mut_node().is_dir() {
dirs.push(child);
}
}
for dir in dirs {
dir.walk_bfs(false, cb)?;
}
Ok(())
}
/// Insert a new child node into the tree, keeping children sorted by name.
/// A child whose name already exists is silently ignored.
pub fn insert_child(&mut self, child: Tree) {
if let Err(idx) = self
.children
.binary_search_by_key(&&child.name, |n| &n.name)
{
self.children.insert(idx, child);
}
}
/// Get index of child node with specified `name`.
pub fn get_child_idx(&self, name: &[u8]) -> Option<usize> {
self.children.binary_search_by_key(&name, |n| &n.name).ok()
}
/// Get the tree node corresponding to the path.
pub fn get_node(&self, path: &Path) -> Option<&Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
for name in &target_vec[1..] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &tree.children[idx],
None => return None,
}
}
Some(tree)
}
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
/// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
assert_eq!(self.name, "/".as_bytes());
assert_eq!(upper.name, "/".as_bytes());
// Handle the root node.
upper.borrow_mut_node().overlay = Overlay::UpperModification;
self.node = upper.node.clone();
self.merge_children(ctx, &upper)?;
lazy_drop(upper);
Ok(())
}
fn merge_children(&mut self, ctx: &BuildContext, upper: &Tree) -> Result<()> {
// Handle whiteout nodes in the first round, and handle other nodes in the second round.
let mut modified = Vec::with_capacity(upper.children.len());
for u in upper.children.iter() {
let mut u_node = u.borrow_mut_node();
match u_node.whiteout_type(ctx.whiteout_spec) {
Some(WhiteoutType::OciRemoval) => {
if let Some(origin_name) = u_node.origin_name(WhiteoutType::OciRemoval) {
if let Some(idx) = self.get_child_idx(origin_name.as_bytes()) {
self.children.remove(idx);
}
}
}
Some(WhiteoutType::OciOpaque) => {
self.children.clear();
}
Some(WhiteoutType::OverlayFsRemoval) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children.remove(idx);
}
}
Some(WhiteoutType::OverlayFsOpaque) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children[idx].children.clear();
}
u_node.remove_xattr(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE));
modified.push(u);
}
None => modified.push(u),
}
}
let mut dirs = Vec::new();
for u in modified {
let mut u_node = u.borrow_mut_node();
if let Some(idx) = self.get_child_idx(&u.name) {
u_node.overlay = Overlay::UpperModification;
self.children[idx].node = u.node.clone();
} else {
u_node.overlay = Overlay::UpperAddition;
self.insert_child(Tree {
node: u.node.clone(),
name: u.name.clone(),
children: vec![],
});
}
if u_node.is_dir() {
dirs.push(u);
}
}
for dir in dirs {
if let Some(idx) = self.get_child_idx(&dir.name) {
self.children[idx].merge_children(ctx, dir)?;
} else {
bail!("builder: can not find directory in merged tree");
}
}
Ok(())
}
}
pub struct MetadataTreeBuilder<'a> {
rs: &'a RafsSuper,
}
impl<'a> MetadataTreeBuilder<'a> {
fn new(rs: &'a RafsSuper) -> Self {
Self { rs }
}
/// Build node tree by loading bootstrap file
fn load_children<T: ChunkDict, P: AsRef<Path>>(
&self,
ino: Inode,
parent: Option<P>,
chunk_dict: &mut T,
validate_digest: bool,
) -> Result<Vec<Tree>> {
let inode = self.rs.get_extended_inode(ino, validate_digest)?;
if !inode.is_dir() {
return Ok(Vec::new());
}
let parent_path = if let Some(parent) = parent {
parent.as_ref().join(inode.name())
} else {
PathBuf::from("/")
};
let blobs = self.rs.superblock.get_blob_infos();
let child_count = inode.get_child_count();
let mut children = Vec::with_capacity(child_count as usize);
for idx in 0..child_count {
let child = inode.get_child_by_index(idx)?;
let child_path = parent_path.join(child.name());
let child = Self::parse_node(self.rs, child.clone(), child_path)?;
if child.is_reg() {
for chunk in &child.chunks {
let blob_idx = chunk.inner.blob_index();
if let Some(blob) = blobs.get(blob_idx as usize) {
chunk_dict.add_chunk(chunk.inner.clone(), blob.digester());
}
}
}
let child = Tree::new(child);
children.push(child);
}
children.sort_unstable_by(|a, b| a.name.cmp(&b.name));
for child in children.iter_mut() {
let child_node = child.borrow_mut_node();
if child_node.is_dir() {
let child_ino = child_node.inode.ino();
drop(child_node);
child.children =
self.load_children(child_ino, Some(&parent_path), chunk_dict, validate_digest)?;
}
}
Ok(children)
}
/// Convert a `RafsInode` object to an in-memory `Node` object.
pub fn parse_node(rs: &RafsSuper, inode: Arc<dyn RafsInodeExt>, path: PathBuf) -> Result<Node> {
let chunks = if inode.is_reg() {
let chunk_count = inode.get_chunk_count();
let mut chunks = Vec::with_capacity(chunk_count as usize);
for i in 0..chunk_count {
let cki = inode.get_chunk_info(i)?;
chunks.push(NodeChunk {
source: ChunkSource::Parent,
inner: Arc::new(ChunkWrapper::from_chunk_info(cki)),
});
}
chunks
} else {
Vec::new()
};
let symlink = if inode.is_symlink() {
Some(inode.get_symlink()?)
} else {
None
};
let mut xattrs = RafsXAttrs::new();
for name in inode.get_xattrs()? {
let name = bytes_to_os_str(&name);
let value = inode.get_xattr(name)?;
xattrs.add(name.to_os_string(), value.unwrap_or_default())?;
}
// Nodes loaded from bootstrap will only be used as `Overlay::Lower`, so make `dev` invalid
// to avoid breaking the hardlink detection logic.
let src_dev = u64::MAX;
let rdev = inode.rdev() as u64;
let inode = InodeWrapper::from_inode_info(inode.clone());
let source = PathBuf::from("/");
let target = Node::generate_target(&path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: rs.meta.explicit_uidgid(),
src_ino: inode.ino(),
src_dev,
rdev,
path,
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
Ok(Node {
info: Arc::new(info),
index: 0,
layer_idx: 0,
overlay: Overlay::Lower,
inode,
chunks,
v6_offset: 0,
v6_dirents: Vec::new(),
v6_datalayout: 0,
v6_compact_inode: false,
v6_dirents_offset: 0,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::RAFS_DEFAULT_CHUNK_SIZE;
use vmm_sys_util::tempdir::TempDir;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_set_lock_node() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
assert_eq!(tree.name, tmpfile.as_path().file_name().unwrap().as_bytes());
let node1 = tree.borrow_mut_node();
drop(node1);
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
tree.set_node(node);
let node2 = tree.borrow_mut_node();
assert_eq!(node2.name(), tmpfile.as_path().file_name().unwrap());
}
#[test]
fn test_walk_tree() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
let tmpfile2 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree2 = Tree::new(node);
tree.insert_child(tree2);
let tmpfile3 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile3.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree3 = Tree::new(node);
tree.insert_child(tree3);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 3);
let mut count = 0;
tree.walk_bfs(false, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 2);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
bail!("test")
})
.unwrap_err();
assert_eq!(count, 1);
let idx = tree
.get_child_idx(tmpfile2.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
let idx = tree
.get_child_idx(tmpfile3.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
}
}


@@ -1,266 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::convert::TryFrom;
use std::mem::size_of;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::v5::{
RafsV5BlobTable, RafsV5ChunkInfo, RafsV5InodeTable, RafsV5InodeWrapper, RafsV5SuperBlock,
RafsV5XAttrsTable,
};
use nydus_rafs::metadata::{RafsStore, RafsVersion};
use nydus_rafs::RafsIoWrite;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{div_round_up, root_tracer, timing_tracer, try_round_up_4k};
use super::node::Node;
use crate::{Bootstrap, BootstrapContext, BuildContext, Tree};
// Filesystems may have different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable builds, instead of reusing the
// value provided by the source filesystem, we use our own algorithm to calculate `i_size`
// for directory entries, giving a stable `i_size`.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we don't
// have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5 directory
// inode.
const RAFS_V5_VIRTUAL_ENTRY_SIZE: u64 = 8;
impl Node {
/// Dump RAFS v5 inode metadata to meta blob.
pub fn dump_bootstrap_v5(
&self,
ctx: &mut BuildContext,
f_bootstrap: &mut dyn RafsIoWrite,
) -> Result<()> {
trace!("[{}]\t{}", self.overlay, self);
if let InodeWrapper::V5(raw_inode) = &self.inode {
// Dump inode info
let name = self.name();
let inode = RafsV5InodeWrapper {
name,
symlink: self.info.symlink.as_deref(),
inode: raw_inode,
};
inode
.store(f_bootstrap)
.context("failed to dump inode to bootstrap")?;
// Dump inode xattr
if !self.info.xattrs.is_empty() {
self.info
.xattrs
.store_v5(f_bootstrap)
.context("failed to dump xattr to bootstrap")?;
ctx.has_xattr = true;
}
// Dump chunk info
if self.is_reg() && self.inode.child_count() as usize != self.chunks.len() {
bail!("invalid chunk count {}: {}", self.chunks.len(), self);
}
for chunk in &self.chunks {
chunk
.inner
.store(f_bootstrap)
.context("failed to dump chunk info to bootstrap")?;
trace!("\t\tchunk: {} compressor {}", chunk, ctx.compressor,);
}
Ok(())
} else {
bail!("dump_bootstrap_v5() encounters non-v5-inode");
}
}
// Filesystems may have different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable builds, instead of reusing the
// value provided by the source filesystem, we use our own algorithm to calculate `i_size`
// for directory entries, giving a stable `i_size`.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we
// don't have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5
// directory inode.
pub fn v5_set_dir_size(&mut self, fs_version: RafsVersion, children: &[Tree]) {
if !self.is_dir() || !fs_version.is_v5() {
return;
}
let mut d_size = 0u64;
for child in children.iter() {
d_size += child.borrow_mut_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
}
if d_size == 0 {
self.inode.set_size(4096);
} else {
// Safe to unwrap() because we have u32 for child count.
self.inode.set_size(try_round_up_4k(d_size).unwrap());
}
self.v5_set_inode_blocks();
}
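// A worked example, assuming two children named "foo" and "barbaz":
// d_size = (3 + 8) + (6 + 8) = 25, which rounds up to an i_size of 4096.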
/// Calculate and set `i_blocks` for the inode.
///
/// In order to support repeatable builds, we can't reuse `i_blocks` from source filesystems,
/// so let's calculate it ourselves for a stable `i_blocks`.
///
/// A normal filesystem includes the space occupied by xattrs in the directory size,
/// so let's follow that behavior.
pub fn v5_set_inode_blocks(&mut self) {
// Set inode blocks for RAFS v5 inode, v6 will calculate it at runtime.
if let InodeWrapper::V5(_) = self.inode {
self.inode.set_blocks(div_round_up(
self.inode.size() + self.info.xattrs.aligned_size_v5() as u64,
512,
));
}
}
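// For example, a 5000-byte file with 200 bytes of aligned v5 xattrs gets
// i_blocks = div_round_up(5200, 512) = 11.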
}
impl Bootstrap {
/// Calculate inode digest for directory.
fn v5_digest_node(&self, ctx: &mut BuildContext, tree: &Tree) {
let mut node = tree.borrow_mut_node();
// We have set digest for non-directory inode in the previous dump_blob workflow.
if node.is_dir() {
let mut inode_hasher = RafsDigest::hasher(ctx.digester);
for child in tree.children.iter() {
let child = child.borrow_mut_node();
inode_hasher.digest_update(child.inode.digest().as_ref());
}
node.inode.set_digest(inode_hasher.digest_finalize());
}
}
/// Dump the bootstrap and blob file.
pub(crate) fn v5_dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsV5BlobTable,
) -> Result<()> {
// Set inode digest, use reverse iteration order to reduce repeated digest calculations.
self.tree.walk_dfs_post(&mut |t| {
self.v5_digest_node(ctx, t);
Ok(())
})?;
// Set inode table
let super_block_size = size_of::<RafsV5SuperBlock>();
let inode_table_entries = bootstrap_ctx.get_next_ino() as u32 - 1;
let mut inode_table = RafsV5InodeTable::new(inode_table_entries as usize);
let inode_table_size = inode_table.size();
// Set prefetch table
let (prefetch_table_size, prefetch_table_entries) =
if let Some(prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
(prefetch_table.size(), prefetch_table.len() as u32)
} else {
(0, 0u32)
};
// Set blob table, use sha256 string (length 64) as blob id if not specified
let prefetch_table_offset = super_block_size + inode_table_size;
let blob_table_offset = prefetch_table_offset + prefetch_table_size;
let blob_table_size = blob_table.size();
let extended_blob_table_offset = blob_table_offset + blob_table_size;
let extended_blob_table_size = blob_table.extended.size();
let extended_blob_table_entries = blob_table.extended.entries();
// Set super block
let mut super_block = RafsV5SuperBlock::new();
let inodes_count = bootstrap_ctx.inode_map.len() as u64;
super_block.set_inodes_count(inodes_count);
super_block.set_inode_table_offset(super_block_size as u64);
super_block.set_inode_table_entries(inode_table_entries);
super_block.set_blob_table_offset(blob_table_offset as u64);
super_block.set_blob_table_size(blob_table_size as u32);
super_block.set_extended_blob_table_offset(extended_blob_table_offset as u64);
super_block.set_extended_blob_table_entries(u32::try_from(extended_blob_table_entries)?);
super_block.set_prefetch_table_offset(prefetch_table_offset as u64);
super_block.set_prefetch_table_entries(prefetch_table_entries);
super_block.set_compressor(ctx.compressor);
super_block.set_digester(ctx.digester);
super_block.set_chunk_size(ctx.chunk_size);
if ctx.explicit_uidgid {
super_block.set_explicit_uidgid();
}
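// The resulting v5 bootstrap layout is, in order:
// superblock | inode table | prefetch table | blob table |
// extended blob table | inodes (with xattrs and chunk infos)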
// Set inodes and chunks
let mut inode_offset = (super_block_size
+ inode_table_size
+ prefetch_table_size
+ blob_table_size
+ extended_blob_table_size) as u32;
let mut has_xattr = false;
self.tree.walk_dfs_pre(&mut |t| {
let node = t.borrow_mut_node();
inode_table.set(node.index, inode_offset)?;
// Add inode size
inode_offset += node.inode.inode_size() as u32;
if node.inode.has_xattr() {
has_xattr = true;
if !node.info.xattrs.is_empty() {
inode_offset += (size_of::<RafsV5XAttrsTable>()
+ node.info.xattrs.aligned_size_v5())
as u32;
}
}
// Add chunks size
if node.is_reg() {
inode_offset += node.inode.child_count() * size_of::<RafsV5ChunkInfo>() as u32;
}
Ok(())
})?;
if has_xattr {
super_block.set_has_xattr();
}
// Dump super block
super_block
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store superblock")?;
// Dump inode table
inode_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store inode table")?;
// Dump prefetch table
if let Some(mut prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
prefetch_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store prefetch table")?;
}
// Dump blob table
blob_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store blob table")?;
// Dump extended blob table
blob_table
.store_extended(bootstrap_ctx.writer.as_mut())
.context("failed to store extended blob table")?;
// Dump inodes and chunks
timing_tracer!(
{
self.tree.walk_dfs_pre(&mut |t| {
t.borrow_mut_node()
.dump_bootstrap_v5(ctx, bootstrap_ctx.writer.as_mut())
.context("failed to dump bootstrap")
})
},
"dump_bootstrap"
)?;
Ok(())
}
}

File diff suppressed because it is too large


@@ -1,267 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fs;
use std::fs::DirEntry;
use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
use crate::core::context::{Artifact, NoopArtifactWriter};
use crate::core::prefetch;
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
struct FilesystemTreeBuilder {}
impl FilesystemTreeBuilder {
fn new() -> Self {
Self {}
}
#[allow(clippy::only_used_in_recursion)]
/// Walk directory to build node tree by DFS
fn load_children(
&self,
ctx: &mut BuildContext,
parent: &TreeNode,
layer_idx: u16,
) -> Result<(Vec<Tree>, Vec<Tree>)> {
let mut trees = Vec::new();
let mut external_trees = Vec::new();
let parent = parent.borrow();
if !parent.is_dir() {
return Ok((trees.clone(), external_trees));
}
let children = fs::read_dir(parent.path())
.with_context(|| format!("failed to read dir {:?}", parent.path()))?;
let children = children.collect::<Result<Vec<DirEntry>, std::io::Error>>()?;
event_tracer!("load_from_directory", +children.len());
for child in children {
let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
file_size,
parent.info.explicit_uidgid,
true,
)
.with_context(|| format!("failed to create node {:?}", path))?;
child.layer_idx = layer_idx;
// As per the OCI spec, whiteout files should not be present within the final image
// or filesystem; they only exist in layers.
if layer_idx == 0
&& child.whiteout_type(ctx.whiteout_spec).is_some()
&& !child.is_overlayfs_opaque(ctx.whiteout_spec)
{
continue;
}
let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
let (child_children, external_children) =
self.load_children(ctx, &child.node, layer_idx)?;
child.children = child_children;
external_child.children = external_children;
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
external_child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_child.children);
if ctx.attributes.is_external(&target) {
external_trees.push(external_child);
} else {
// TODO: need to implement type=ignore for nydus attributes;
// ignore the tree as a workaround for now.
trees.push(child.clone());
if ctx.attributes.is_prefix_external(target) {
external_trees.push(external_child);
}
};
}
trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
Ok((trees, external_trees))
}
}
#[derive(Default)]
pub struct DirectoryBuilder {}
impl DirectoryBuilder {
pub fn new() -> Self {
Self {}
}
/// Build node tree from a filesystem directory
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
let node = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
ctx.source_path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
0,
ctx.explicit_uidgid,
true,
)?;
let mut tree = Tree::new(node.clone());
let mut external_tree = Tree::new(node);
let tree_builder = FilesystemTreeBuilder::new();
let (tree_children, external_tree_children) = timing_tracer!(
{ tree_builder.load_children(ctx, &tree.node, layer_idx) },
"load_from_directory"
)?;
tree.children = tree_children;
external_tree.children = external_tree_children;
tree.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &tree.children);
external_tree
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
Ok((tree, external_tree))
}
fn one_build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
blob_writer: &mut Box<dyn Artifact>,
tree: Tree,
) -> Result<BuildOutput> {
// Build bootstrap
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build for tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build for external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
}
}


@@ -1,411 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Builder to create RAFS filesystems from directories and tarballs.
#[macro_use]
extern crate log;
use crate::core::context::Artifact;
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::meta::toc;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{compress, digest, root_tracer, timing_tracer};
use sha2::Digest;
use self::core::node::{Node, NodeInfo};
pub use self::chunkdict_generator::ChunkdictBlobInfo;
pub use self::chunkdict_generator::ChunkdictChunkInfo;
pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor;
pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{
ArtifactStorage, ArtifactWriter, BlobCacheGenerator, BlobContext, BlobManager,
BootstrapContext, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
pub use self::core::feature::{Feature, Features};
pub use self::core::node::{ChunkSource, NodeChunk};
pub use self::core::overlay::{Overlay, WhiteoutSpec};
pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger;
pub use self::optimize_prefetch::update_ctx_from_bootstrap;
pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder;
pub mod attributes;
mod chunkdict_generator;
mod compact;
mod core;
mod directory;
mod merge;
mod optimize_prefetch;
mod stargz;
mod tarball;
/// Trait to generate a RAFS filesystem from the source.
pub trait Builder {
fn build(
&mut self,
build_ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput>;
}
fn build_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
blob_mgr: &mut BlobManager,
mut tree: Tree,
) -> Result<Bootstrap> {
// For multi-layer build, merge the upper layer and lower layer with overlay whiteout applied.
if bootstrap_ctx.layered {
let mut parent = Bootstrap::load_parent_bootstrap(ctx, bootstrap_mgr, blob_mgr)?;
timing_tracer!({ parent.merge_overaly(ctx, tree) }, "merge_bootstrap")?;
tree = parent;
}
let mut bootstrap = Bootstrap::new(tree)?;
timing_tracer!({ bootstrap.build(ctx, bootstrap_ctx) }, "build_bootstrap")?;
Ok(bootstrap)
}
fn dump_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
bootstrap: &mut Bootstrap,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Make sure blob id is updated according to blob hash if not specified by user.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.blob_id.is_empty() {
// `Blob::dump()` should have set `blob_ctx.blob_id` to the referenced OCI tarball for
// ref-type conversion.
assert!(!ctx.conversion_type.is_to_ref());
if ctx.blob_inline_meta {
// Set special blob id for blob with inlined meta.
blob_ctx.blob_id = "x".repeat(64);
} else {
blob_ctx.blob_id = format!("{:x}", blob_ctx.blob_hash.clone().finalize());
}
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
}
// Dump bootstrap file
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, bootstrap_ctx, &blob_table)?;
// Dump RAFS meta to data blob if inline meta is enabled.
if ctx.blob_inline_meta {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
// Ensure the blob object is created in case no chunks were generated for the blob.
let blob_ctx = if blob_mgr.external {
&mut blob_mgr.new_blob_ctx(ctx)?
} else {
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
blob_ctx
};
let bootstrap_offset = blob_writer.pos()?;
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
let uncompressed_size = uncompressed_bootstrap.len();
let uncompressed_digest =
RafsDigest::from_buf(&uncompressed_bootstrap, digest::Algorithm::Sha256);
// Output uncompressed data for backward compatibility and compressed data for the new format.
let (bootstrap_data, compressor) = if ctx.features.is_enabled(Feature::BlobToc) {
let mut compressor = compress::Algorithm::Zstd;
let (compressed_data, compressed) =
compress::compress(&uncompressed_bootstrap, compressor)
.with_context(|| "failed to compress bootstrap".to_string())?;
blob_ctx.write_data(blob_writer, &compressed_data)?;
if !compressed {
compressor = compress::Algorithm::None;
}
(compressed_data, compressor)
} else {
blob_ctx.write_data(blob_writer, &uncompressed_bootstrap)?;
(uncompressed_bootstrap, compress::Algorithm::None)
};
let compressed_size = bootstrap_data.len();
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BOOTSTRAP,
compressed_size as u64,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BOOTSTRAP,
compressor,
uncompressed_digest,
bootstrap_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
}
}
Ok(())
}
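// When `blob_inline_meta` is enabled, the data blob produced by `dump_bootstrap()`
// and `dump_toc()` below roughly looks like the sketch here (based on the dump
// order; the blob meta itself is written earlier by `Blob::dump_meta_data()`):
//
//     [chunk data ...][blob meta][bootstrap][tar header][TOC entries][tar header]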
fn dump_toc(
ctx: &mut BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if ctx.features.is_enabled(Feature::BlobToc) {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let data = blob_ctx.entry_list.as_bytes().to_vec();
let toc_size = data.len() as u64;
blob_ctx.write_data(blob_writer, &data)?;
hasher.digest_update(&data);
let header = blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_TOC, toc_size)?;
hasher.digest_update(header.as_bytes());
blob_ctx.blob_toc_digest = hasher.digest_finalize().data;
blob_ctx.blob_toc_size = toc_size as u32 + header.as_bytes().len() as u32;
}
Ok(())
}
fn finalize_blob(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let is_tarfs = ctx.conversion_type == ConversionType::TarToTarfs;
if !is_tarfs {
dump_toc(ctx, blob_ctx, blob_writer)?;
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
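// The 64-'x' placeholder assigned in `dump_bootstrap()` means the real blob id is
// not known yet; clear it so the blob hash computed below is used instead.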
if ctx.blob_inline_meta && blob_ctx.blob_id == "x".repeat(64) {
blob_ctx.blob_id = String::new();
}
let hash = blob_ctx.blob_hash.clone().finalize();
let blob_meta_id = if ctx.blob_id.is_empty() {
format!("{:x}", hash)
} else {
assert!(!ctx.conversion_type.is_to_ref() || is_tarfs);
ctx.blob_id.clone()
};
if ctx.conversion_type.is_to_ref() {
if blob_ctx.blob_id.is_empty() {
// Use `sha256(tarball)` as `blob_id`. A tarball without files falls through to
// this path because `Blob::dump()` hasn't generated `blob_ctx.blob_id`.
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
// Tarfs mode only has the tar stream and meta blob; there's no data blob.
if !ctx.blob_inline_meta && !is_tarfs {
blob_ctx.blob_meta_digest = hash.into();
blob_ctx.blob_meta_size = blob_writer.pos()?;
}
} else if blob_ctx.blob_id.is_empty() {
// `blob_ctx.blob_id` should be RAFS blob id.
blob_ctx.blob_id = blob_meta_id.clone();
}
// Tarfs mode directly uses the tar file as the RAFS data blob, so there is no need to
// generate the data blob file.
if !is_tarfs {
blob_writer.finalize(Some(blob_meta_id))?;
}
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.finalize(&blob_ctx.blob_id)?;
}
}
Ok(())
}
/// Helper for TarballBuilder/StargzBuilder to build the filesystem tree.
pub struct TarBuilder {
pub explicit_uidgid: bool,
pub layer_idx: u16,
pub version: RafsVersion,
next_ino: Inode,
}
impl TarBuilder {
/// Create a new instance of [TarBuilder].
pub fn new(explicit_uidgid: bool, layer_idx: u16, version: RafsVersion) -> Self {
TarBuilder {
explicit_uidgid,
layer_idx,
next_ino: 0,
version,
}
}
/// Allocate an inode number.
pub fn next_ino(&mut self) -> Inode {
self.next_ino += 1;
self.next_ino
}
/// Insert a node into the tree, creating any missing intermediate directories.
pub fn insert_into_tree(&mut self, tree: &mut Tree, node: Node) -> Result<()> {
let target_paths = node.target_vec();
let target_paths_len = target_paths.len();
if target_paths_len == 1 {
// Handle root node modification
assert_eq!(node.path(), Path::new("/"));
tree.set_node(node);
} else {
let mut tmp_tree = tree;
for idx in 1..target_paths.len() {
match tmp_tree.get_child_idx(target_paths[idx].as_bytes()) {
Some(i) => {
if idx == target_paths_len - 1 {
tmp_tree.children[i].set_node(node);
break;
} else {
tmp_tree = &mut tmp_tree.children[i];
}
}
None => {
if idx == target_paths_len - 1 {
tmp_tree.insert_child(Tree::new(node));
break;
} else {
let node = self.create_directory(&target_paths[..=idx])?;
tmp_tree.insert_child(Tree::new(node));
let last_idx = tmp_tree.children.len() - 1;
tmp_tree = &mut tmp_tree.children[last_idx];
}
}
}
}
}
Ok(())
}
/// Create a new node for a directory.
pub fn create_directory(&mut self, target_paths: &[OsString]) -> Result<Node> {
let ino = self.next_ino();
let name = &target_paths[target_paths.len() - 1];
let mut inode = InodeWrapper::new(self.version);
inode.set_ino(ino);
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_nlink(2);
inode.set_name_size(name.len());
inode.set_rdev(u32::MAX);
let source = PathBuf::from("/");
let target_vec = target_paths.to_vec();
let mut target = PathBuf::new();
for name in target_paths.iter() {
target = target.join(name);
}
let info = NodeInfo {
explicit_uidgid: self.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: u64::MAX,
path: target.clone(),
source,
target,
target_vec,
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: false,
};
Ok(Node::new(inode, info, self.layer_idx))
}
/// Check whether the path is an eStargz special file.
pub fn is_stargz_special_files(&self, path: &Path) -> bool {
path == Path::new("/stargz.index.json")
|| path == Path::new("/.prefetch.landmark")
|| path == Path::new("/.no.prefetch.landmark")
}
}
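// Illustrative sketch (assumed usage, not from the original source): building a
// tiny tree with `TarBuilder`. `some_node` is a placeholder for a `Node` parsed
// from a tar entry; missing intermediate directories are created automatically.
//
//     let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
//     let root = builder.create_directory(&[OsString::from("/")])?;
//     let mut tree = Tree::new(root);
//     builder.insert_into_tree(&mut tree, some_node)?;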
#[cfg(test)]
mod tests {
use vmm_sys_util::tempdir::TempDir;
use super::*;
#[test]
fn test_tar_builder_is_stargz_special_files() {
let builder = TarBuilder::new(true, 0, RafsVersion::V6);
let path = Path::new("/stargz.index.json");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.no.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/no.prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/tar.index.json");
assert!(!builder.is_stargz_special_files(&path));
}
#[test]
fn test_tar_builder_create_directory() {
let tmp_dir = TempDir::new().unwrap();
let target_paths = [OsString::from(tmp_dir.as_path())];
let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
let node = builder.create_directory(&target_paths);
assert!(node.is_ok());
let node = node.unwrap();
println!("Node: {}", node);
assert_eq!(node.file_type(), "dir");
assert_eq!(node.target(), tmp_dir.as_path());
assert_eq!(builder.next_ino, 1);
assert_eq!(builder.next_ino(), 2);
}
}

View File

@ -1,440 +0,0 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{anyhow, bail, ensure, Context, Result};
use hex::FromHex;
use nydus_api::ConfigV2;
use nydus_rafs::metadata::{RafsSuper, RafsVersion};
use nydus_storage::device::{BlobFeatures, BlobInfo};
use nydus_utils::crypt;
use super::{
ArtifactStorage, BlobContext, BlobManager, Bootstrap, BootstrapContext, BuildContext,
BuildOutput, ChunkSource, ConversionType, Overlay, Tree,
};
/// Struct to generate the merged RAFS bootstrap for an image from per layer RAFS bootstraps.
///
/// A container image contains one or more layers, a RAFS bootstrap is built for each layer.
/// Those per layer bootstraps could be mounted by overlayfs to form the container rootfs.
/// To improve performance by avoiding overlayfs, an image level bootstrap is generated by
/// merging the per layer bootstraps with overlayfs rules applied.
pub struct Merger {}
impl Merger {
fn get_string_from_list(
original_ids: &Option<Vec<String>>,
idx: usize,
) -> Result<Option<String>> {
Ok(if let Some(id) = &original_ids {
let id_string = id
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(id_string.clone())
} else {
None
})
}
fn get_digest_from_list(digests: &Option<Vec<String>>, idx: usize) -> Result<Option<[u8; 32]>> {
Ok(if let Some(digests) = &digests {
let digest = digests
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(<[u8; 32]>::from_hex(digest)?)
} else {
None
})
}
fn get_size_from_list(sizes: &Option<Vec<u64>>, idx: usize) -> Result<Option<u64>> {
Ok(if let Some(sizes) = &sizes {
let size = sizes
.get(idx)
.ok_or_else(|| anyhow!("unmatched size index {}", idx))?;
Some(*size)
} else {
None
})
}
/// Overlay multiple RAFS filesystems into a merged RAFS filesystem.
///
/// # Arguments
/// - sources: contains one or more per layer bootstraps, ordered from lower to higher.
/// - chunk_dict: contains the chunk dictionary used to build the per layer bootstraps, or None.
#[allow(clippy::too_many_arguments)]
pub fn merge(
ctx: &mut BuildContext,
parent_bootstrap_path: Option<String>,
sources: Vec<PathBuf>,
blob_digests: Option<Vec<String>>,
original_blob_ids: Option<Vec<String>>,
blob_sizes: Option<Vec<u64>>,
blob_toc_digests: Option<Vec<String>>,
blob_toc_sizes: Option<Vec<u64>>,
target: ArtifactStorage,
chunk_dict: Option<PathBuf>,
config_v2: Arc<ConfigV2>,
) -> Result<BuildOutput> {
if sources.is_empty() {
bail!("source bootstrap list is empty , at least one bootstrap is required");
}
if let Some(digests) = blob_digests.as_ref() {
ensure!(
digests.len() == sources.len(),
"number of blob digest entries {} doesn't match number of sources {}",
digests.len(),
sources.len(),
);
}
if let Some(original_ids) = original_blob_ids.as_ref() {
ensure!(
original_ids.len() == sources.len(),
"number of original blob id entries {} doesn't match number of sources {}",
original_ids.len(),
sources.len(),
);
}
if let Some(sizes) = blob_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of blob size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
if let Some(toc_digests) = blob_toc_digests.as_ref() {
ensure!(
toc_digests.len() == sources.len(),
"number of toc digest entries {} doesn't match number of sources {}",
toc_digests.len(),
sources.len(),
);
}
if let Some(sizes) = blob_toc_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of toc size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester, false);
let mut blob_idx_map = HashMap::new();
let mut parent_layers = 0;
// Load parent bootstrap
if let Some(parent_bootstrap_path) = &parent_bootstrap_path {
let (rs, _) =
RafsSuper::load_from_file(parent_bootstrap_path, config_v2.clone(), false)
.context(format!("load parent bootstrap {:?}", parent_bootstrap_path))?;
let blobs = rs.superblock.get_blob_infos();
for blob in &blobs {
let blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
blob_idx_map.insert(blob_ctx.blob_id.clone(), blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
parent_layers = blobs.len();
tree = Some(Tree::from_bootstrap(&rs, &mut ())?);
}
// Get the blobs that come from the chunk dictionary.
let mut chunk_dict_blobs = HashSet::new();
let mut config = None;
if let Some(chunk_dict_path) = &chunk_dict {
let (rs, _) = RafsSuper::load_from_file(chunk_dict_path, config_v2.clone(), false)
.context(format!("load chunk dict bootstrap {:?}", chunk_dict_path))?;
config = Some(rs.meta.get_config());
for blob in rs.superblock.get_blob_infos() {
chunk_dict_blobs.insert(blob.blob_id().to_string());
}
}
let mut fs_version = RafsVersion::V6;
let mut chunk_size = None;
for (layer_idx, bootstrap_path) in sources.iter().enumerate() {
let (rs, _) = RafsSuper::load_from_file(bootstrap_path, config_v2.clone(), false)
.context(format!("load bootstrap {:?}", bootstrap_path))?;
config
.get_or_insert_with(|| rs.meta.get_config())
.check_compatibility(&rs.meta)?;
fs_version = RafsVersion::try_from(rs.meta.version)
.context("failed to get RAFS version number")?;
ctx.compressor = rs.meta.get_compressor();
ctx.digester = rs.meta.get_digester();
// If any RAFS filesystems are encrypted, the merged bootstrap will be marked as encrypted.
match rs.meta.get_cipher() {
crypt::Algorithm::None => (),
crypt::Algorithm::Aes128Xts => ctx.cipher = crypt::Algorithm::Aes128Xts,
_ => bail!("invalid per layer bootstrap, only supports aes-128-xts"),
}
ctx.explicit_uidgid = rs.meta.explicit_uidgid();
if config.as_ref().unwrap().is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToTarfs;
ctx.blob_features |= BlobFeatures::TARFS;
}
let mut parent_blob_added = false;
let blobs = &rs.superblock.get_blob_infos();
for blob in blobs {
let mut blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
if let Some(chunk_size) = chunk_size {
ensure!(
chunk_size == blob_ctx.chunk_size,
"can not merge bootstraps with inconsistent chunk size, current bootstrap {:?} with chunk size {:x}, expected {:x}",
bootstrap_path,
blob_ctx.chunk_size,
chunk_size,
);
} else {
chunk_size = Some(blob_ctx.chunk_size);
}
if !chunk_dict_blobs.contains(&blob.blob_id()) {
// It is assumed that the per layer `nydus-image create` and `nydus-image merge`
// commands use the same chunk dict bootstrap. So the parent bootstrap may include
// multiple blobs, but at most one new blob; the other blobs should come from the
// chunk dict image.
if parent_blob_added {
bail!("invalid per layer bootstrap, having multiple associated data blobs");
}
parent_blob_added = true;
if ctx.configuration.internal.blob_accessible()
|| ctx.conversion_type == ConversionType::TarToTarfs
{
// `blob.blob_id()` should have been fixed when loading the bootstrap.
blob_ctx.blob_id = blob.blob_id();
} else {
// The blob id (blob sha256 hash) in the parent bootstrap is invalid for the
// nydusd runtime; change it to the hash of the whole tar blob.
if let Some(original_id) =
Self::get_string_from_list(&original_blob_ids, layer_idx)?
{
blob_ctx.blob_id = original_id;
} else {
blob_ctx.blob_id =
BlobInfo::get_blob_id_from_meta_path(bootstrap_path)?;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_digests, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_digest = digest;
} else {
blob_ctx.blob_id = hex::encode(digest);
}
}
if let Some(size) = Self::get_size_from_list(&blob_sizes, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_size = size;
} else {
blob_ctx.compressed_blob_size = size;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_toc_digests, layer_idx)?
{
blob_ctx.blob_toc_digest = digest;
}
if let Some(size) = Self::get_size_from_list(&blob_toc_sizes, layer_idx)? {
blob_ctx.blob_toc_size = size as u32;
}
}
if let Entry::Vacant(e) = blob_idx_map.entry(blob.blob_id()) {
e.insert(blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
}
let upper = Tree::from_bootstrap(&rs, &mut ())?;
upper.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = blobs[origin_blob_index].as_ref();
if let Some(blob_index) = blob_idx_map.get(&blob_ctx.blob_id()) {
// Set the blob index of chunk to real index in blob table of final bootstrap.
chunk.set_blob_index(*blob_index as u32);
}
}
// Set the node's layer index to distinguish identical inode numbers (from the
// per layer bootstraps) between different layers.
let idx = u16::try_from(layer_idx).context(format!(
"too many layers {}, limited to {}",
layer_idx,
u16::MAX
))?;
if parent_layers + idx as usize > u16::MAX as usize {
bail!("too many layers {}, limited to {}", layer_idx, u16::MAX);
}
node.layer_idx = idx + parent_layers as u16;
node.overlay = Overlay::UpperAddition;
Ok(())
})?;
if let Some(tree) = &mut tree {
tree.merge_overaly(ctx, upper)?;
} else {
tree = Some(upper);
}
}
if ctx.conversion_type == ConversionType::TarToTarfs {
if parent_layers > 0 {
bail!("merging RAFS in TARFS mode conflicts with `--parent-bootstrap`");
}
if !chunk_dict_blobs.is_empty() {
bail!("merging RAFS in TARFS mode conflicts with `--chunk-dict`");
}
}
// Safe to unwrap because there is at least one source bootstrap.
let tree = tree.unwrap();
ctx.fs_version = fs_version;
if let Some(chunk_size) = chunk_size {
ctx.chunk_size = chunk_size;
}
// After merging all trees, we need to re-calculate the blob index of
// referenced blobs, as the upper tree might have deleted some files or
// directories via opaque whiteouts, leaving some blobs dereferenced.
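// For example, if the merged tree originally referenced blobs [A, B, C] but every
// chunk pointing at B was dropped by an opaque whiteout, the rebuilt blob table
// becomes [A, C] and chunk blob indexes are remapped 0 -> 0 and 2 -> 1.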
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
let origin_blobs = blob_mgr.get_blobs();
tree.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = origin_blobs[origin_blob_index].clone();
let origin_blob_id = blob_ctx.blob_id();
let new_blob_index = if let Some(new_blob_index) = used_blobs.get(&origin_blob_id) {
*new_blob_index
} else {
let new_blob_index = used_blob_mgr.len();
used_blobs.insert(origin_blob_id, new_blob_index);
used_blob_mgr.add_blob(blob_ctx);
new_blob_index
};
chunk.set_blob_index(new_blob_index as u32);
}
Ok(())
})?;
let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = used_blob_mgr.to_blob_table(ctx)?;
let mut bootstrap_storage = Some(target.clone());
bootstrap
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
.context(format!("dump bootstrap to {:?}", target.display()))?;
BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use nydus_utils::digest;
use vmm_sys_util::tempfile::TempFile;
use super::*;
#[test]
fn test_merger_get_string_from_list() {
let res = Merger::get_string_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "string2".to_owned()];
let original_ids = Some(original_ids);
let res = Merger::get_string_from_list(&original_ids, 0);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some("string1".to_owned()));
assert!(Merger::get_string_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_digest_from_list() {
let res = Merger::get_digest_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "12ab".repeat(16)];
let original_ids = Some(original_ids);
let res = Merger::get_digest_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(
res.unwrap(),
Some([
18u8, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171,
18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171
])
);
assert!(Merger::get_digest_from_list(&original_ids, 0).is_err());
assert!(Merger::get_digest_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_size_from_list() {
let res = Merger::get_size_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec![1u64, 2, 3, 4];
let original_ids = Some(original_ids);
let res = Merger::get_size_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some(2u64));
assert!(Merger::get_size_from_list(&original_ids, 4).is_err());
}
#[test]
fn test_merger_merge() {
let mut ctx = BuildContext::default();
ctx.configuration.internal.set_blob_accessible(false);
ctx.digester = digest::Algorithm::Sha256;
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path1 = PathBuf::from(root_dir);
source_path1.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let mut source_path2 = PathBuf::from(root_dir);
source_path2.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let tmp_file = TempFile::new().unwrap();
let target = ArtifactStorage::SingleFile(tmp_file.as_path().to_path_buf());
let blob_toc_digests = Some(vec![
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855".to_owned(),
"4cf0c409788fc1c149afbf4c81276b92427ae41e46412334ca495991b8526650".to_owned(),
]);
let build_output = Merger::merge(
&mut ctx,
None,
vec![source_path1, source_path2],
Some(vec!["a70f".repeat(16), "9bd3".repeat(16)]),
Some(vec!["blob_id".to_owned(), "blob_id2".to_owned()]),
Some(vec![16u64, 32u64]),
blob_toc_digests,
Some(vec![64u64, 128]),
target,
None,
Arc::new(ConfigV2::new("config_v2")),
);
assert!(build_output.is_ok());
let build_output = build_output.unwrap();
println!("BuildOutput: {}", build_output);
assert_eq!(build_output.blob_size, Some(16));
}
}

View File

@ -1,302 +0,0 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;
pub struct OptimizePrefetch {}
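/// State for the dedicated prefetch blob, which aggregates the chunk data of all
/// files selected for prefetch into one new data blob.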
struct PrefetchBlobState {
blob_info: BlobInfo,
blob_ctx: BlobContext,
blob_writer: Box<dyn Artifact>,
}
impl PrefetchBlobState {
fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
let mut blob_info = BlobInfo::new(
blob_layer_num,
String::from("prefetch-blob"),
0,
0,
ctx.chunk_size,
u32::MAX,
ctx.blob_features,
);
blob_info.set_compressor(ctx.compressor);
blob_info.set_separated_with_prefetch_files_feature(true);
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
blob_ctx.blob_meta_info_enabled = true;
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
blobs_dir_path.to_path_buf(),
String::new(),
)))
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
Ok(Self {
blob_info,
blob_ctx,
blob_writer,
})
}
}
impl OptimizePrefetch {
/// Generate a new bootstrap for prefetch.
pub fn generate_prefetch(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
blobs_dir_path: PathBuf,
prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
// Create a new blob for the prefetch layer.
let blob_layer_num = match blob_table {
RafsBlobTable::V5(table) => table.get_all().len(),
RafsBlobTable::V6(table) => table.get_all().len(),
};
let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
let mut batch = BatchContextGenerator::new(0)?;
for node in &prefetch_nodes {
Self::process_prefetch_node(
tree,
&node,
&mut blob_state,
&mut batch,
blob_table,
&blobs_dir_path,
)?;
}
let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
debug!("prefetch blob id: {}", ctx.blob_id);
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
fn build_dump_bootstrap(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
) -> Result<()> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree.clone())?;
// Build bootstrap
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table_withprefetch = match blob_table {
RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
};
bootstrap.dump(
ctx,
&mut bootstrap_mgr.bootstrap_storage,
&mut bootstrap_ctx,
&blob_table_withprefetch,
)?;
Ok(())
}
fn dump_blob(
ctx: &mut BuildContext,
blob_table: &mut RafsBlobTable,
blob_state: &mut PrefetchBlobState,
) -> Result<BlobManager> {
match blob_table {
RafsBlobTable::V5(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
RafsBlobTable::V6(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
}
let mut blob_mgr = BlobManager::new(ctx.digester, false);
blob_mgr.add_blob(blob_state.blob_ctx.clone());
blob_mgr.set_current_blob_index(0);
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
};
ctx.blob_id = String::from("");
blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
ctx.blob_id = blob_mgr
.get_current_blob()
.ok_or(anyhow!("failed to get current blob"))?
.1
.blob_id
.clone();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
// Verify and update prefetch blob
assert!(
entries
.iter()
.filter(|blob| blob.blob_id() == "prefetch-blob")
.count()
== 1,
"Expected exactly one prefetch-blob"
);
// Rewrite prefetch blob id
match blob_table {
RafsBlobTable::V5(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
RafsBlobTable::V6(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
}
Ok(blob_mgr)
}
fn process_prefetch_node(
tree: &mut Tree,
node: &TreeNode,
prefetch_state: &mut PrefetchBlobState,
batch: &mut BatchContextGenerator,
blob_table: &RafsBlobTable,
blobs_dir_path: &Path,
) -> Result<()> {
let tree_node = tree
.get_node_mut(&node.borrow().path())
.ok_or(anyhow!("failed to get node"))?
.node
.as_ref();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
let blob_id = tree_node
.borrow()
.chunks
.first()
.and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
.map(|entry| entry.blob_id())
.ok_or(anyhow!("failed to get blob id"))?;
let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);
tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;
let mut child = tree_node.borrow_mut();
let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
let blob_ctx = &mut prefetch_state.blob_ctx;
let blob_info = &mut prefetch_state.blob_info;
let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;
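// Copy each chunk's compressed bytes from its original blob file into the
// prefetch blob, then rewrite the chunk metadata (blob index, chunk index and
// offsets) to point into the new blob.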
for chunk in chunks {
let inner = Arc::make_mut(&mut chunk.inner);
let mut buf = vec![0u8; inner.compressed_size() as usize];
blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
blob_file.read_exact(&mut buf)?;
prefetch_state.blob_writer.write_all(&buf)?;
let info = batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
blob_ctx.current_uncompressed_offset,
inner.uncompressed_size(),
encrypted,
)?;
inner.set_blob_index(blob_info.blob_index());
if blob_ctx.chunk_count == u32::MAX {
blob_ctx.chunk_count = 0;
}
inner.set_index(blob_ctx.chunk_count);
blob_ctx.chunk_count += 1;
inner.set_compressed_offset(blob_ctx.current_compressed_offset);
inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);
let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
.ok_or_else(|| anyhow!("invalid size"))?;
blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
blob_ctx.uncompressed_blob_size += aligned_d_size;
blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
blob_ctx.current_uncompressed_offset += aligned_d_size;
blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
blob_ctx.blob_hash.update(&buf);
blob_info.set_meta_ci_compressed_size(
(blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
blob_info.set_meta_ci_uncompressed_size(
(blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
}
Ok(())
}
}
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
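// `BlobInfo` entries are shared via `Arc`, so update them copy-on-write style:
// clone the inner value, patch the id, and swap in a new `Arc`.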
entries
.iter_mut()
.filter(|blob| blob.blob_id() == blob_id)
.for_each(|blob| {
let mut info = (**blob).clone();
info.set_blob_id(new_blob_id.clone());
*blob = Arc::new(info);
});
}
pub fn update_ctx_from_bootstrap(
ctx: &mut BuildContext,
config: Arc<ConfigV2>,
bootstrap_path: &Path,
) -> Result<RafsSuper> {
let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;
ctx.blob_features = sb
.superblock
.get_blob_infos()
.first()
.ok_or_else(|| anyhow!("No blob info found in superblock"))?
.features();
let config = sb.meta.get_config();
if config.is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToRafs;
}
ctx.fs_version =
RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
ctx.compressor = config.compressor;
Ok(sb)
}

File diff suppressed because it is too large

View File

@ -1,744 +0,0 @@
// Copyright 2022 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate RAFS filesystem from a tarball.
//!
//! It supports generating a RAFS filesystem from a tar/targz/stargz file, with or without a data blob.
//!
//! The tarball data is arranged as a sequence of tar headers with the associated file data interleaved:
//! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
//! And to support reading tarball data from a FIFO, we can only go over the tarball stream once.
//! So the workflow is as follows:
//! - for each tar header from the stream
//! -- generate a RAFS filesystem node from the tar header
//! -- optionally dump the file data associated with the tar header into the RAFS data blob
//! - arrange all generated RAFS nodes into a RAFS filesystem tree
//! - dump the RAFS filesystem tree into the RAFS metadata blob
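//!
//! A minimal usage sketch (illustrative only; `BuildContext`, `BootstrapManager`
//! and `BlobManager` construction is elided):
//!
//!     let mut builder = TarballBuilder::new(ConversionType::TarToRafs);
//!     let output = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)?;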
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufReader, Read, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::sync::Mutex;
use anyhow::{anyhow, bail, Context, Result};
use tar::{Archive, Entry, EntryType, Header};
use nydus_api::enosys;
use nydus_rafs::metadata::inode::{InodeWrapper, RafsInodeFlags, RafsV6Inode};
use nydus_rafs::metadata::layout::v5::RafsV5Inode;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::ZranContextGenerator;
use nydus_storage::RAFS_MAX_CHUNKS_PER_BLOB;
use nydus_utils::compact::makedev;
use nydus_utils::compress::zlib_random::{ZranReader, ZRAN_READER_BUF_SIZE};
use nydus_utils::compress::ZlibDecoder;
use nydus_utils::digest::RafsDigest;
use nydus_utils::{div_round_up, lazy_drop, root_tracer, timing_tracer, BufReaderInfo, ByteSize};
use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
use super::core::node::{Node, NodeInfo};
use super::core::tree::Tree;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, TarBuilder};
enum CompressionType {
None,
Gzip,
}
enum TarReader {
File(File),
BufReader(BufReader<File>),
BufReaderInfo(BufReaderInfo<File>),
BufReaderInfoSeekable(BufReaderInfo<File>),
TarGzFile(Box<ZlibDecoder<File>>),
TarGzBufReader(Box<ZlibDecoder<BufReader<File>>>),
ZranReader(ZranReader<File>),
}
impl Read for TarReader {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
match self {
TarReader::File(f) => f.read(buf),
TarReader::BufReader(f) => f.read(buf),
TarReader::BufReaderInfo(b) => b.read(buf),
TarReader::BufReaderInfoSeekable(b) => b.read(buf),
TarReader::TarGzFile(f) => f.read(buf),
TarReader::TarGzBufReader(b) => b.read(buf),
TarReader::ZranReader(f) => f.read(buf),
}
}
}
impl TarReader {
fn seekable(&self) -> bool {
matches!(
self,
TarReader::File(_) | TarReader::BufReaderInfoSeekable(_)
)
}
}
impl Seek for TarReader {
fn seek(&mut self, pos: SeekFrom) -> std::io::Result<u64> {
match self {
TarReader::File(f) => f.seek(pos),
TarReader::BufReaderInfoSeekable(b) => b.seek(pos),
_ => Err(enosys!("seek() not supported!")),
}
}
}
struct TarballTreeBuilder<'a> {
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
buf: Vec<u8>,
builder: TarBuilder,
}
impl<'a> TarballTreeBuilder<'a> {
/// Create a new instance of `TarballBuilder`.
pub fn new(
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
layer_idx: u16,
) -> Self {
let builder = TarBuilder::new(ctx.explicit_uidgid, layer_idx, ctx.fs_version);
Self {
ty,
ctx,
blob_mgr,
buf: Vec::new(),
blob_writer,
builder,
}
}
fn build_tree(&mut self) -> Result<Tree> {
let file = OpenOptions::new()
.read(true)
.open(self.ctx.source_path.clone())
.context("tarball: can not open source file for conversion")?;
let mut is_file = match file.metadata() {
Ok(md) => md.file_type().is_file(),
Err(_) => false,
};
let reader = match self.ty {
ConversionType::EStargzToRef
| ConversionType::TargzToRef
| ConversionType::TarToRef => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
let generator = ZranContextGenerator::from_buf_reader(buf_reader)?;
let reader = generator.reader();
self.ctx.blob_zran_generator = Some(Mutex::new(generator));
self.ctx.blob_features.insert(BlobFeatures::ZRAN);
TarReader::ZranReader(reader)
}
(CompressionType::None, buf_reader) => {
self.ty = ConversionType::TarToRef;
let reader = BufReaderInfo::from_buf_reader(buf_reader);
self.ctx.blob_tar_reader = Some(reader.clone());
TarReader::BufReaderInfo(reader)
}
},
ConversionType::EStargzToRafs
| ConversionType::TargzToRafs
| ConversionType::TarToRafs => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::TarGzFile(Box::new(ZlibDecoder::new(file)))
} else {
TarReader::TarGzBufReader(Box::new(ZlibDecoder::new(buf_reader)))
}
}
(CompressionType::None, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::File(file)
} else {
TarReader::BufReader(buf_reader)
}
}
},
ConversionType::TarToTarfs => {
let mut reader = BufReaderInfo::from_buf_reader(BufReader::new(file));
self.ctx.blob_tar_reader = Some(reader.clone());
if !self.ctx.blob_id.is_empty() {
reader.enable_digest_calculation(false);
} else {
// Disable seek when we need to calculate the hash value.
is_file = false;
}
// Only enable seek when hash calculation is disabled.
if is_file {
TarReader::BufReaderInfoSeekable(reader)
} else {
TarReader::BufReaderInfo(reader)
}
}
_ => return Err(anyhow!("tarball: unsupported image conversion type")),
};
let is_seekable = reader.seekable();
let mut tar = Archive::new(reader);
tar.set_ignore_zeros(true);
tar.set_preserve_mtime(true);
tar.set_preserve_permissions(true);
tar.set_unpack_xattrs(true);
// Prepare scratch buffer for dumping file data.
if self.buf.len() < self.ctx.chunk_size as usize {
self.buf = vec![0u8; self.ctx.chunk_size as usize];
}
// Generate the root node in advance; it may be overwritten by entries from the tar stream.
let root = self.builder.create_directory(&[OsString::from("/")])?;
let mut tree = Tree::new(root);
// Generate a RAFS node for each tar entry, optionally adding missing parents.
let entries = if is_seekable {
tar.entries_with_seek()
.context("tarball: failed to read entries from tar")?
} else {
tar.entries()
.context("tarball: failed to read entries from tar")?
};
for entry in entries {
let mut entry = entry.context("tarball: failed to read entry from tar")?;
let path = entry
.path()
.context("tarball: failed to to get path from tar entry")?;
let path = PathBuf::from("/").join(path);
let path = path.components().as_path();
if !self.builder.is_stargz_special_files(path) {
self.parse_entry(&mut tree, &mut entry, path)?;
}
}
// Update directory size for RAFS V5 after generating the tree.
if self.ctx.fs_version.is_v5() {
Self::set_v5_dir_size(&mut tree);
}
Ok(tree)
}
fn parse_entry<R: Read>(
&mut self,
tree: &mut Tree,
entry: &mut Entry<R>,
path: &Path,
) -> Result<()> {
let header = entry.header();
let entry_type = header.entry_type();
if entry_type.is_gnu_longname() {
return Err(anyhow!("tarball: unsupported gnu_longname from tar header"));
} else if entry_type.is_gnu_longlink() {
return Err(anyhow!("tarball: unsupported gnu_longlink from tar header"));
} else if entry_type.is_pax_local_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_local_extensions from tar header"
));
} else if entry_type.is_pax_global_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_global_extensions from tar header"
));
} else if entry_type.is_contiguous() {
return Err(anyhow!(
"tarball: unsupported contiguous entry type from tar header"
));
} else if entry_type.is_gnu_sparse() {
return Err(anyhow!(
"tarball: unsupported gnu sparse file extension from tar header"
));
}
let mut file_size = entry.size();
let name = Self::get_file_name(path)?;
let mode = Self::get_mode(header)?;
let (uid, gid) = Self::get_uid_gid(self.ctx, header)?;
let mtime = header.mtime().unwrap_or_default();
let mut flags = match self.ctx.fs_version {
RafsVersion::V5 => RafsInodeFlags::default(),
RafsVersion::V6 => RafsInodeFlags::default(),
};
// Parse special files
let rdev = if entry_type.is_block_special()
|| entry_type.is_character_special()
|| entry_type.is_fifo()
{
let major = header
.device_major()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get major device from tar entry"))?;
let minor = header
.device_minor()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get minor device from tar entry"))?;
makedev(major as u64, minor as u64) as u32
} else {
u32::MAX
};
// Parse symlink
let (symlink, symlink_size) = if entry_type.is_symlink() {
let symlink_link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let symlink_size = symlink_link_path.as_os_str().byte_size();
if symlink_size > u16::MAX as usize {
bail!("tarball: symlink target from tar entry is too big");
}
file_size = symlink_size as u64;
flags |= RafsInodeFlags::SYMLINK;
(
Some(symlink_link_path.as_os_str().to_owned()),
symlink_size as u16,
)
} else {
(None, 0)
};
let mut child_count = 0;
if entry_type.is_file() {
child_count = div_round_up(file_size, self.ctx.chunk_size as u64);
if child_count > RAFS_MAX_CHUNKS_PER_BLOB as u64 {
bail!("tarball: file size 0x{:x} is too big", file_size);
}
}
// Handle hardlink ino
let mut hardlink_target = None;
let ino = if entry_type.is_hard_link() {
let link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let link_path = PathBuf::from("/").join(link_path);
let link_path = link_path.components().as_path();
let targets = Node::generate_target_vec(link_path);
assert!(!targets.is_empty());
let mut tmp_tree: &Tree = tree;
for name in &targets[1..] {
match tmp_tree.get_child_idx(name.as_bytes()) {
Some(idx) => tmp_tree = &tmp_tree.children[idx],
None => {
bail!(
"tarball: unknown target {} for hardlink {}",
link_path.display(),
path.display()
);
}
}
}
let mut tmp_node = tmp_tree.borrow_mut_node();
if !tmp_node.is_reg() {
bail!(
"tarball: target {} for hardlink {} is not a regular file",
link_path.display(),
path.display()
);
}
hardlink_target = Some(tmp_tree);
flags |= RafsInodeFlags::HARDLINK;
tmp_node.inode.set_has_hardlink(true);
tmp_node.inode.ino()
} else {
self.builder.next_ino()
};
// Parse xattrs
let mut xattrs = RafsXAttrs::new();
if let Some(exts) = entry.pax_extensions()? {
for p in exts {
match p {
Ok(pax) => {
let prefix = b"SCHILY.xattr.";
let key = pax.key_bytes();
if key.starts_with(prefix) {
let x_key = OsStr::from_bytes(&key[prefix.len()..]);
xattrs.add(x_key.to_os_string(), pax.value_bytes().to_vec())?;
}
}
Err(e) => {
return Err(anyhow!(
"tarball: failed to parse PaxExtension from tar header, {}",
e
))
}
}
}
}
let mut inode = match self.ctx.fs_version {
RafsVersion::V5 => InodeWrapper::V5(RafsV5Inode {
i_digest: RafsDigest::default(),
i_parent: 0,
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_index: 0,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
i_reserved: [0; 8],
}),
RafsVersion::V6 => InodeWrapper::V6(RafsV6Inode {
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
}),
};
inode.set_has_xattr(!xattrs.is_empty());
let source = PathBuf::from("/");
let target = Node::generate_target(path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: self.ctx.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: rdev as u64,
path: path.to_path_buf(),
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
let mut node = Node::new(inode, info, self.builder.layer_idx);
// Special handling of hardlink.
// A tar hardlink header has zero file size and no file data associated, so copy the
// values from the associated regular file.
if let Some(t) = hardlink_target {
let n = t.borrow_mut_node();
if n.inode.is_v5() {
node.inode.set_digest(n.inode.digest().to_owned());
}
node.inode.set_size(n.inode.size());
node.inode.set_child_count(n.inode.child_count());
node.chunks = n.chunks.clone();
node.set_xattr(n.info.xattrs.clone());
} else {
node.dump_node_data_with_reader(
self.ctx,
self.blob_mgr,
self.blob_writer,
Some(entry),
&mut self.buf,
)?;
}
// Update inode.i_blocks for RAFS v5.
if self.ctx.fs_version == RafsVersion::V5 && !entry_type.is_dir() {
node.v5_set_inode_blocks();
}
self.builder.insert_into_tree(tree, node)
}
fn get_uid_gid(ctx: &BuildContext, header: &Header) -> Result<(u32, u32)> {
let uid = if ctx.explicit_uidgid {
header.uid().unwrap_or_default()
} else {
0
};
let gid = if ctx.explicit_uidgid {
header.gid().unwrap_or_default()
} else {
0
};
if uid > u32::MAX as u64 || gid > u32::MAX as u64 {
bail!(
"tarball: uid {:x} or gid {:x} from tar entry is out of range",
uid,
gid
);
}
Ok((uid as u32, gid as u32))
}
fn get_mode(header: &Header) -> Result<u32> {
let mode = header
.mode()
.context("tarball: failed to get permission/mode from tar entry")?;
let ty = match header.entry_type() {
EntryType::Regular | EntryType::Link => libc::S_IFREG,
EntryType::Directory => libc::S_IFDIR,
EntryType::Symlink => libc::S_IFLNK,
EntryType::Block => libc::S_IFBLK,
EntryType::Char => libc::S_IFCHR,
EntryType::Fifo => libc::S_IFIFO,
_ => bail!("tarball: unsupported tar entry type"),
};
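// Strip any file-type bits carried in the tar mode and splice in the type bits
// derived from the tar entry type above.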
Ok((mode & !libc::S_IFMT as u32) | ty as u32)
}
fn get_file_name(path: &Path) -> Result<&OsStr> {
let name = if path == Path::new("/") {
path.as_os_str()
} else {
path.file_name().ok_or_else(|| {
anyhow!(
"tarball: failed to get file name from tar entry with path {}",
path.display()
)
})?
};
if name.len() > u16::MAX as usize {
bail!(
"tarball: file name {} from tar entry is too long",
name.to_str().unwrap_or_default()
);
}
Ok(name)
}
fn set_v5_dir_size(tree: &mut Tree) {
for c in &mut tree.children {
Self::set_v5_dir_size(c);
}
let mut node = tree.borrow_mut_node();
node.v5_set_dir_size(RafsVersion::V5, &tree.children);
}
fn detect_compression_algo(file: File) -> Result<(CompressionType, BufReader<File>)> {
// Use a 64K buffer to keep consistency with zlib-random.
let mut buf_reader = BufReader::with_capacity(ZRAN_READER_BUF_SIZE, file);
let mut buf = [0u8; 3];
buf_reader.read_exact(&mut buf)?;
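// A gzip stream starts with the magic bytes 0x1f 0x8b followed by the
// compression method byte 0x08 (deflate), per RFC 1952.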
if buf[0] == 0x1f && buf[1] == 0x8b && buf[2] == 0x08 {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::Gzip, buf_reader))
} else {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::None, buf_reader))
}
}
}
/// Builder to create RAFS filesystems from tarballs.
pub struct TarballBuilder {
ty: ConversionType,
}
impl TarballBuilder {
/// Create a new instance of [TarballBuilder] to build a RAFS filesystem from a tarball.
pub fn new(conversion_type: ConversionType) -> Self {
Self {
ty: conversion_type,
}
}
}
impl Builder for TarballBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer: Box<dyn Artifact> = match self.ty {
ConversionType::EStargzToRafs
| ConversionType::EStargzToRef
| ConversionType::TargzToRafs
| ConversionType::TargzToRef
| ConversionType::TarToRafs
| ConversionType::TarToTarfs => {
if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
}
}
_ => {
return Err(anyhow!(
"tarball: unsupported image conversion type '{}'",
self.ty
))
}
};
let mut tree_builder =
TarballTreeBuilder::new(self.ty, ctx, blob_mgr, blob_writer.as_mut(), layer_idx);
let tree = timing_tracer!({ tree_builder.build_tree() }, "build_tree")?;
// Build bootstrap
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::attributes::Attributes;
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
use nydus_utils::{compress, digest};
#[test]
fn test_build_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
#[test]
fn test_build_encrypted_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
true,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
}

View File

@ -1,28 +0,0 @@
[package]
name = "nydus-clib"
version = "0.1.0"
description = "C wrapper library for Nydus SDK"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[lib]
name = "nydus_clib"
crate-type = ["cdylib", "staticlib"]
[dependencies]
libc = "0.2.137"
log = "0.4.17"
fuse-backend-rs = "^0.12.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage" }
[features]
backend-s3 = ["nydus-storage/backend-s3"]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = ["nydus-storage/backend-localdisk"]

View File

@ -1 +0,0 @@
../LICENSE-APACHE

View File

@ -1,20 +0,0 @@
#include <stdio.h>
#include "../nydus.h"
int main(int argc, char **argv)
{
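/* Inline TOML configuration: a localfs backend serving blobs from the test
   texture directory, with a dummy (no-op) blob cache. */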
char *bootstrap = "../../tests/texture/repeatable/sha256-nocompress-repeatable";
char *config = "version = 2\nid = \"my_id\"\n[backend]\ntype = \"localfs\"\n[backend.localfs]\ndir = \"../../tests/texture/repeatable/blobs\"\n[cache]\ntype = \"dummycache\"\n[rafs]";
NydusFsHandle fs_handle;
fs_handle = nydus_open_rafs(bootstrap, config);
if (fs_handle == NYDUS_INVALID_FS_HANDLE) {
printf("failed to open rafs filesystem from ../../tests/texture/repeatable/sha256-nocompress-repeatable\n");
return -1;
}
printf("succeed to open rafs filesystem from ../../tests/texture/repeatable/sha256-nocompress-repeatable\n");
nydus_close_rafs(fs_handle);
return 0;
}

View File

@ -1,70 +0,0 @@
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
/**
* Magic number for Nydus file handle.
*/
#define NYDUS_FILE_HANDLE_MAGIC 17148644263605784967ull
/**
* Value representing an invalid Nydus file handle.
*/
#define NYDUS_INVALID_FILE_HANDLE 0
/**
* Magic number for Nydus filesystem handle.
*/
#define NYDUS_FS_HANDLE_MAGIC 17148643159786606983ull
/**
* Value representing an invalid Nydus filesystem handle.
*/
#define NYDUS_INVALID_FS_HANDLE 0
/**
* Handle representing a Nydus file object.
*/
typedef uintptr_t NydusFileHandle;
/**
* Handle representing a Nydus filesystem object.
*/
typedef uintptr_t NydusFsHandle;
/**
* Open the file with `path` in readonly mode.
*
* The `NydusFileHandle` returned should be freed by calling `nydus_fclose()`.
*/
NydusFileHandle nydus_fopen(NydusFsHandle fs_handle, const char *path);
/**
* Close the file handle returned by `nydus_fopen()`.
*/
void nydus_fclose(NydusFileHandle handle);
/**
* Open a RAFS filesystem and return a handle to the filesystem object.
*
* The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
* it will cause a memory leak.
*/
NydusFsHandle nydus_open_rafs(const char *bootstrap, const char *config);
/**
* Open a RAFS filesystem with default configuration and return a handle to the filesystem object.
*
* The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
* it will cause a memory leak.
*/
NydusFsHandle nydus_open_rafs_default(const char *bootstrap, const char *dir_path);
/**
* Close the RAFS filesystem returned by `nydus_open_rafs()` and friends.
*
* All `NydusFileHandle` objects created from the `NydusFsHandle` should be freed before calling
* `nydus_close_rafs()`, otherwise it may cause a panic.
*/
void nydus_close_rafs(NydusFsHandle handle);

View File

@ -1,90 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Implement file operations for RAFS filesystem in userspace.
//!
//! Provide the following file operation functions to access files in a RAFS filesystem:
//! - fopen: open a file in readonly mode
//! - fclose: close an opened file
//! - fread: read data from a file
//! - fwrite: write data to a file
//! - fseek: reposition the read position within a file
//! - ftell: report the current position within a file
use std::os::raw::c_char;
use std::ptr::null_mut;
use fuse_backend_rs::api::filesystem::{Context, FileSystem};
use crate::{set_errno, FileSystemState, Inode, NydusFsHandle};
/// Magic number for Nydus file handle.
pub const NYDUS_FILE_HANDLE_MAGIC: u64 = 0xedfc_3919_afc3_5187;
/// Value representing an invalid Nydus file handle.
pub const NYDUS_INVALID_FILE_HANDLE: usize = 0;
/// Handle representing a Nydus file object.
pub type NydusFileHandle = usize;
#[repr(C)]
pub(crate) struct FileState {
magic: u64,
ino: Inode,
pos: u64,
fs_handle: NydusFsHandle,
}
/// Open the file with `path` in readonly mode.
///
/// The `NydusFileHandle` returned should be freed by calling `nydus_fclose()`.
///
/// # Safety
/// Caller needs to ensure `fs_handle` and `path` are valid, otherwise it may cause a memory
/// access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_fopen(
fs_handle: NydusFsHandle,
path: *const c_char,
) -> NydusFileHandle {
if path.is_null() {
set_errno(libc::EINVAL);
return null_mut::<FileState>() as NydusFileHandle;
}
let fs = match FileSystemState::try_from_handle(fs_handle) {
Err(e) => {
set_errno(e);
return null_mut::<FileState>() as NydusFileHandle;
}
Ok(v) => v,
};
////////////////////////////////////////////////////////////
// TODO: open file;
//////////////////////////////////////////////////////////////////////////
let file = Box::new(FileState {
magic: NYDUS_FILE_HANDLE_MAGIC,
ino: fs.root_ino,
pos: 0,
fs_handle,
});
Box::into_raw(file) as NydusFileHandle
}
/// Close the file handle returned by `nydus_fopen()`.
///
/// # Safety
/// Caller needs to ensure `handle` is valid, otherwise it may cause a memory access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_fclose(handle: NydusFileHandle) {
let mut file = Box::from_raw(handle as *mut FileState);
assert_eq!(file.magic, NYDUS_FILE_HANDLE_MAGIC);
let ctx = Context::default();
let fs = FileSystemState::from_handle(file.fs_handle);
fs.rafs.forget(&ctx, file.ino, 1);
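// Invalidate the magic value before the Box is dropped, so a dangling handle
// reused after close trips the assertion above.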
file.magic -= 0x4fdf_ae34_9d9a_03cd;
}

View File

@ -1,251 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Provide structures and functions to open/close/access a filesystem instance.
use std::ffi::CStr;
use std::os::raw::c_char;
use std::path::Path;
use std::ptr::{null, null_mut};
use std::str::FromStr;
use std::sync::Arc;
use nydus_api::ConfigV2;
use nydus_rafs::fs::Rafs;
use crate::{cstr_to_str, set_errno, Inode};
/// Magic number for Nydus filesystem handle.
pub const NYDUS_FS_HANDLE_MAGIC: u64 = 0xedfc_3818_af03_5187;
/// Value representing an invalid Nydus filesystem handle.
pub const NYDUS_INVALID_FS_HANDLE: usize = 0;
/// Handle representing a Nydus filesystem object.
pub type NydusFsHandle = usize;
#[repr(C)]
pub(crate) struct FileSystemState {
magic: u64,
pub(crate) root_ino: Inode,
pub(crate) rafs: Rafs,
}
impl FileSystemState {
/// Caller needs to ensure that the returned reference does not outlive the filesystem object.
pub(crate) unsafe fn from_handle(hdl: NydusFsHandle) -> &'static mut Self {
let fs = &mut *(hdl as *const FileSystemState as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
fs
}
/// Caller needs to ensure that the returned reference does not outlive the filesystem object.
pub(crate) unsafe fn try_from_handle(hdl: NydusFsHandle) -> Result<&'static mut Self, i32> {
if hdl == null::<FileSystemState>() as usize {
return Err(libc::EINVAL);
}
let fs = &mut *(hdl as *const FileSystemState as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
Ok(fs)
}
}
fn fs_error_einval() -> NydusFsHandle {
set_errno(libc::EINVAL);
null_mut::<FileSystemState>() as NydusFsHandle
}
fn default_localfs_rafs_config(dir: &str) -> String {
format!(
r#"
version = 2
id = "my_id"
[backend]
type = "localfs"
[backend.localfs]
dir = "{}"
[cache]
type = "dummycache"
[rafs]
"#,
dir
)
}
fn do_nydus_open_rafs(bootstrap: &str, config: &str) -> NydusFsHandle {
let cfg = match ConfigV2::from_str(config) {
Ok(v) => v,
Err(e) => {
warn!("failed to parse configuration info: {}", e);
return fs_error_einval();
}
};
let cfg = Arc::new(cfg);
let (mut rafs, reader) = match Rafs::new(&cfg, &cfg.id, Path::new(bootstrap)) {
Err(e) => {
warn!(
"failed to open filesystem from bootstrap {}, {}",
bootstrap, e
);
return fs_error_einval();
}
Ok(v) => v,
};
if let Err(e) = rafs.import(reader, None) {
warn!("failed to import RAFS filesystem, {}", e);
return fs_error_einval();
}
let root_ino = rafs.metadata().root_inode;
let fs = Box::new(FileSystemState {
magic: NYDUS_FS_HANDLE_MAGIC,
root_ino,
rafs,
});
Box::into_raw(fs) as NydusFsHandle
}
/// Open a RAFS filesystem and return a handle to the filesystem object.
///
/// The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
/// it will cause a memory leak.
///
/// # Safety
/// Caller needs to ensure `bootstrap` and `config` are valid, otherwise it may cause a memory
/// access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_open_rafs(
bootstrap: *const c_char,
config: *const c_char,
) -> NydusFsHandle {
if bootstrap.is_null() || config.is_null() {
return fs_error_einval();
}
let bootstrap = cstr_to_str!(bootstrap, null_mut::<FileSystemState>() as NydusFsHandle);
let config = cstr_to_str!(config, null_mut::<FileSystemState>() as NydusFsHandle);
do_nydus_open_rafs(bootstrap, config)
}
/// Open a RAFS filesystem with default configuration and return a handle to the filesystem object.
///
/// The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
/// it will cause a memory leak.
///
/// # Safety
/// Caller needs to ensure `bootstrap` and `dir_path` are valid, otherwise it may cause a
/// memory access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_open_rafs_default(
bootstrap: *const c_char,
dir_path: *const c_char,
) -> NydusFsHandle {
if bootstrap.is_null() || dir_path.is_null() {
return fs_error_einval();
}
let bootstrap = cstr_to_str!(bootstrap, null_mut::<FileSystemState>() as NydusFsHandle);
let dir_path = cstr_to_str!(dir_path, null_mut::<FileSystemState>() as NydusFsHandle);
let p_tmp;
let mut path = Path::new(bootstrap);
// A bare file name has an empty parent (`Path::parent()` returns `Some("")`, not `None`);
// treat it as relative to `dir_path`.
if path.parent().map_or(true, |p| p.as_os_str().is_empty()) {
p_tmp = Path::new(dir_path).join(bootstrap);
path = &p_tmp;
}
let bootstrap = match path.to_str() {
Some(v) => v,
None => {
warn!("invalid bootstrap path '{}'", bootstrap);
return fs_error_einval();
}
};
let config = default_localfs_rafs_config(dir_path);
do_nydus_open_rafs(bootstrap, &config)
}
/// Close the RAFS filesystem returned by `nydus_open_rafs()` and friends.
///
/// All `NydusFileHandle` objects created from the `NydusFsHandle` should be freed before calling
/// `nydus_close_rafs()`, otherwise it may cause a panic.
///
/// # Safety
/// Caller needs to ensure `handle` is valid, otherwise it may cause a memory access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_close_rafs(handle: NydusFsHandle) {
let mut fs = Box::from_raw(handle as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
fs.magic -= 0x4fdf_03cd_ae34_9d9a;
fs.rafs.destroy().unwrap();
}
#[cfg(test)]
mod tests {
use super::*;
use std::ffi::CString;
use std::io::Error;
use std::path::PathBuf;
use std::ptr::null;
pub(crate) fn open_file_system() -> NydusFsHandle {
let ret = unsafe { nydus_open_rafs(null(), null()) };
assert_eq!(ret, NYDUS_INVALID_FS_HANDLE);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::EINVAL)
);
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let bootstrap = PathBuf::from(root_dir)
.join("../tests/texture/repeatable/sha256-nocompress-repeatable");
let bootstrap = bootstrap.to_str().unwrap();
let bootstrap = CString::new(bootstrap).unwrap();
let blob_dir = PathBuf::from(root_dir).join("../tests/texture/repeatable/blobs");
let config = format!(
r#"
version = 2
id = "my_id"
[backend]
type = "localfs"
[backend.localfs]
dir = "{}"
[cache]
type = "dummycache"
[rafs]
"#,
blob_dir.display()
);
let config = CString::new(config).unwrap();
let fs = unsafe {
nydus_open_rafs(
bootstrap.as_ptr() as *const c_char,
config.as_ptr() as *const c_char,
)
};
assert_ne!(fs, NYDUS_INVALID_FS_HANDLE);
fs
}
#[test]
fn test_open_rafs() {
let fs = open_file_system();
unsafe { nydus_close_rafs(fs) };
}
#[test]
fn test_open_rafs_default() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let bootstrap = PathBuf::from(root_dir)
.join("../tests/texture/repeatable/sha256-nocompress-repeatable");
let bootstrap = bootstrap.to_str().unwrap();
let bootstrap = CString::new(bootstrap).unwrap();
let blob_dir = PathBuf::from(root_dir).join("../tests/texture/repeatable/blobs");
// `str::as_ptr()` is not NUL-terminated; convert to a `CString` first.
let blob_dir = CString::new(blob_dir.to_str().unwrap()).unwrap();
let fs = unsafe { nydus_open_rafs_default(bootstrap.as_ptr(), blob_dir.as_ptr()) };
unsafe { nydus_close_rafs(fs) };
}
}

View File

@ -1,80 +0,0 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! SDK C wrappers to access `nydus-rafs` and `nydus-storage` functionalities.
//!
//! # Generate Header File
//! Please use cbindgen to generate the `nydus.h` header file from the Rust source code:
//! ```
//! cargo install cbindgen
//! cbindgen -l c -v -o include/nydus.h
//! ```
//!
//! # Run C Test
//! ```
//! gcc -o nydus nydus_rafs.c -L ../../target/debug/ -lnydus_clib
//! ```
#[macro_use]
extern crate log;
extern crate core;
pub use file::*;
pub use fs::*;
mod file;
mod fs;
/// Type for RAFS filesystem inode number.
pub type Inode = u64;
/// Helper to set libc::errno
#[cfg(target_os = "linux")]
fn set_errno(errno: i32) {
unsafe { *libc::__errno_location() = errno };
}
/// Helper to set libc::errno
#[cfg(target_os = "macos")]
fn set_errno(errno: i32) {
unsafe { *libc::__error() = errno };
}
/// Macro to convert C `char *` into rust `&str`.
#[macro_export]
macro_rules! cstr_to_str {
($var: ident, $ret: expr) => {{
let s = CStr::from_ptr($var);
match s.to_str() {
Ok(v) => v,
Err(_e) => {
set_errno(libc::EINVAL);
return $ret;
}
}
}};
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Error;
#[test]
fn test_set_errno() {
assert_eq!(Error::raw_os_error(&Error::last_os_error()), Some(0));
set_errno(libc::EINVAL);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::EINVAL)
);
set_errno(libc::ENOSYS);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::ENOSYS)
);
set_errno(0);
assert_eq!(Error::raw_os_error(&Error::last_os_error()), Some(0));
}
}

contrib/ctr-remote/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
bin/

View File

@ -0,0 +1,13 @@
all:clear build
.PHONY: build
build:
GOOS=linux go build -v -o bin/ctr-remote ./cmd/main.go
.PHONY: clear
clear:
rm -f bin/*
.PHONY: static-release
static-release:
GOOS=linux go build -ldflags '-s -w -extldflags "-static"' -v -o bin/ctr-remote ./cmd/main.go

View File

@ -0,0 +1,64 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"os"
"github.com/containerd/containerd/cmd/ctr/app"
"github.com/containerd/containerd/pkg/seed"
"github.com/dragonflyoss/image-service/contrib/ctr-remote/commands"
"github.com/urfave/cli"
)
func init() {
seed.WithTimeAndRand()
}
func main() {
customCommands := []cli.Command{commands.RpullCommand}
app := app.New()
for i := range app.Commands {
if app.Commands[i].Name == "images" {
sc := map[string]cli.Command{}
for _, subcmd := range customCommands {
sc[subcmd.Name] = subcmd
}
// First, replace duplicated subcommands
for j := range app.Commands[i].Subcommands {
for name, subcmd := range sc {
if name == app.Commands[i].Subcommands[j].Name {
app.Commands[i].Subcommands[j] = subcmd
delete(sc, name)
}
}
}
// Next, append all new sub commands
for _, subcmd := range sc {
app.Commands[i].Subcommands = append(app.Commands[i].Subcommands, subcmd)
}
break
}
}
if err := app.Run(os.Args); err != nil {
fmt.Fprintf(os.Stderr, "ctr-remote: %v\n", err)
os.Exit(1)
}
}

View File

@ -0,0 +1,148 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package commands
import (
"context"
"fmt"
"strings"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cmd/ctr/commands"
"github.com/containerd/containerd/cmd/ctr/commands/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/labels"
"github.com/containerd/containerd/log"
"github.com/dragonflyoss/image-service/contrib/nydus-snapshotter/pkg/label"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
const (
remoteSnapshotterName = "nydus"
targetManifestDigestLabel = "containerd.io/snapshot/cri.manifest-digest"
)
var RpullCommand = cli.Command{
Name: "rpull",
Usage: "pull an image from a registry levaraging nydus snapshotter",
ArgsUsage: "[flags] <ref>",
Description: `Fetch and prepare an image for use in containerd leveraging the nydus snapshotter.
After pulling an image, it should be ready to use the same reference in a run command.`,
Flags: append(commands.RegistryFlags, commands.LabelFlag),
Action: func(context *cli.Context) error {
var (
ref = context.Args().First()
config = &rPullConfig{}
)
if ref == "" {
return fmt.Errorf("please provide an image reference to pull")
}
client, ctx, cancel, err := commands.NewClient(context)
if err != nil {
return err
}
defer cancel()
ctx, done, err := client.WithLease(ctx)
if err != nil {
return err
}
defer done(ctx)
fc, err := content.NewFetchConfig(ctx, context)
if err != nil {
return err
}
config.FetchConfig = fc
if err := pull(ctx, client, ref, config); err != nil {
return err
}
return nil
},
}
type rPullConfig struct {
*content.FetchConfig
}
func pull(ctx context.Context, client *containerd.Client, ref string, config *rPullConfig) error {
pCtx := ctx
h := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if desc.MediaType != images.MediaTypeDockerSchema1Manifest {
fmt.Printf("fetching %v... %v\n", desc.Digest.String()[:15], desc.MediaType)
}
return nil, nil
})
log.G(pCtx).WithField("image", ref).Debug("fetching")
configLabels := commands.LabelArgs(config.Labels)
if _, err := client.Pull(pCtx, ref, []containerd.RemoteOpt{
containerd.WithPullLabels(configLabels),
containerd.WithResolver(config.Resolver),
containerd.WithImageHandler(h),
containerd.WithSchema1Conversion,
containerd.WithPullUnpack,
containerd.WithPullSnapshotter(remoteSnapshotterName),
containerd.WithImageHandlerWrapper(appendDefaultLabelsHandlerWrapper(ref)),
}...); err != nil {
return err
}
return nil
}
func appendDefaultLabelsHandlerWrapper(ref string) func(f images.Handler) images.Handler {
return func(f images.Handler) images.Handler {
return images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
children, err := f.Handle(ctx, desc)
if err != nil {
return nil, err
}
switch desc.MediaType {
case ocispec.MediaTypeImageManifest, images.MediaTypeDockerSchema2Manifest:
for i := range children {
c := &children[i]
if images.IsLayerType(c.MediaType) {
if c.Annotations == nil {
c.Annotations = make(map[string]string)
}
c.Annotations[label.ImageRef] = ref
c.Annotations[label.CRIDigest] = c.Digest.String()
var layers string
for _, l := range children[i:] {
if images.IsLayerType(l.MediaType) {
ls := fmt.Sprintf("%s,", l.Digest.String())
// This avoids the label hitting the size limitation.
// Skipping layers is allowed here and only affects performance.
if err := labels.Validate(label.NydusDataLayer, layers+ls); err != nil {
break
}
layers += ls
}
}
c.Annotations[label.CRIImageLayer] = strings.TrimSuffix(layers, ",")
c.Annotations[targetManifestDigestLabel] = desc.Digest.String()
}
}
}
return children, nil
})
}
}

contrib/ctr-remote/go.mod Normal file
View File

@ -0,0 +1,10 @@
module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.16
require (
github.com/containerd/containerd v1.5.8
github.com/dragonflyoss/image-service/contrib/nydus-snapshotter v0.0.0-20210812024946-ec518a7d1cb8
github.com/opencontainers/image-spec v1.0.1
github.com/urfave/cli v1.22.5
)

contrib/ctr-remote/go.sum Normal file

File diff suppressed because it is too large.

View File

@ -0,0 +1 @@
/bin

View File

@ -0,0 +1,13 @@
FROM golang:1.16
ARG GOPROXY="https://goproxy.cn,direct"
RUN mkdir -p /app
WORKDIR /app
COPY . ./
RUN CGO_ENABLED=0 GOOS=linux go build -v .
FROM alpine:3.13.6
RUN mkdir -p /plugin; mkdir -p /nydus
ARG NYDUSD_PATH=./nydusd
COPY --from=0 /app/nydus_graphdriver /plugin/nydus_graphdriver
COPY ${NYDUSD_PATH} /nydus
ENTRYPOINT [ "/plugin/nydus_graphdriver" ]

View File

@ -0,0 +1,13 @@
all:clear build
.PHONY: build
build:
GOOS=linux go build -v -o bin/nydus_graphdriver .
.PHONY: clear
clear:
rm -f bin/*
.PHONY: static-release
static-release:
GOOS=linux go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus_graphdriver .

View File

@ -1,3 +1,65 @@
# Docker Nydus Graph Driver
Moved to [docker-nydus-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver).
Docker supports remote graph drivers as plugins. With the nydus graph driver, you can start a container from a previously converted nydus image. The initial intent of building the graph driver was to give users a quick way to experience the speed of starting a container from a nydus image, so it is **not ready for production usage**. If docker is important in your use case, a PR is welcome and we would like to hear your story. We might enhance this in the future.
## Architecture
---
![Architecture](../../docs/images/docker_graphdriver_arch.png)
## Procedures
### 1 Configure Nydus
Put your nydus configuration at `/var/lib/nydus/config.json`; this is also where the nydus remote backend is specified.
### 2 Install Graph Driver Plugin
#### Install from DockerHub
```
$ docker plugin install gechangwei/docker-nydus-graphdriver:0.2.0
```
### 3 Enable the Graph Driver
Before the nydus graph driver can be used to start containers, the plugin must be enabled.
```
$ sudo docker plugin enable gechangwei/docker-nydus-graphdriver:0.2.0
```
### 4 Switch to the Nydus Graph Driver
By default, docker manages all images with its built-in `overlay` graph driver. Switch to the nydus graph driver by specifying it in docker's daemon configuration file.
```
{
"experimental": true,
"storage-driver": "gechangwei/docker-nydus-graphdriver:0.2.0"
}
```
### 5 Restart Docker Service
```
$ sudo systemctl restart docker
```
## Verification
Execute `docker info` to verify that the above steps were all completed and that the nydus graph driver works normally.
![Docker Info](../../docs/images/docker_info_storage_driver.png)
## Start Container
Now, just `run` a container or `pull` an image as you are used to.
## Limitations
1. Docker version >= 20.10.2 is required. Lower versions probably work well, but they have not been tested yet.
2. When converting images through `nydusify`, the backend must be specified as `oss`.
3. The nydus graph driver is not compatible with classic OCI images, so you have to switch back to the built-in graph driver to use those images.

View File

@ -0,0 +1,43 @@
{
"description": "nydus image service plugin for Docker",
"documentation": "https://docs.docker.com/engine/extend/plugins/",
"entrypoint": [
"/plugin/nydus_graphdriver"
],
"network": {
"type": "host"
},
"interface": {
"types": [
"docker.graphdriver/1.0"
],
"socket": "plugin.sock"
},
"linux": {
"capabilities": [
"CAP_SYS_ADMIN",
"CAP_SYS_RESOURCE"
],
"Devices": [
{
"Path": "/dev/fuse"
}
]
},
"PropagatedMount": "/home",
"Mounts": [
{
"Name": "NYDUS_CONFIG",
"Source": "/var/lib/nydus/config.json",
"Destination": "/nydus/config.json",
"Type": "none",
"Options": [
"bind",
"ro"
],
"Settable": [
"source"
]
}
]
}

View File

@ -0,0 +1,20 @@
module github.com/dragonflyoss/image-service/contrib/nydus_graphdriver
go 1.15
require (
github.com/containerd/containerd v1.5.4 // indirect
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect
github.com/docker/docker v20.10.7+incompatible
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-plugins-helpers v0.0.0-20200102110956-c9a8a2d92ccc
github.com/moby/sys/mount v0.2.0 // indirect
github.com/moby/sys/mountinfo v0.4.1
github.com/opencontainers/selinux v1.8.0
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.8.1
github.com/vbatts/tar-split v0.11.1 // indirect
golang.org/x/net v0.0.0-20210525063256-abc453219eb5 // indirect
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139
google.golang.org/grpc v1.38.0 // indirect
)

View File

@ -0,0 +1,957 @@
bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
github.com/Microsoft/go-winio v0.4.16-0.20201130162521-d1ffc52c7331/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.4.17 h1:iT12IBVClFevaf8PuVyi3UmZOVh4OqnaLxDTW2O6j3w=
github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
github.com/Microsoft/hcsshim v0.8.18 h1:cYnKADiM1869gvBpos3YCteeT6sZLB48lB5dmMMs8Tg=
github.com/Microsoft/hcsshim v0.8.18/go.mod h1:+w2gRZ5ReXQhFOrvSQeNfhrYB/dg3oDwTOcER2fw4I4=
github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/aws/aws-sdk-go v1.15.11/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
github.com/buger/jsonparser v0.0.0-20180808090653-f4dd9f5a6b44/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc=
github.com/cilium/ebpf v0.2.0/go.mod h1:To2CFviqOWL/M0gIMsvSMlqe7em/l1ALkX1PyjrX2Qs=
github.com/cilium/ebpf v0.4.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/containerd/aufs v0.0.0-20200908144142-dab0cbea06f4/go.mod h1:nukgQABAEopAHvB6j7cnP5zJ+/3aVcE7hCYqvIwAHyE=
github.com/containerd/aufs v0.0.0-20201003224125-76a6863f2989/go.mod h1:AkGGQs9NM2vtYHaUen+NljV0/baGCAPELGm2q9ZXpWU=
github.com/containerd/aufs v0.0.0-20210316121734-20793ff83c97/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
github.com/containerd/btrfs v0.0.0-20201111183144-404b9149801e/go.mod h1:jg2QkJcsabfHugurUvvPhS3E08Oxiuh5W/g1ybB4e0E=
github.com/containerd/btrfs v0.0.0-20210316141732-918d888fb676/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
github.com/containerd/btrfs v1.0.0/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
github.com/containerd/cgroups v0.0.0-20190717030353-c4b9ac5c7601/go.mod h1:X9rLEHIqSf/wfK8NsPqxJmeZgW4pcfzdXITDrUSJ6uI=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1pT8KYB3TCXK/ocprsh7MAkoW8bZVzPdih9snmM=
github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
github.com/containerd/cgroups v1.0.1 h1:iJnMvco9XGvKUvNQkv88bE4uJXxRQH18efbKo9w5vHQ=
github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ=
github.com/containerd/containerd v1.2.10/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.1-0.20191213020239-082f7e3aed57/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.0-beta.2.0.20200729163537-40b22ef07410/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.5.0-beta.1/go.mod h1:5HfvG1V2FsKesEGQ17k5/T7V960Tmcumvqn8Mc+pCYQ=
github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo/uBBoBORwEx6ardVcmKU=
github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
github.com/containerd/containerd v1.5.1/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
github.com/containerd/containerd v1.5.4 h1:uPF0og3ByFzDnaStfiQj3fVGTEtaSNyU+bW7GR/nqGA=
github.com/containerd/containerd v1.5.4/go.mod h1:sx18RgvW6ABJ4iYUw7Q5x7bgFOAB9B6G7+yO0XBc4zw=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20200710164510-efbc4488d8fe/go.mod h1:cECdGN1O8G9bgKTlLhuPJimka6Xb/Gg7vYzCTNVxhvo=
github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
github.com/containerd/continuity v0.0.0-20210208174643-50096c924a4e/go.mod h1:EXlVlkqNba9rJe3j7w3Xa924itAMLgZH4UD/Q4PExuQ=
github.com/containerd/continuity v0.1.0 h1:UFRRY5JemiAhPZrr/uE0n8fMTLcZsUvySPr1+D7pgr8=
github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
github.com/containerd/fifo v0.0.0-20201026212402-0724c46b320c/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
github.com/containerd/fifo v0.0.0-20210316144830-115abcc95a1d/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
github.com/containerd/go-cni v1.0.1/go.mod h1:+vUpYxKvAF72G9i1WoDOiPGRtQpqsNW/ZHtSlv++smU=
github.com/containerd/go-cni v1.0.2/go.mod h1:nrNABBHzu0ZwCug9Ije8hL2xBCYh/pjfMb1aZGrrohk=
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
github.com/containerd/go-runc v0.0.0-20201020171139-16b287bc67d0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
github.com/containerd/imgcrypt v1.0.1/go.mod h1:mdd8cEPW7TPgNG4FpuP3sGBiQ7Yi/zak9TYCG3juvb0=
github.com/containerd/imgcrypt v1.0.4-0.20210301171431-0ae5c75f59ba/go.mod h1:6TNsg0ctmizkrOgXRNQjAPFWpMYRWuiB6dSF4Pfa5SA=
github.com/containerd/imgcrypt v1.1.1-0.20210312161619-7ed62a527887/go.mod h1:5AZJNI6sLHJljKuI9IHnw1pWqo/F0nGDOuR9zgTs7ow=
github.com/containerd/imgcrypt v1.1.1/go.mod h1:xpLnwiQmEUJPvQoAapeb2SNCxz7Xr6PJrXQb0Dpc4ms=
github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFYfE5+So4M5syatU0N0f0LbWpuqyMi4/BE8c=
github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
github.com/containerd/ttrpc v1.0.1/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
github.com/containerd/zfs v0.0.0-20200918131355-0a33824f23a2/go.mod h1:8IgZOBdv8fAgXddBT4dBXJPtxyRsejFIpXoklgxgEjw=
github.com/containerd/zfs v0.0.0-20210301145711-11e8f1707f62/go.mod h1:A9zfAbMlQwE+/is6hi0Xw8ktpL+6glmqZYtevJgaB8Y=
github.com/containerd/zfs v0.0.0-20210315114300-dde8f0fda960/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
github.com/containerd/zfs v0.0.0-20210324211415-d5c4544f0433/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
github.com/containerd/zfs v1.0.0/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/containernetworking/cni v0.8.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/containernetworking/cni v0.8.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/containernetworking/plugins v0.8.6/go.mod h1:qnw5mN19D8fIwkqW7oHHYDHVlzhJpcY6TQxn/fUyDDM=
github.com/containernetworking/plugins v0.9.1/go.mod h1:xP/idU2ldlzN6m4p5LmGiwRDjeJr6FLK6vuiUwoH7P8=
github.com/containers/ocicrypt v1.0.1/go.mod h1:MeJDzk1RJHv89LjsH0Sp5KTY3ZYkjXO/C+bKAeWFIrc=
github.com/containers/ocicrypt v1.1.0/go.mod h1:b8AOe0YR67uU8OqfVNcznfFpAzu3rdgUV4GP9qXPfu4=
github.com/containers/ocicrypt v1.1.1/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-iptables v0.4.5/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-iptables v0.5.0/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20161114122254-48702e0da86b/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/go-systemd/v22 v22.1.0 h1:kq/SbG2BCKLkDKkjQf5OWwKWUKj1lgs3lFI4PxnR5lg=
github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
github.com/d2g/dhcp4 v0.0.0-20170904100407-a1d1b6c41b1c/go.mod h1:Ct2BUK8SB0YC1SMSibvLzxjeJLnrYEVLULFNiHY9YfQ=
github.com/d2g/dhcp4client v1.0.0/go.mod h1:j0hNfjhrt2SxUOw55nL0ATM/z4Yt3t2Kd1mW34z5W5s=
github.com/d2g/dhcp4server v0.0.0-20181031114812-7d4a0a7f59a5/go.mod h1:Eo87+Kg/IX2hfWJfwxMzLyuSZyxSoAug2nGa1G2QAi8=
github.com/d2g/hardwareaddr v0.0.0-20190221164911-e7d9fbe030e4/go.mod h1:bMl4RjIciD2oAxI7DmWRx6gbeqrkoLqv3MV0vzNad+I=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/docker/distribution v0.0.0-20190905152932-14b96e55d84c/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.7+incompatible h1:Z6O9Nhsjv+ayUEeI1IojKbYcsGdgYSNqxe1s2MYzUhQ=
github.com/docker/docker v20.10.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-events v0.0.0-20170721190031-9461782956ad/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
github.com/docker/go-plugins-helpers v0.0.0-20200102110956-c9a8a2d92ccc h1:/A+mPcpajLsWiX9gSnzdVKM/IzZoYiNqXHe83z50k2c=
github.com/docker/go-plugins-helpers v0.0.0-20200102110956-c9a8a2d92ccc/go.mod h1:LFyLie6XcDbyKGeVK6bHe+9aJTYCxWLBg5IrJZOaXKA=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
github.com/godbus/dbus v0.0.0-20180201030542-885f9cc04c9c/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e h1:BWhy2j3IXJhjCbC68FptL43tDKIq8FladmaTs3Xs7Z8=
github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
github.com/godbus/dbus/v5 v5.0.3 h1:ZqHaoEF7TBzh4jzPmqVhE/5A1z9of6orkAe5uHoAeME=
github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4 h1:L8R9j+yAqZuZjsqh/z+F1NCffTKKLShY6zXTItVIZ8M=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-shellwords v1.0.3/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/sys/mount v0.2.0 h1:WhCW5B355jtxndN5ovugJlMFJawbUODuW8fSnEH6SSM=
github.com/moby/sys/mount v0.2.0/go.mod h1:aAivFE2LB3W4bACsUXChRHQ0qKWsetY4Y9V7sxOougM=
github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/mountinfo v0.4.1 h1:1O+1cHA1aujwEwwVMa2Xm2l+gIpUHyd3+D+d7LZh1kM=
github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/symlink v0.1.0 h1:MTFZ74KtNI6qQQpuBxU+uKCim4WtOMokr03hCfJcazE=
github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v0.0.0-20151202141238-7f8ab55aaf3b/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/gomega v0.0.0-20151007035656-2152b45fa28a/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0-rc1.0.20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc8.0.20190926000215-3e425f80a8c9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc93 h1:x2UMpOOVf3kQ8arv/EsDGwim8PTNqzL1/EYDr/+scOM=
github.com/opencontainers/runc v1.0.0-rc93/go.mod h1:3NOsor4w32B2tC0Zbl8Knk4Wg84SM2ImC1fxBuqJ/H0=
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.2-0.20190207185410-29686dbc5559/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d h1:pNa8metDkwZjb9g4T8s+krQ+HRgZAkqnXml+wNir/+s=
github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE=
github.com/opencontainers/selinux v1.8.0 h1:+77ba4ar4jsCbL1GLbFL8fFM57w6suPfSS9PDLDY7KM=
github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3ogry1nUQF8Evvo=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tchap/go-patricia v2.2.6+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vbatts/tar-split v0.11.1 h1:0Odu65rhcZ3JZaPHxl7tCI3V/C/Q9Zf82UFravl02dE=
github.com/vbatts/tar-split v0.11.1/go.mod h1:LEuURwDEiWjRjwu46yU3KVGuUdVv/dcnpcEPSzR8z6g=
github.com/vishvananda/netlink v0.0.0-20181108222139-023a6dafdcdf/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netlink v1.1.1-0.20201029203352-d40f9887b852/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/willf/bitset v1.1.11 h1:N7Z7E9UvjW+sGsEl7k/SJrvY2reP1A07MrGuCjIOjRE=
github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3 h1:8sGtKOrtQqkN1bp2AtX+misvLIlOmsEsNd+9NIcPEm8=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181009213950-7c1a557ab941/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5 h1:wjuX4b5yYQnEQHzd+CBcrcC6OVR2J1CN6mUy0oSxIPo=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190522044717-8097e1b27ff5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190812073006-9eafafc0a87e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200817155316-9781c653f443/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201117170446-d9b008d0a637/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201202213521-69691e467435/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139 h1:C+AwYEtBp/VQwoLntUmQ/yx3MS9vmZaKNdw5eOpoQe8=
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190522204451-c2c4e71fbf69/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a h1:pOwg4OoaRYScjmR4LlLgdtnyoHYTSAVhhqe5uPdpII8=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ=
k8s.io/api v0.20.6/go.mod h1:X9e8Qag6JV/bL5G6bU8sdVRltWKmdHsFUGS3eVndqE8=
k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.20.6/go.mod h1:ejZXtW1Ra6V1O5H8xPBGz+T3+4gfkTCeExAHKU57MAc=
k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
k8s.io/apiserver v0.20.4/go.mod h1:Mc80thBKOyy7tbvFtB4kJv1kbdD0eIH8k8vianJcbFM=
k8s.io/apiserver v0.20.6/go.mod h1:QIJXNt6i6JB+0YQRNcS0hdRHJlMhflFmsBDeSgT1r8Q=
k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
k8s.io/client-go v0.20.4/go.mod h1:LiMv25ND1gLUdBeYxBIwKpkSC5IsozMMmOOeSJboP+k=
k8s.io/client-go v0.20.6/go.mod h1:nNQMnOvEUEsOzRRFIIkdmYOjAZrC8bgq0ExboWSU1I0=
k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk=
k8s.io/component-base v0.20.4/go.mod h1:t4p9EdiagbVCJKrQ1RsA5/V4rFQNDfRlevJajlGwgjI=
k8s.io/component-base v0.20.6/go.mod h1:6f1MPBAeI+mvuts3sIdtpjljHWBQ2cIy38oBIWMYnrM=
k8s.io/cri-api v0.17.3/go.mod h1:X1sbHmuXhwaHs9xxYffLqJogVsnI+f6cPRcgPel7ywM=
k8s.io/cri-api v0.20.1/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
k8s.io/cri-api v0.20.4/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
k8s.io/cri-api v0.20.6/go.mod h1:ew44AjNXwyn1s0U4xCKGodU7J1HzBeZ1MpGrpa5r8Yc=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.15/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.0.3/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=

View File

@ -0,0 +1,16 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package main

import (
	"github.com/docker/go-plugins-helpers/graphdriver/shim"
	"github.com/dragonflyoss/image-service/contrib/nydus_graphdriver/plugin/nydus"
)

func main() {
	handler := shim.NewHandlerFromGraphDriver(nydus.Init)
	handler.ServeUnix("plugin", 0)
}

View File

@ -0,0 +1,131 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package nydus

import (
	"context"
	"encoding/json"
	"io/ioutil"
	"net"
	"net/http"
	"os"
	"os/exec"
	"time"

	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

const (
	NydusdConfigPath = "/nydus/config.json"
	NydusdBin        = "/nydus/nydusd"
	NydusdSocket     = "/nydus/api.sock"
)

type Nydus struct {
	command *exec.Cmd
}

func New() *Nydus {
	return &Nydus{}
}

type DaemonInfo struct {
	ID    string `json:"id"`
	State string `json:"state"`
}

type errorMessage struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}
// getDaemonStatus queries the nydusd API over its unix socket and returns
// nil only when the daemon reports the RUNNING state.
func getDaemonStatus(socket string) error {
	transport := http.Transport{
		MaxIdleConns:          10,
		IdleConnTimeout:       10 * time.Second,
		ExpectContinueTimeout: 1 * time.Second,
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			dialer := &net.Dialer{
				Timeout:   5 * time.Second,
				KeepAlive: 5 * time.Second,
			}
			return dialer.DialContext(ctx, "unix", socket)
		},
	}

	client := http.Client{Transport: &transport, Timeout: 30 * time.Second}
	resp, err := client.Get("http://unix/api/v1/daemon")
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	b, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	if resp.StatusCode >= 400 {
		var message errorMessage
		json.Unmarshal(b, &message)
		return errors.Errorf("request error, status = %d, message = %s", resp.StatusCode, message.Message)
	}

	var info DaemonInfo
	if err = json.Unmarshal(b, &info); err != nil {
		return err
	}
	if info.State != "RUNNING" {
		return errors.Errorf("nydusd is not ready, current state: %s", info.State)
	}

	return nil
}
func (nydus *Nydus) Mount(bootstrap, mountpoint string) error {
	args := []string{
		"--apisock", NydusdSocket,
		"--log-level", "info",
		"--thread-num", "4",
		"--bootstrap", bootstrap,
		"--config", NydusdConfigPath,
		"--mountpoint", mountpoint,
	}

	cmd := exec.Command(NydusdBin, args...)
	logrus.Infof("Start nydusd. %s", cmd.String())

	// Redirect logs from the nydusd daemon to a proper place.
	cmd.Stderr = os.Stderr
	cmd.Stdout = os.Stdout

	if err := cmd.Start(); err != nil {
		return errors.Wrapf(err, "start nydusd")
	}
	nydus.command = cmd

	// Poll the daemon status and return an error if nydusd does not reach
	// the RUNNING state within the allowed time.
	ready := false
	for i := 0; i < 30; i++ {
		err := getDaemonStatus(NydusdSocket)
		if err == nil {
			ready = true
			break
		}
		logrus.Error(err)
		time.Sleep(100 * time.Millisecond)
	}

	if !ready {
		cmd.Process.Kill()
		cmd.Wait()
		return errors.Errorf("nydusd did not reach the RUNNING state in time")
	}

	return nil
}

View File

@ -0,0 +1,499 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package nydus

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path"
	"strings"

	"github.com/docker/docker/daemon/graphdriver"
	"github.com/docker/docker/pkg/archive"
	"github.com/docker/docker/pkg/containerfs"
	"github.com/docker/docker/pkg/directory"
	"github.com/docker/docker/pkg/idtools"
	"github.com/docker/docker/pkg/system"
	"github.com/moby/sys/mountinfo"
	"github.com/opencontainers/selinux/go-selinux/label"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"golang.org/x/sys/unix"
)

// With nydus image layers, there will not be many layers that need to be stacked.
const (
	diffDirName       = "diff"
	workDirName       = "work"
	mergedDirName     = "merged"
	lowerFile         = "lower"
	nydusDirName      = "nydus"
	nydusMetaRelapath = "image/image.boot"
	parentFile        = "parent"
)

var backingFs = "<unknown>"

func isFileExisted(file string) (bool, error) {
	if _, err := os.Stat(file); err == nil {
		return true, nil
	} else if os.IsNotExist(err) {
		return false, nil
	} else {
		return false, err
	}
}
// Driver contains information about the home directory and the list of
// active mounts that are created using this driver.
type Driver struct {
	home            string
	nydus           *Nydus
	NydusMountpoint string
	uidMaps         []idtools.IDMap
	gidMaps         []idtools.IDMap
	ctr             *graphdriver.RefCounter
}

func (d *Driver) dir(id string) string {
	return path.Join(d.home, id)
}

// Init returns a new nydus graphdriver rooted at home.
func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
	if err := os.MkdirAll(home, os.ModePerm); err != nil {
		return nil, err
	}

	fsMagic, err := graphdriver.GetFSMagic(home)
	if err != nil {
		return nil, err
	}
	if fsName, ok := graphdriver.FsNames[fsMagic]; ok {
		backingFs = fsName
	}

	// Check whether we are running over btrfs, aufs, zfs, overlay, or ecryptfs.
	switch fsMagic {
	case graphdriver.FsMagicBtrfs, graphdriver.FsMagicAufs, graphdriver.FsMagicZfs, graphdriver.FsMagicOverlay, graphdriver.FsMagicEcryptfs:
		logrus.Errorf("the nydus graphdriver is not supported over %s", backingFs)
		return nil, graphdriver.ErrIncompatibleFS
	}

	return &Driver{
		home:    home,
		uidMaps: uidMaps,
		gidMaps: gidMaps,
		ctr:     graphdriver.NewRefCounter(graphdriver.NewFsChecker(graphdriver.FsMagicOverlay))}, nil
}
// Status returns current driver information in a two dimensional string array.
// The output contains the "Backing Filesystem" used by this implementation.
func (d *Driver) Status() [][2]string {
	return [][2]string{
		{"Backing Filesystem", backingFs},
		// TODO: Add nydusd working status and version here.
		{"Nydusd", "TBD"},
	}
}

func (d *Driver) String() string {
	return "Nydus graph driver"
}

// GetMetadata returns metadata about the overlay driver such as the
// LowerDir, UpperDir, WorkDir and MergedDir used to store data.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
	dir := d.dir(id)
	if _, err := os.Stat(dir); err != nil {
		return nil, err
	}

	metadata := map[string]string{
		"WorkDir":   path.Join(dir, workDirName),
		"MergedDir": path.Join(dir, mergedDirName),
		"UpperDir":  path.Join(dir, diffDirName),
	}

	lowerDirs, err := d.getLowerDirs(id)
	if err != nil {
		return nil, err
	}
	if len(lowerDirs) > 0 {
		metadata["LowerDir"] = strings.Join(lowerDirs, ":")
	}

	return metadata, nil
}

// Cleanup releases any state created by the driver that should be cleaned up
// when the daemon is shut down. For now, we only have to terminate the nydusd
// daemon, which unmounts its own FUSE mount point on exit.
func (d *Driver) Cleanup() error {
	if d.nydus != nil {
		d.nydus.command.Process.Signal(os.Interrupt)
		d.nydus.command.Wait()
	}
	return nil
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {
	logrus.Infof("Create read write - id %s parent %s", id, parent)
	return d.Create(id, parent, opts)
}

// Create is used to create the upper, lower, and merged directories required
// for overlayfs for a given id. The parent filesystem is used to configure
// these directories for the overlay.
func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr error) {
	logrus.Infof("Create. id %s, parent %s", id, parent)

	dir := d.dir(id)
	rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
	if err != nil {
		return err
	}

	root := idtools.Identity{UID: rootUID, GID: rootGID}
	if err := idtools.MkdirAllAndChown(path.Dir(dir), 0700, root); err != nil {
		return err
	}
	if err := idtools.MkdirAndChown(dir, 0700, root); err != nil {
		return err
	}

	defer func() {
		// Clean up on failure.
		if retErr != nil {
			os.RemoveAll(dir)
		}
	}()

	if err := idtools.MkdirAndChown(path.Join(dir, diffDirName), 0755, root); err != nil {
		return err
	}

	// If there is no parent directory, we are done.
	if parent == "" {
		return nil
	}

	if err := idtools.MkdirAndChown(path.Join(dir, mergedDirName), 0700, root); err != nil {
		return err
	}
	if err := idtools.MkdirAndChown(path.Join(dir, workDirName), 0700, root); err != nil {
		return err
	}
	if err := ioutil.WriteFile(path.Join(dir, parentFile), []byte(parent), 0666); err != nil {
		return err
	}

	parentLowers, err := d.getLowerDirs(parent)
	if err != nil {
		return err
	}

	lowers := strings.Join(append(parentLowers, parent), ":")
	lowerFilePath := path.Join(d.dir(id), lowerFile)
	if len(lowers) > 0 {
		if err := ioutil.WriteFile(lowerFilePath, []byte(lowers), 0666); err != nil {
			return err
		}
	}

	return nil
}
func (d *Driver) getLowerDirs(id string) ([]string, error) {
	var lowersArray []string
	lowers, err := ioutil.ReadFile(path.Join(d.dir(id), lowerFile))
	if err == nil {
		lowersArray = strings.Split(string(lowers), ":")
	} else if !os.IsNotExist(err) {
		return nil, err
	}
	return lowersArray, nil
}

// Remove cleans up the directories that are created for this id.
func (d *Driver) Remove(id string) error {
	logrus.Infof("Remove %s", id)
	dir := d.dir(id)
	if err := system.EnsureRemoveAll(dir); err != nil && !os.IsNotExist(err) {
		return errors.Wrapf(err, "failed to remove %s", dir)
	}
	return nil
}
// Get creates and mounts the required file system for the given id and
// returns the mount path. The `id` is the mount-id.
func (d *Driver) Get(id, mountLabel string) (fs containerfs.ContainerFS, retErr error) {
	logrus.Infof("Mount layer - id %s, label %s", id, mountLabel)

	dir := d.dir(id)
	if _, err := os.Stat(dir); err != nil {
		return nil, err
	}

	var lowers []string
	lowers, retErr = d.getLowerDirs(id)
	if retErr != nil {
		return
	}

	for _, l := range lowers {
		if l == id {
			break
		}
		// When encountering a nydus layer, start the nydusd daemon so that
		// rafs is mounted and can be used as an overlay lower dir later.
		if isNydus, err := d.isNydusLayer(l); isNydus {
			if mounted, err := d.isNydusMounted(l); !mounted {
				bootstrapPath := path.Join(d.dir(l), diffDirName, nydusMetaRelapath)
				absMountpoint := path.Join(d.dir(l), nydusDirName)
				rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
				if err != nil {
					return nil, err
				}
				root := idtools.Identity{UID: rootUID, GID: rootGID}
				if err := idtools.MkdirAllAndChown(absMountpoint, 0700, root); err != nil {
					return nil, errors.Wrap(err, "failed in creating nydus mountpoint")
				}
				nydus := New()
				// Keep it, so we can wait for process termination later.
				d.nydus = nydus
				if e := nydus.Mount(bootstrapPath, absMountpoint); e != nil {
					return nil, e
				}
			} else if err != nil {
				return nil, err
			}
		} else if err != nil {
			return nil, err
		}

		// Prefer the nydus mount point as the lower dir; fall back to the
		// regular diff dir. Both paths are kept relative to the driver home.
		nydusRelaMountpoint := path.Join(l, nydusDirName)
		if _, err := os.Stat(path.Join(d.home, nydusRelaMountpoint)); err == nil {
			lowers = append(lowers, nydusRelaMountpoint)
		} else {
			diffDir := path.Join(l, diffDirName)
			if _, err := os.Stat(diffDir); err == nil {
				lowers = append(lowers, diffDir)
			}
		}
	}
	mergedDir := path.Join(dir, mergedDirName)
	if count := d.ctr.Increment(mergedDir); count > 1 {
		return containerfs.NewLocalContainerFS(mergedDir), nil
	}

	defer func() {
		if retErr != nil {
			if c := d.ctr.Decrement(mergedDir); c <= 0 {
				if err := unix.Unmount(mergedDir, 0); err != nil {
					logrus.Warnf("unmount error %v: %v", mergedDir, err)
				}
				if err := unix.Rmdir(mergedDir); err != nil && !os.IsNotExist(err) {
					logrus.Warnf("failed to remove %s: %v", id, err)
				}
			}
		}
	}()

	// Lower dirs are relative to the driver home to keep the mount data
	// short, so change the working directory before mounting.
	os.Chdir(d.home)

	upperDir := path.Join(id, diffDirName)
	workDir := path.Join(id, workDirName)
	opts := "lowerdir=" + strings.Join(lowers, ":") + ",upperdir=" + upperDir + ",workdir=" + workDir
	mountData := label.FormatMountLabel(opts, mountLabel)
	mount := unix.Mount
	mountTarget := mergedDir

	logrus.Infof("mount options %s, target %s", opts, mountTarget)

	rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
	if err != nil {
		return nil, err
	}
	if err := idtools.MkdirAndChown(mergedDir, 0700, idtools.Identity{UID: rootUID, GID: rootGID}); err != nil {
		return nil, err
	}

	pageSize := unix.Getpagesize()
	if len(mountData) > pageSize {
		return nil, fmt.Errorf("cannot mount layer, mount data too large: %d bytes", len(mountData))
	}

	if err := mount("overlay", mountTarget, "overlay", 0, mountData); err != nil {
		return nil, fmt.Errorf("error creating overlay mount to %s: %v", mergedDir, err)
	}

	// chown "workdir/work" to the remapped root UID/GID. Overlayfs inside a
	// user namespace requires this to move a directory from lower to upper.
	if err := os.Chown(path.Join(workDir, workDirName), rootUID, rootGID); err != nil {
		return nil, err
	}

	return containerfs.NewLocalContainerFS(mergedDir), nil
}
func (d *Driver) isNydusLayer(id string) (bool, error) {
	dir := d.dir(id)
	bootstrapPath := path.Join(dir, diffDirName, nydusMetaRelapath)
	return isFileExisted(bootstrapPath)
}

func (d *Driver) isNydusMounted(id string) (bool, error) {
	if isNydus, err := d.isNydusLayer(id); !isNydus {
		return isNydus, err
	}

	mp := path.Join(d.dir(id), nydusDirName)
	if existed, err := isFileExisted(mp); !existed {
		return existed, err
	}
	if mounted, err := mountinfo.Mounted(mp); !mounted {
		return mounted, err
	}

	return true, nil
}

// Put unmounts the mount path created for the given id.
func (d *Driver) Put(id string) error {
	if mounted, _ := d.isNydusMounted(id); mounted {
		if d.nydus != nil {
			// Signaling nydusd causes it to unmount itself before terminating,
			// so we don't have to invoke umount here. Note: this only unmounts
			// the nydusd FUSE mount point, not the overlay merged dir.
			d.nydus.command.Process.Signal(os.Interrupt)
			d.nydus.command.Wait()
		}
	}

	dir := d.dir(id)
	mountpoint := path.Join(dir, mergedDirName)
	if count := d.ctr.Decrement(mountpoint); count > 0 {
		return nil
	}

	if err := unix.Unmount(mountpoint, unix.MNT_DETACH); err != nil {
		return errors.Wrapf(err, "failed to unmount %s", mountpoint)
	}
	if err := unix.Rmdir(mountpoint); err != nil && !os.IsNotExist(err) {
		return errors.Wrapf(err, "failed to remove %s", mountpoint)
	}

	return nil
}
// Exists checks whether a layer directory exists for the given id.
func (d *Driver) Exists(id string) bool {
	logrus.Info("Execute `Exists()`")
	_, err := os.Stat(d.dir(id))
	return err == nil
}

// isParent reports whether the passed-in parent is the direct parent of the
// passed-in layer.
func (d *Driver) isParent(id, parent string) bool {
	lowers, err := d.getLowerDirs(id)
	if err != nil || (len(lowers) == 0 && parent != "") {
		return false
	}
	if parent == "" {
		return len(lowers) == 0
	}
	return parent == lowers[len(lowers)-1]
}

// ApplyDiff applies the new layer into a root.
func (d *Driver) ApplyDiff(id, parent string, diff io.Reader) (size int64, err error) {
	if !d.isParent(id, parent) {
		return 0, errors.Errorf("parent %s is not the true parent of id %s", parent, id)
	}

	applyDir := path.Join(d.dir(id), diffDirName)
	if err := archive.Unpack(diff, applyDir, &archive.TarOptions{
		UIDMaps:        d.uidMaps,
		GIDMaps:        d.gidMaps,
		WhiteoutFormat: archive.OverlayWhiteoutFormat,
		InUserNS:       false,
	}); err != nil {
		return 0, err
	}

	parentLowers, err := d.getLowerDirs(parent)
	if err != nil {
		return 0, err
	}

	newLowers := strings.Join(append(parentLowers, parent), ":")
	lowerFilePath := path.Join(d.dir(id), lowerFile)
	if len(newLowers) > 0 {
		if err := ioutil.WriteFile(lowerFilePath, []byte(newLowers), 0666); err != nil {
			return 0, err
		}
	}

	return directory.Size(context.TODO(), applyDir)
}

// DiffSize calculates the changes between the specified id and its parent
// and returns the size in bytes of the changes relative to its base
// filesystem directory.
func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
	return 0, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}

// Diff produces an archive of the changes between the specified layer and
// its parent layer, which may be "".
func (d *Driver) Diff(id, parent string) (io.ReadCloser, error) {
	return nil, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}

// Changes produces a list of changes between the specified layer and its
// parent layer. If parent is "", then all changes will be ADD changes.
func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
	return nil, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}

View File

@ -1,8 +0,0 @@
package main

import "fmt"

// This is a dummy program to work around goreleaser being unable to pre-build the binary.
func main() {
	fmt.Println("Hello, World!")
}

View File

@ -1,132 +0,0 @@
From 304939a8dca54edd9833b27f1ca48435ade2ed49 Mon Sep 17 00:00:00 2001
From: Xin Yin <yinxin.x@bytedance.com>
Date: Thu, 8 Sep 2022 10:52:08 +0800
Subject: [PATCH] cachefiles: optimize on-demand IO path with buffer IO
The cachefiles framework uses dio for local cache file filling
and reading, which may affect performance on the on-demand IO
path.

Change to use buffered IO for cache file filling, and first try
to find the data in the pagecache when reading cache files. After
the pagecache for the cache files is reclaimed, we no longer
suffer from the double caching issue.
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
---
fs/cachefiles/io.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 72 insertions(+), 2 deletions(-)
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 000a28f46e59..636491806ff8 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -11,9 +11,11 @@
#include <linux/uio.h>
#include <linux/falloc.h>
#include <linux/sched/mm.h>
+#include <linux/pagevec.h>
#include <trace/events/fscache.h>
#include "internal.h"
+
struct cachefiles_kiocb {
struct kiocb iocb;
refcount_t ki_refcnt;
@@ -67,6 +69,60 @@ static void cachefiles_read_complete(struct kiocb *iocb, long ret)
cachefiles_put_kiocb(ki);
}
+static void cachefiles_page_copy(struct cachefiles_kiocb *ki, struct iov_iter *iter)
+{
+ struct address_space *mapping = ki->iocb.ki_filp->f_mapping;
+ struct kiocb *iocb = &ki->iocb;
+ loff_t isize = i_size_read(mapping->host);
+ loff_t end = min_t(loff_t, isize, iocb->ki_pos + iov_iter_count(iter));
+ struct pagevec pv;
+ pgoff_t index;
+ unsigned int i;
+ bool writably_mapped;
+ int error = 0;
+
+ while (iocb->ki_pos < end && !error) {
+ index = iocb->ki_pos >> PAGE_SHIFT;
+ pv.nr = find_get_pages_contig(mapping, index, PAGEVEC_SIZE, pv.pages);
+
+ if (pv.nr == 0)
+ break;
+
+ writably_mapped = mapping_writably_mapped(mapping);
+
+ for (i = 0; i < pv.nr; i++) {
+ struct page *page = pv.pages[i];
+ unsigned int offset = iocb->ki_pos & ~PAGE_MASK;
+ unsigned int bytes = min_t(loff_t, end - iocb->ki_pos,
+ PAGE_SIZE - offset);
+ unsigned int copied;
+
+ if (page->index * PAGE_SIZE >= end)
+ break;
+
+ if (!PageUptodate(page)) {
+ error = -EFAULT;
+ break;
+ }
+
+ if (writably_mapped)
+ flush_dcache_page(page);
+
+ copied = copy_page_to_iter(page, offset, bytes, iter);
+
+ iocb->ki_pos += copied;
+ if (copied < bytes) {
+ error = -EFAULT;
+ break;
+ }
+ }
+
+ for (i = 0; i < pv.nr; i++)
+ put_page(pv.pages[i]);
+ }
+
+}
+
/*
* Initiate a read from the cache.
*/
@@ -155,8 +211,19 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
trace_cachefiles_read(object, file_inode(file), ki->iocb.ki_pos, len - skipped);
old_nofs = memalloc_nofs_save();
ret = cachefiles_inject_read_error();
- if (ret == 0)
+ if (ret == 0) {
+ // for ondemand mode, try to fill iter from pagecache first
+ if (cachefiles_in_ondemand_mode(object->volume->cache)) {
+ cachefiles_page_copy(ki, iter);
+ if (!iov_iter_count(iter)) {
+ memalloc_nofs_restore(old_nofs);
+ ki->was_async = false;
+ cachefiles_read_complete(&ki->iocb, len - skipped);
+ goto in_progress;
+ }
+ }
ret = vfs_iocb_iter_read(file, &ki->iocb, iter);
+ }
memalloc_nofs_restore(old_nofs);
switch (ret) {
case -EIOCBQUEUED:
@@ -308,7 +375,10 @@ int __cachefiles_write(struct cachefiles_object *object,
refcount_set(&ki->ki_refcnt, 2);
ki->iocb.ki_filp = file;
ki->iocb.ki_pos = start_pos;
- ki->iocb.ki_flags = IOCB_DIRECT | IOCB_WRITE;
+ if (cachefiles_in_ondemand_mode(cache))
+ ki->iocb.ki_flags = IOCB_WRITE;
+ else
+ ki->iocb.ki_flags = IOCB_DIRECT | IOCB_WRITE;
ki->iocb.ki_ioprio = get_current_ioprio();
ki->object = object;
ki->start = start_pos;
--
2.11.0

File diff suppressed because it is too large

View File

@ -1,19 +0,0 @@
[package]
name = "nydus-backend-proxy"
version = "0.2.0"
authors = ["The Nydus Developers"]
description = "A simple HTTP server to provide a fake container registry for nydusd"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
license = "Apache-2.0"

[dependencies]
rocket = "0.5.0"
http-range = "0.1.5"
nix = { version = "0.28", features = ["uio"] }
clap = "4.4"
once_cell = "1.19.0"
lazy_static = "1.4"

[workspace]

View File

@ -1,23 +0,0 @@
all: .format build

current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
rust_arch := $(shell uname -p)

.musl_target:
	$(eval CARGO_BUILD_FLAGS += --target ${rust_arch}-unknown-linux-musl)

.release_version:
	$(eval CARGO_BUILD_FLAGS += --release)

.format:
	cargo fmt -- --check

build:
	cargo build $(CARGO_BUILD_FLAGS)

release: .format .release_version build

static-release: .musl_target .format .release_version build

clean:
	cargo clean

View File

@ -1,104 +0,0 @@
# nydus-backend-proxy
A simple HTTP server to serve a local directory as blob backend for nydusd.
In some scenarios, such as [sealer](https://github.com/alibaba/sealer), nydus is used to speed up cluster image distribution, but no registry (OCI distribution) or OSS service is available for blob storage. In that case we need a simple HTTP server that serves a local directory as the blob backend for nydusd. This server exposes an OCI-distribution-like API to handle the HTTP HEAD and range GET requests that nydusd issues to check and fetch blobs.
## API definitions
The server supports the following APIs:
```bash
HEAD /$namespace/$repo/blobs/sha256:xxx ### Check Blob
GET /$namespace/$repo/blobs/sha256:xxx ### Fetch Blob
```
### Check Blob
```
HEAD /v2/<name>/blobs/<digest>
```
On Success: OK
```
200 OK
Content-Length: <length of blob>
Docker-Content-Digest: <digest>
```
### Fetch Blob
```
GET /v2/<name>/blobs/<digest>
Host: <registry host>
```
On Success: OK
```
200 OK
Content-Length: <length>
Docker-Content-Digest: <digest>
Content-Type: application/octet-stream
<blob binary data>
```
On Failure: Not Found
```
404 Not Found
```
### Fetch Blob in Chunks
```
GET /v2/<name>/blobs/<digest>
Host: <registry host>
Range: bytes=<start>-<end>
```
On Success: OK
```
200 OK
Content-Length: <length>
Docker-Content-Digest: <digest>
Content-Range: bytes <start>-<end>/<size>
Content-Type: application/octet-stream
<blob binary data>
```
On Failure: Not Found
```
404 Not Found
```
On Failure: Range Not Satisfiable
```
416 Range Not Satisfiable
```
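For a quick smoke test against a running proxy, the endpoints above can be exercised with `curl`. This is a minimal sketch: the `v2/myrepo` path segments stand in for `$namespace/$repo`, the address assumes the proxy listens on 127.0.0.1:8000, and `<digest>` is a placeholder for a real blob file name in the served directory.
```bash
# Check Blob: expect 200 with Content-Length and Docker-Content-Digest headers.
curl -I http://127.0.0.1:8000/v2/myrepo/blobs/sha256:<digest>

# Fetch Blob: download the whole blob.
curl -o blob.bin http://127.0.0.1:8000/v2/myrepo/blobs/sha256:<digest>

# Fetch Blob in Chunks: request the first 4 KiB via a Range header.
curl -H "Range: bytes=0-4095" -o chunk.bin \
    http://127.0.0.1:8000/v2/myrepo/blobs/sha256:<digest>
```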
## How to use
### Run nydus-backend-proxy
```bash
./nydus-backend-proxy --blobsdir /path/to/nydus/blobs/dir
```
### Nydusd config
Reuse the nydusd registry backend:
```bash
# cat httpserver.json
{
  "device": {
    "backend": {
      "type": "registry",
      "config": {
        "scheme": "http",
        "host": "xxx.xxx.xxx.xxx:8000",
        "repo": "xxxx"
      }
    },
    "cache": {
      "type": "blobcache",
      "config": {
        "work_dir": "./cache"
      }
    }
  },
  "mode": "direct",
  "digest_validate": false,
  "enable_xattr": true,
  "fs_prefetch": {
    "enable": true,
    "threads_count": 2,
    "merging_size": 131072,
    "bandwidth_rate": 10485760
  }
}
```
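With the config above saved as `httpserver.json`, nydusd can then be pointed at the proxy. A minimal sketch, assuming a rafs bootstrap at `./image.boot` and a mountpoint at `/mnt/nydus` (both placeholders); the flags mirror the nydusd invocation used by the graphdriver earlier in this diff:
```bash
# Mount the rafs image, fetching blobs from nydus-backend-proxy over HTTP.
./nydusd \
    --config httpserver.json \
    --bootstrap ./image.boot \
    --mountpoint /mnt/nydus \
    --log-level info
```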

View File

@ -1,301 +0,0 @@
// Copyright (C) 2022 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
use std::collections::HashMap;
use std::env;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::{fs, io};

use clap::*;
use http_range::HttpRange;
use lazy_static::lazy_static;
use nix::sys::uio;
use rocket::fs::{FileServer, NamedFile};
use rocket::futures::lock::{Mutex, MutexGuard};
use rocket::http::Status;
use rocket::request::{self, FromRequest, Outcome};
use rocket::response::{self, stream::ReaderStream, Responder};
use rocket::*;

lazy_static! {
    static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend {
        root: PathBuf::default(),
        blobs: HashMap::new()
    });
}

async fn blob_backend_mut() -> MutexGuard<'static, BlobBackend> {
    BLOB_BACKEND.lock().await
}

async fn init_blob_backend(root: &Path) {
    let mut b = BlobBackend {
        root: root.to_path_buf(),
        blobs: HashMap::new(),
    };
    b.populate_blobs_map();
    *BLOB_BACKEND.lock().await = b;
}

#[derive(Debug)]
struct BlobBackend {
    root: PathBuf,
    blobs: HashMap<String, Arc<fs::File>>,
}
impl BlobBackend {
    fn populate_blobs_map(&mut self) {
        for entry in self
            .root
            .read_dir()
            .expect("read blobsdir failed")
            .flatten()
        {
            let filepath = entry.path();
            if filepath.is_file() {
                // A collaborating system should put files with valid names which
                // can also be converted to UTF-8.
                let digest = filepath.file_name().unwrap().to_string_lossy();
                if self.blobs.contains_key(digest.as_ref()) {
                    continue;
                }
                match fs::File::open(&filepath) {
                    Ok(f) => {
                        self.blobs.insert(digest.into_owned(), Arc::new(f));
                    }
                    Err(e) => warn!("failed to open file {}, {}", digest, e),
                }
            } else {
                debug!("{}: not a regular file", filepath.display());
            }
        }
    }
}

#[derive(Debug)]
struct HeaderData {
    _host: String,
    range: String,
}

#[rocket::async_trait]
impl<'r> FromRequest<'r> for HeaderData {
    type Error = Status;

    async fn from_request(req: &'r Request<'_>) -> request::Outcome<HeaderData, Self::Error> {
        let headers = req.headers();
        let _host = headers.get_one("Host").unwrap_or_default().to_string();
        let range = headers.get_one("Range").unwrap_or_default().to_string();
        Outcome::Success(HeaderData { _host, range })
    }
}
#[rocket::head("/<_namespace>/<_repo>/blobs/<digest>")]
async fn check(
    _namespace: PathBuf,
    _repo: PathBuf,
    digest: String,
) -> Result<Option<FileStream>, Status> {
    if !digest.starts_with("sha256:") {
        return Err(Status::BadRequest);
    }

    // Trim "sha256:" prefix
    let dis = &digest[7..];
    let backend = blob_backend_mut();
    let path = backend.await.root.join(dis);

    NamedFile::open(path)
        .await
        .map_err(|_e| Status::NotFound)
        .map(|nf| Some(FileStream(nf, dis.to_string())))
}

/* fetch blob response
 * NamedFile: blob data
 * String: Docker-Content-Digest
 */
struct FileStream(NamedFile, String);

impl<'r> Responder<'r, 'static> for FileStream {
    fn respond_to(self, req: &'r Request<'_>) -> response::Result<'static> {
        let res = self.0.respond_to(req)?;
        Response::build_from(res)
            .raw_header("Docker-Content-Digest", self.1)
            .raw_header("Content-Type", "application/octet-stream")
            .ok()
    }
}
/* fetch blob part response (stream)
 * dis: Docker-Content-Digest
 * start & len: "Content-Range: bytes <start>-<end>/<size>"
 * file: the opened blob file
 */
struct RangeStream {
    dis: String,
    start: u64,
    len: u64,
    file: Arc<fs::File>,
}

impl RangeStream {
    fn get_rangestr(&self) -> String {
        let endpos = self.start + self.len - 1;
        format!("bytes {}-{}/{}", self.start, endpos, self.len)
    }
}

impl<'r> Responder<'r, 'static> for RangeStream {
    fn respond_to(self, _req: &'r Request<'_>) -> response::Result<'static> {
        const BUFSIZE: usize = 4096;
        let mut buf = vec![0; BUFSIZE];
        let mut read = 0u64;
        let startpos = self.start as i64;
        let size = self.len;
        let file = self.file.clone();

        Response::build()
            .streamed_body(ReaderStream! {
                while read < size {
                    match uio::pread(file.as_ref(), &mut buf, startpos + read as i64) {
                        Ok(mut n) => {
                            n = std::cmp::min(n, (size - read) as usize);
                            read += n as u64;
                            if n == 0 {
                                break;
                            } else if n < BUFSIZE {
                                yield io::Cursor::new(buf[0..n].to_vec());
                            } else {
                                yield io::Cursor::new(buf.clone());
                            }
                        }
                        Err(err) => {
                            eprintln!("ReaderStream Error: {}", err);
                            break;
                        }
                    }
                }
            })
            .raw_header("Content-Range", self.get_rangestr())
            .raw_header("Docker-Content-Digest", self.dis)
            .raw_header("Content-Type", "application/octet-stream")
            .ok()
    }
}
#[derive(Responder)]
enum StoredData {
    AllFile(FileStream),
    Range(RangeStream),
}

#[get("/<_namespace>/<_repo>/blobs/<digest>")]
async fn fetch(
    _namespace: PathBuf,
    _repo: PathBuf,
    digest: String,
    header_data: HeaderData,
) -> Result<StoredData, Status> {
    if !digest.starts_with("sha256:") {
        return Err(Status::BadRequest);
    }

    // Trim "sha256:" prefix
    let dis = &digest[7..];

    // If there is no Range in the request header, return the whole blob.
    if header_data.range.is_empty() {
        let filepath = blob_backend_mut().await.root.join(dis);
        NamedFile::open(filepath)
            .await
            .map_err(|_e| Status::NotFound)
            .map(|nf| StoredData::AllFile(FileStream(nf, dis.to_string())))
    } else {
        let mut guard = blob_backend_mut().await;
        let blob_file = if let Some(f) = guard.blobs.get(dis) {
            f.clone()
        } else {
            trace!("Blob object not found: {}", dis);
            // Re-populate the blobs map with `readdir()` to pick up newly added files.
            guard.populate_blobs_map();
            trace!("re-populating to search for blob {}", dis);
            guard.blobs.get(dis).cloned().ok_or_else(|| {
                error!("Blob {} still not found after re-scan!", dis);
                Status::NotFound
            })?
        };
        drop(guard);

        let metadata = match blob_file.metadata() {
            Ok(meta) => meta,
            Err(e) => {
                eprintln!("Get file metadata failed! Error: {}", e);
                return Err(Status::InternalServerError);
            }
        };
        let ranges = match HttpRange::parse(&header_data.range, metadata.len()) {
            Ok(r) => r,
            Err(e) => {
                eprintln!("HttpRange parse failed! Error: {:#?}", e);
                return Err(Status::RangeNotSatisfiable);
            }
        };

        let start_pos = ranges[0].start as u64;
        let size = ranges[0].length;
        Ok(StoredData::Range(RangeStream {
            dis: dis.to_string(),
            len: size,
            start: start_pos,
            file: blob_file,
        }))
    }
}
#[rocket::main]
async fn main() {
    let cmd = Command::new("nydus-backend-proxy")
        .author(env!("CARGO_PKG_AUTHORS"))
        .version(env!("CARGO_PKG_VERSION"))
        .about("A simple HTTP server to provide a fake container registry for nydusd.")
        .arg(
            Arg::new("blobsdir")
                .short('b')
                .long("blobsdir")
                .required(true)
                .help("path to directory hosting nydus blob files"),
        )
        .help_template(
            "\
{before-help}{name} {version}
{author-with-newline}{about-with-newline}
{usage-heading} {usage}

{all-args}{after-help}
",
        )
        .get_matches();

    // Safe to unwrap() because `blobsdir` takes a value.
    let path = cmd
        .get_one::<String>("blobsdir")
        .expect("required argument");
    init_blob_backend(Path::new(path)).await;

    if let Err(e) = rocket::build()
        .mount("/", rocket::routes![check, fetch])
        .mount("/", FileServer::from(&path))
        .launch()
        .await
    {
        error!("Rocket failed to launch, {:#?}", e);
        std::process::exit(-1);
    }
}

Some files were not shown because too many files have changed in this diff.