Compare commits
30 Commits
Author | SHA1 | Date |
---|---|---|
|
f7d513844d | |
|
29dc8ec5c8 | |
|
7886e1868f | |
|
e1dffec213 | |
|
cc62dd6890 | |
|
d140d60bea | |
|
f323c7f6e3 | |
|
5c8299c7f7 | |
|
14c0062cee | |
|
d3bbc3e509 | |
|
80f80dda0e | |
|
a26c7bf99c | |
|
72b1955387 | |
|
d589292ebc | |
|
344a208e86 | |
|
9645820222 | |
|
d36295a21e | |
|
c048fcc45f | |
|
67bf8b8283 | |
|
d74629233b | |
|
21206e75b3 | |
|
c288169c1a | |
|
23fdda1020 | |
|
9b915529a9 | |
|
96c3e5569a | |
|
44069d6091 | |
|
31c8e896f0 | |
|
8593498dbd | |
|
6161868e41 | |
|
871e1c6e4f |
|
@ -0,0 +1,250 @@
|
|||
# GitHub Copilot Instructions for Nydus
|
||||
|
||||
## Project Overview
|
||||
|
||||
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
|
||||
|
||||
### Key Components
|
||||
|
||||
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
|
||||
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
|
||||
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
|
||||
- **nydusctl**: CLI client for managing and querying nydusd daemon
|
||||
- **nydus-service**: Library crate for integrating Nydus services into other projects
|
||||
|
||||
## Architecture Guidelines
|
||||
|
||||
### Crate Structure
|
||||
```
|
||||
- api/ # Nydus Image Service APIs and data structures
|
||||
- builder/ # Image building and conversion logic
|
||||
- rafs/ # RAFS filesystem implementation
|
||||
- service/ # Daemon and service management framework
|
||||
- storage/ # Core storage subsystem with backends and caching
|
||||
- utils/ # Common utilities and helper functions
|
||||
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
|
||||
```
|
||||
|
||||
### Key Technologies
|
||||
- **Language**: Rust with memory safety focus
|
||||
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
|
||||
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
|
||||
- **Compression**: LZ4, Gzip, Zstd
|
||||
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
|
||||
|
||||
## Code Style and Patterns
|
||||
|
||||
### Rust Conventions
|
||||
- Use `#![deny(warnings)]` in all binary crates
|
||||
- Follow standard Rust naming conventions (snake_case, PascalCase)
|
||||
- Prefer `anyhow::Result` for error handling in applications
|
||||
- Use custom error types with `thiserror` for libraries
|
||||
- Apply `#[macro_use]` for frequently used external crates like `log`
|
||||
- Always format the code with `cargo fmt`
|
||||
- Use `clippy` for linting and follow its suggestions
|
||||
|
||||
### Error Handling
|
||||
```rust
|
||||
// Prefer anyhow for applications
|
||||
use anyhow::{bail, Context, Result};
|
||||
|
||||
// Use custom error types for libraries
|
||||
use thiserror::Error;
|
||||
|
||||
#[derive(Error, Debug)]
|
||||
pub enum NydusError {
|
||||
#[error("Invalid arguments: {0}")]
|
||||
InvalidArguments(String),
|
||||
#[error("IO error: {0}")]
|
||||
Io(#[from] std::io::Error),
|
||||
}
|
||||
```
|
||||
|
||||
### Logging Patterns
|
||||
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
|
||||
- Include context in error messages: `.with_context(|| "description")`
|
||||
- Use `info!`, `warn!`, `error!` macros consistently
|
||||
|
||||
### Configuration Management
|
||||
- Use `serde` for JSON configuration serialization/deserialization
|
||||
- Support both file-based and environment variable configuration
|
||||
- Validate configurations at startup with clear error messages
|
||||
- Follow the `ConfigV2` pattern for versioned configurations
|
||||
|
||||
## Development Guidelines
|
||||
|
||||
### Storage Backend Development
|
||||
- When implementing new storage backends:
|
||||
- - Implement the `BlobBackend` trait
|
||||
- - Support timeout, retry, and connection management
|
||||
- - Add configuration in the backend config structure
|
||||
- - Consider proxy support for high availability
|
||||
- - Implement proper error handling and logging
|
||||
|
||||
### Daemon Service Development
|
||||
- Use the `NydusDaemon` trait for service implementations
|
||||
- Support save/restore for hot upgrade functionality
|
||||
- Implement proper state machine transitions
|
||||
- Use `DaemonController` for lifecycle management
|
||||
|
||||
### RAFS Filesystem Features
|
||||
- Support both RAFS v5 and v6 formats
|
||||
- Implement chunk-level deduplication
|
||||
- Handle prefetch optimization for container startup
|
||||
- Support overlay filesystem operations
|
||||
- Maintain POSIX compatibility
|
||||
|
||||
### API Development
|
||||
- Use versioned APIs (v1, v2) with backward compatibility
|
||||
- Implement HTTP endpoints with proper error handling
|
||||
- Support both Unix socket and TCP communication
|
||||
- Follow OpenAPI specification patterns
|
||||
|
||||
## Testing Patterns
|
||||
|
||||
### Unit Tests
|
||||
- Test individual functions and modules in isolation
|
||||
- Use `#[cfg(test)]` modules within source files
|
||||
- Mock external dependencies when necessary
|
||||
- Focus on error conditions and edge cases
|
||||
|
||||
### Integration Tests
|
||||
- Place integration tests in `tests/` directory
|
||||
- Test complete workflows and component interactions
|
||||
- Use temporary directories for filesystem operations
|
||||
- Clean up resources properly in test teardown
|
||||
|
||||
### Smoke Tests
|
||||
- Located in `smoke/` directory using Go
|
||||
- Test real-world scenarios with actual images
|
||||
- Verify performance and functionality
|
||||
- Use Bats framework for shell-based testing
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### I/O Optimization
|
||||
- Use async I/O patterns with Tokio
|
||||
- Implement prefetching for predictable access patterns
|
||||
- Optimize chunk size (default 1MB) for workload characteristics
|
||||
- Consider io-uring for high-performance scenarios
|
||||
|
||||
### Memory Management
|
||||
- Use `Arc<T>` for shared ownership of large objects
|
||||
- Implement lazy loading for metadata structures
|
||||
- Consider memory mapping for large files
|
||||
- Profile memory usage in performance-critical paths
|
||||
|
||||
### Caching Strategy
|
||||
- Implement blob caching with configurable backends
|
||||
- Support compression in cache to save space
|
||||
- Use chunk-level caching with efficient eviction policies
|
||||
- Consider cache warming strategies for frequently accessed data
|
||||
|
||||
## Security Guidelines
|
||||
|
||||
### Data Integrity
|
||||
- Implement end-to-end digest validation
|
||||
- Support multiple hash algorithms (SHA256, Blake3)
|
||||
- Verify chunk integrity on read operations
|
||||
- Detect and prevent supply chain attacks
|
||||
|
||||
### Authentication
|
||||
- Support registry authentication (basic auth, bearer tokens)
|
||||
- Handle credential rotation and refresh
|
||||
- Implement secure credential storage
|
||||
- Support mutual TLS for backend connections
|
||||
|
||||
## Specific Code Patterns
|
||||
|
||||
### Configuration Loading
|
||||
```rust
|
||||
// Standard pattern for configuration loading
|
||||
let config = match config_path {
|
||||
Some(path) => ConfigV2::from_file(path)?,
|
||||
None => ConfigV2::default(),
|
||||
};
|
||||
|
||||
// Environment variable override
|
||||
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
|
||||
config.update_registry_auth_info(&auth);
|
||||
}
|
||||
```
|
||||
|
||||
### Daemon Lifecycle
|
||||
```rust
|
||||
// Standard daemon initialization pattern
|
||||
let daemon = create_daemon(config, build_info)?;
|
||||
DAEMON_CONTROLLER.set_daemon(daemon);
|
||||
|
||||
// Event loop management
|
||||
if DAEMON_CONTROLLER.is_active() {
|
||||
DAEMON_CONTROLLER.run_loop();
|
||||
}
|
||||
|
||||
// Graceful shutdown
|
||||
DAEMON_CONTROLLER.shutdown();
|
||||
```
|
||||
|
||||
### Blob Access Pattern
|
||||
```rust
|
||||
// Standard blob read pattern
|
||||
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
|
||||
let blob_device = factory.get_device(&blob_info)?;
|
||||
blob_device.read(&mut bio)?;
|
||||
```
|
||||
|
||||
## Documentation Standards
|
||||
|
||||
### Code Documentation
|
||||
- Document all public APIs with `///` comments
|
||||
- Include examples in documentation
|
||||
- Document safety requirements for unsafe code
|
||||
- Explain complex algorithms and data structures
|
||||
|
||||
### Architecture Documentation
|
||||
- Maintain design documents in `docs/` directory
|
||||
- Update documentation when adding new features
|
||||
- Include diagrams for complex interactions
|
||||
- Document configuration options comprehensively
|
||||
|
||||
### Release Notes
|
||||
- Document breaking changes clearly
|
||||
- Include migration guides for major versions
|
||||
- Highlight performance improvements
|
||||
- List new features and bug fixes
|
||||
|
||||
## Container and Cloud Native Patterns
|
||||
|
||||
### OCI Compatibility
|
||||
- Maintain compatibility with OCI image spec
|
||||
- Support standard container runtimes (runc, Kata)
|
||||
- Implement proper layer handling and manifest generation
|
||||
- Support multi-architecture images
|
||||
|
||||
### Kubernetes Integration
|
||||
- Design for Kubernetes CRI integration
|
||||
- Support containerd snapshotter pattern
|
||||
- Handle pod lifecycle events appropriately
|
||||
- Implement proper resource cleanup
|
||||
|
||||
### Cloud Storage Integration
|
||||
- Support major cloud providers (AWS S3, Alibaba OSS)
|
||||
- Implement proper credential management
|
||||
- Handle network interruptions gracefully
|
||||
- Support cross-region replication patterns
|
||||
|
||||
## Build and Release
|
||||
|
||||
### Build Configuration
|
||||
- Use `Cargo.toml` workspace configuration
|
||||
- Support cross-compilation for multiple architectures
|
||||
- Implement proper feature flags for optional components
|
||||
- Use consistent dependency versioning
|
||||
|
||||
### Release Process
|
||||
- Tag releases with semantic versioning
|
||||
- Generate release binaries for supported platforms
|
||||
- Update documentation with release notes
|
||||
- Validate release artifacts before publishing
|
||||
|
||||
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.
|
|
@ -0,0 +1,45 @@
|
|||
name: Miri Test
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: ["**", "stable/**"]
|
||||
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
|
||||
pull_request:
|
||||
branches: ["**", "stable/**"]
|
||||
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
|
||||
schedule:
|
||||
# Run daily sanity check at 03:00 clock UTC
|
||||
- cron: "0 03 * * *"
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
CARGO_TERM_COLOR: always
|
||||
|
||||
jobs:
|
||||
nydus-unit-test-with-miri:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v4
|
||||
- name: Rust Cache
|
||||
uses: Swatinem/rust-cache@v2
|
||||
with:
|
||||
cache-on-failure: true
|
||||
shared-key: Linux-cargo-amd64
|
||||
save-if: ${{ github.ref == 'refs/heads/master' }}
|
||||
- name: Install cargo nextest
|
||||
uses: taiki-e/install-action@nextest
|
||||
- name: Fscache Setup
|
||||
run: sudo bash misc/fscache/setup.sh
|
||||
- name: Install Miri
|
||||
run: |
|
||||
rustup toolchain install nightly --component miri
|
||||
rustup override set nightly
|
||||
cargo miri setup
|
||||
- name: Unit Test with Miri
|
||||
run: |
|
||||
CARGO_HOME=${HOME}/.cargo
|
||||
CARGO_BIN=$(which cargo)
|
||||
RUSTUP_BIN=$(which rustup)
|
||||
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
|
||||
grep -C 2 'Undefined Behavior' miri-ut.log
|
|
@ -239,3 +239,87 @@ jobs:
|
|||
generate_release_notes: true
|
||||
files: |
|
||||
${{ env.tarballs }}
|
||||
|
||||
|
||||
goreleaser:
|
||||
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
|
||||
strategy:
|
||||
matrix:
|
||||
arch: [amd64, arm64]
|
||||
os: [linux]
|
||||
needs: [nydus-linux, contrib-linux]
|
||||
permissions:
|
||||
contents: write
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 60
|
||||
outputs:
|
||||
hashes: ${{ steps.hash.outputs.hashes }}
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
|
||||
with:
|
||||
fetch-depth: 0
|
||||
submodules: recursive
|
||||
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@v5
|
||||
with:
|
||||
go-version-file: 'go.work'
|
||||
cache-dependency-path: "**/*.sum"
|
||||
- name: download artifacts
|
||||
uses: actions/download-artifact@v4
|
||||
with:
|
||||
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
|
||||
merge-multiple: true
|
||||
path: nydus-static
|
||||
- name: prepare context
|
||||
run: |
|
||||
chmod +x nydus-static/*
|
||||
export GOARCH=${{ matrix.arch }}
|
||||
echo "GOARCH: $GOARCH"
|
||||
sh ./goreleaser.sh
|
||||
- name: Check GoReleaser config
|
||||
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
|
||||
with:
|
||||
version: latest
|
||||
args: check
|
||||
|
||||
- name: Run GoReleaser
|
||||
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
|
||||
id: run-goreleaser
|
||||
with:
|
||||
version: latest
|
||||
args: release --clean
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Generate subject
|
||||
id: hash
|
||||
env:
|
||||
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
|
||||
run: |
|
||||
set -euo pipefail
|
||||
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
|
||||
if test "$hashes" = ""; then # goreleaser < v1.13.0
|
||||
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
|
||||
hashes=$(cat $checksum_file | base64 -w0)
|
||||
fi
|
||||
echo "hashes=$hashes" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Set tag output
|
||||
id: tag
|
||||
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
|
||||
|
||||
provenance:
|
||||
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
|
||||
needs: [goreleaser]
|
||||
permissions:
|
||||
actions: read # To read the workflow path.
|
||||
id-token: write # To sign the provenance.
|
||||
contents: write # To add assets to a release.
|
||||
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
|
||||
with:
|
||||
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
|
||||
upload-assets: true # upload to a new release
|
||||
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
|
||||
draft-release: true
|
|
@ -57,7 +57,7 @@ jobs:
|
|||
- name: Lint
|
||||
uses: golangci/golangci-lint-action@v6
|
||||
with:
|
||||
version: v1.61
|
||||
version: v1.64
|
||||
working-directory: ${{ matrix.path }}
|
||||
args: --timeout=10m --verbose
|
||||
|
||||
|
@ -193,6 +193,21 @@ jobs:
|
|||
with:
|
||||
go-version-file: 'go.work'
|
||||
cache-dependency-path: "**/*.sum"
|
||||
- name: Free Disk Space
|
||||
uses: jlumbroso/free-disk-space@main
|
||||
with:
|
||||
# this might remove tools that are actually needed,
|
||||
# if set to "true" but frees about 6 GB
|
||||
tool-cache: false
|
||||
|
||||
# all of these default to true, but feel free to set to
|
||||
# "false" if necessary for your workflow
|
||||
android: true
|
||||
dotnet: true
|
||||
haskell: true
|
||||
large-packages: true
|
||||
docker-images: true
|
||||
swap-storage: true
|
||||
- name: Integration Test
|
||||
run: |
|
||||
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
|
||||
|
@ -213,7 +228,7 @@ jobs:
|
|||
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
|
||||
done
|
||||
|
||||
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.61.0
|
||||
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
|
||||
sudo -E make smoke-only
|
||||
|
||||
nydus-unit-test:
|
||||
|
@ -235,7 +250,8 @@ jobs:
|
|||
run: |
|
||||
CARGO_HOME=${HOME}/.cargo
|
||||
CARGO_BIN=$(which cargo)
|
||||
sudo -E CARGO=${CARGO_BIN} make ut-nextest
|
||||
RUSTUP_BIN=$(which rustup)
|
||||
sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
|
||||
|
||||
contrib-unit-test-coverage:
|
||||
runs-on: ubuntu-latest
|
||||
|
@ -277,7 +293,8 @@ jobs:
|
|||
run: |
|
||||
CARGO_HOME=${HOME}/.cargo
|
||||
CARGO_BIN=$(which cargo)
|
||||
sudo -E CARGO=${CARGO_BIN} make coverage-codecov
|
||||
RUSTUP_BIN=$(which rustup)
|
||||
sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
|
||||
- name: Upload nydus coverage file
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
|
|
|
@ -0,0 +1,31 @@
|
|||
name: Close stale issues and PRs
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
schedule:
|
||||
- cron: "0 0 * * *"
|
||||
|
||||
permissions:
|
||||
issues: write
|
||||
pull-requests: write
|
||||
|
||||
jobs:
|
||||
stale:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
|
||||
id: stale
|
||||
with:
|
||||
delete-branch: true
|
||||
days-before-close: 7
|
||||
days-before-stale: 60
|
||||
days-before-pr-close: 7
|
||||
days-before-pr-stale: 60
|
||||
stale-issue-label: "stale"
|
||||
exempt-issue-labels: bug,wip
|
||||
exempt-pr-labels: bug,wip
|
||||
exempt-all-milestones: true
|
||||
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
|
||||
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
|
||||
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
|
||||
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'
|
|
@ -7,3 +7,8 @@
|
|||
__pycache__
|
||||
.DS_Store
|
||||
go.work.sum
|
||||
dist/
|
||||
nydus-static/
|
||||
.goreleaser.yml
|
||||
metadata.db
|
||||
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a
|
||||
|
|
|
@ -209,6 +209,17 @@ dependencies = [
|
|||
"generic-array",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "bstr"
|
||||
version = "1.9.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "c48f0051a4b4c5e0b6d365cd04af53aeaa209e3cc15ec2cdb69e73cc87fbd0dc"
|
||||
dependencies = [
|
||||
"memchr",
|
||||
"regex-automata",
|
||||
"serde",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "bumpalo"
|
||||
version = "3.16.0"
|
||||
|
@ -234,7 +245,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
|
|||
checksum = "190baaad529bcfbde9e1a19022c42781bdb6ff9de25721abdb8fd98c0807730b"
|
||||
dependencies = [
|
||||
"libc",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
|
@ -398,7 +409,7 @@ version = "0.1.1"
|
|||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "543711b94b4bc1437d2ebb45f856452e96a45a67ab39f8dcf8c887c2a3701004"
|
||||
dependencies = [
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
|
@ -409,7 +420,7 @@ checksum = "0315cb247b6726d92c74b955b7d79b4be475be58335fee120f19292740132eb1"
|
|||
dependencies = [
|
||||
"displaydoc",
|
||||
"libc",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
"versionize",
|
||||
"versionize_derive",
|
||||
]
|
||||
|
@ -542,7 +553,7 @@ dependencies = [
|
|||
"log",
|
||||
"nu-ansi-term",
|
||||
"regex",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
|
@ -687,6 +698,85 @@ version = "0.31.1"
|
|||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
|
||||
|
||||
[[package]]
|
||||
name = "gix-attributes"
|
||||
version = "0.25.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "e4e25825e0430aa11096f8b65ced6780d4a96a133f81904edceebb5344c8dd7f"
|
||||
dependencies = [
|
||||
"bstr",
|
||||
"gix-glob",
|
||||
"gix-path",
|
||||
"gix-quote",
|
||||
"gix-trace",
|
||||
"kstring",
|
||||
"smallvec",
|
||||
"thiserror 2.0.11",
|
||||
"unicode-bom",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "gix-features"
|
||||
version = "0.41.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "016d6050219458d14520fe22bdfdeb9cb71631dec9bc2724767c983f60109634"
|
||||
dependencies = [
|
||||
"gix-trace",
|
||||
"libc",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "gix-glob"
|
||||
version = "0.19.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "20972499c03473e773a2099e5fd0c695b9b72465837797a51a43391a1635a030"
|
||||
dependencies = [
|
||||
"bitflags 2.8.0",
|
||||
"bstr",
|
||||
"gix-features",
|
||||
"gix-path",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "gix-path"
|
||||
version = "0.10.15"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f910668e2f6b2a55ff35a1f04df88a1a049f7b868507f4cbeeaa220eaba7be87"
|
||||
dependencies = [
|
||||
"bstr",
|
||||
"gix-trace",
|
||||
"home",
|
||||
"once_cell",
|
||||
"thiserror 2.0.11",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "gix-quote"
|
||||
version = "0.5.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1b005c550bf84de3b24aa5e540a23e6146a1c01c7d30470e35d75a12f827f969"
|
||||
dependencies = [
|
||||
"bstr",
|
||||
"gix-utils",
|
||||
"thiserror 2.0.11",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "gix-trace"
|
||||
version = "0.1.12"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "7c396a2036920c69695f760a65e7f2677267ccf483f25046977d87e4cb2665f7"
|
||||
|
||||
[[package]]
|
||||
name = "gix-utils"
|
||||
version = "0.2.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "189f8724cf903e7fd57cfe0b7bc209db255cacdcb22c781a022f52c3a774f8d0"
|
||||
dependencies = [
|
||||
"fastrand",
|
||||
"unicode-normalization",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "glob"
|
||||
version = "0.3.2"
|
||||
|
@ -776,6 +866,15 @@ dependencies = [
|
|||
"digest",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "home"
|
||||
version = "0.5.9"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "e3d1354bf6b7235cb4a0576c2619fd4ed18183f689b12b006a0ee7329eeff9a5"
|
||||
dependencies = [
|
||||
"windows-sys 0.52.0",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "http"
|
||||
version = "0.2.12"
|
||||
|
@ -1090,6 +1189,15 @@ dependencies = [
|
|||
"wasm-bindgen",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "kstring"
|
||||
version = "2.0.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "ec3066350882a1cd6d950d055997f379ac37fd39f81cd4d8ed186032eb3c5747"
|
||||
dependencies = [
|
||||
"static_assertions",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "lazy_static"
|
||||
version = "1.5.0"
|
||||
|
@ -1322,7 +1430,7 @@ dependencies = [
|
|||
|
||||
[[package]]
|
||||
name = "nydus-api"
|
||||
version = "0.3.1"
|
||||
version = "0.4.0"
|
||||
dependencies = [
|
||||
"backtrace",
|
||||
"dbs-uhttp",
|
||||
|
@ -1333,7 +1441,7 @@ dependencies = [
|
|||
"mio 0.8.11",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
"toml",
|
||||
"url",
|
||||
"vmm-sys-util",
|
||||
|
@ -1341,10 +1449,11 @@ dependencies = [
|
|||
|
||||
[[package]]
|
||||
name = "nydus-builder"
|
||||
version = "0.1.0"
|
||||
version = "0.2.0"
|
||||
dependencies = [
|
||||
"anyhow",
|
||||
"base64",
|
||||
"gix-attributes",
|
||||
"hex",
|
||||
"indexmap",
|
||||
"libc",
|
||||
|
@ -1354,6 +1463,7 @@ dependencies = [
|
|||
"nydus-rafs",
|
||||
"nydus-storage",
|
||||
"nydus-utils",
|
||||
"parse-size",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"sha2",
|
||||
|
@ -1376,7 +1486,7 @@ dependencies = [
|
|||
|
||||
[[package]]
|
||||
name = "nydus-rafs"
|
||||
version = "0.3.2"
|
||||
version = "0.4.0"
|
||||
dependencies = [
|
||||
"anyhow",
|
||||
"arc-swap",
|
||||
|
@ -1392,7 +1502,7 @@ dependencies = [
|
|||
"nydus-utils",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
"vm-memory",
|
||||
"vmm-sys-util",
|
||||
]
|
||||
|
@ -1439,7 +1549,7 @@ dependencies = [
|
|||
|
||||
[[package]]
|
||||
name = "nydus-service"
|
||||
version = "0.3.0"
|
||||
version = "0.4.0"
|
||||
dependencies = [
|
||||
"bytes",
|
||||
"dbs-allocator",
|
||||
|
@ -1453,10 +1563,11 @@ dependencies = [
|
|||
"nydus-storage",
|
||||
"nydus-upgrade",
|
||||
"nydus-utils",
|
||||
"procfs",
|
||||
"rust-fsm",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
"time",
|
||||
"tokio",
|
||||
"tokio-uring",
|
||||
|
@ -1472,7 +1583,7 @@ dependencies = [
|
|||
|
||||
[[package]]
|
||||
name = "nydus-storage"
|
||||
version = "0.6.4"
|
||||
version = "0.7.0"
|
||||
dependencies = [
|
||||
"arc-swap",
|
||||
"base64",
|
||||
|
@ -1512,20 +1623,21 @@ dependencies = [
|
|||
|
||||
[[package]]
|
||||
name = "nydus-upgrade"
|
||||
version = "0.1.0"
|
||||
version = "0.2.0"
|
||||
dependencies = [
|
||||
"dbs-snapshot",
|
||||
"sendfd",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
"versionize",
|
||||
"versionize_derive",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "nydus-utils"
|
||||
version = "0.4.3"
|
||||
version = "0.5.0"
|
||||
dependencies = [
|
||||
"blake3",
|
||||
"crc",
|
||||
"flate2",
|
||||
"httpdate",
|
||||
"lazy_static",
|
||||
|
@ -1541,7 +1653,7 @@ dependencies = [
|
|||
"serde_json",
|
||||
"sha2",
|
||||
"tar",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
"tokio",
|
||||
"vmm-sys-util",
|
||||
"zstd",
|
||||
|
@ -1564,9 +1676,9 @@ checksum = "1261fe7e33c73b354eab43b1273a57c8f967d0391e80353e51f764ac02cf6775"
|
|||
|
||||
[[package]]
|
||||
name = "openssl"
|
||||
version = "0.10.70"
|
||||
version = "0.10.72"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "61cfb4e166a8bb8c9b55c500bc2308550148ece889be90f609377e58140f42c6"
|
||||
checksum = "fedfea7d58a1f73118430a55da6a286e7b044961736ce96a16a17068ea25e5da"
|
||||
dependencies = [
|
||||
"bitflags 2.8.0",
|
||||
"cfg-if",
|
||||
|
@ -1605,9 +1717,9 @@ dependencies = [
|
|||
|
||||
[[package]]
|
||||
name = "openssl-sys"
|
||||
version = "0.9.105"
|
||||
version = "0.9.107"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "8b22d5b84be05a8d6947c7cb71f7c849aa0f112acd4bf51c2a7c1c988ac0a9dc"
|
||||
checksum = "8288979acd84749c744a9014b4382d42b8f7b2592847b5afb2ed29e5d16ede07"
|
||||
dependencies = [
|
||||
"cc",
|
||||
"libc",
|
||||
|
@ -1639,6 +1751,12 @@ dependencies = [
|
|||
"windows-targets 0.52.6",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "parse-size"
|
||||
version = "1.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "487f2ccd1e17ce8c1bfab3a65c89525af41cfad4c8659021a1e9a2aacd73b89b"
|
||||
|
||||
[[package]]
|
||||
name = "percent-encoding"
|
||||
version = "2.3.1"
|
||||
|
@ -1707,6 +1825,31 @@ dependencies = [
|
|||
"unicode-ident",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "procfs"
|
||||
version = "0.17.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "cc5b72d8145275d844d4b5f6d4e1eef00c8cd889edb6035c21675d1bb1f45c9f"
|
||||
dependencies = [
|
||||
"bitflags 2.8.0",
|
||||
"chrono",
|
||||
"flate2",
|
||||
"hex",
|
||||
"procfs-core",
|
||||
"rustix",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "procfs-core"
|
||||
version = "0.17.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "239df02d8349b06fc07398a3a1697b06418223b1c7725085e801e7c0fc6a12ec"
|
||||
dependencies = [
|
||||
"bitflags 2.8.0",
|
||||
"chrono",
|
||||
"hex",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "quote"
|
||||
version = "1.0.38"
|
||||
|
@ -2113,6 +2256,12 @@ version = "1.2.0"
|
|||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
|
||||
|
||||
[[package]]
|
||||
name = "static_assertions"
|
||||
version = "1.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
|
||||
|
||||
[[package]]
|
||||
name = "strsim"
|
||||
version = "0.11.1"
|
||||
|
@ -2216,7 +2365,16 @@ version = "1.0.69"
|
|||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52"
|
||||
dependencies = [
|
||||
"thiserror-impl",
|
||||
"thiserror-impl 1.0.69",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "thiserror"
|
||||
version = "2.0.11"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "d452f284b73e6d76dd36758a0c8684b1d5be31f92b89d07fd5822175732206fc"
|
||||
dependencies = [
|
||||
"thiserror-impl 2.0.11",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
|
@ -2230,6 +2388,17 @@ dependencies = [
|
|||
"syn 2.0.96",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "thiserror-impl"
|
||||
version = "2.0.11"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "26afc1baea8a989337eeb52b6e72a039780ce45c3edfcc9c5b9d112feeb173c2"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
"syn 2.0.96",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "time"
|
||||
version = "0.3.37"
|
||||
|
@ -2272,10 +2441,25 @@ dependencies = [
|
|||
]
|
||||
|
||||
[[package]]
|
||||
name = "tokio"
|
||||
version = "1.43.0"
|
||||
name = "tinyvec"
|
||||
version = "1.8.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "3d61fa4ffa3de412bfea335c6ecff681de2b609ba3c77ef3e00e521813a9ed9e"
|
||||
checksum = "022db8904dfa342efe721985167e9fcd16c29b226db4397ed752a761cfce81e8"
|
||||
dependencies = [
|
||||
"tinyvec_macros",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "tinyvec_macros"
|
||||
version = "0.1.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
|
||||
|
||||
[[package]]
|
||||
name = "tokio"
|
||||
version = "1.44.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
|
||||
dependencies = [
|
||||
"backtrace",
|
||||
"bytes",
|
||||
|
@ -2393,12 +2577,27 @@ version = "1.17.0"
|
|||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
|
||||
|
||||
[[package]]
|
||||
name = "unicode-bom"
|
||||
version = "2.0.3"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "7eec5d1121208364f6793f7d2e222bf75a915c19557537745b195b253dd64217"
|
||||
|
||||
[[package]]
|
||||
name = "unicode-ident"
|
||||
version = "1.0.15"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "11cd88e12b17c6494200a9c1b683a04fcac9573ed74cd1b62aeb2727c5592243"
|
||||
|
||||
[[package]]
|
||||
name = "unicode-normalization"
|
||||
version = "0.1.24"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5033c97c4262335cded6d6fc3e5c18ab755e1a3dc96376350f3d8e9f009ad956"
|
||||
dependencies = [
|
||||
"tinyvec",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "url"
|
||||
version = "2.5.4"
|
||||
|
@ -2537,7 +2736,7 @@ checksum = "3c3aba5064cc5f6f7740cddc8dae34d2d9a311cac69b60d942af7f3ab8fc49f4"
|
|||
dependencies = [
|
||||
"arc-swap",
|
||||
"libc",
|
||||
"thiserror",
|
||||
"thiserror 1.0.69",
|
||||
"winapi",
|
||||
]
|
||||
|
||||
|
|
14
Cargo.toml
14
Cargo.toml
|
@ -53,21 +53,21 @@ tar = "0.4.40"
|
|||
tokio = { version = "1.35.1", features = ["macros"] }
|
||||
|
||||
# Build static linked openssl library
|
||||
openssl = { version = '0.10.70', features = ["vendored"] }
|
||||
openssl = { version = '0.10.72', features = ["vendored"] }
|
||||
|
||||
nydus-api = { version = "0.3.0", path = "api", features = [
|
||||
nydus-api = { version = "0.4.0", path = "api", features = [
|
||||
"error-backtrace",
|
||||
"handler",
|
||||
] }
|
||||
nydus-builder = { version = "0.1.0", path = "builder" }
|
||||
nydus-rafs = { version = "0.3.1", path = "rafs" }
|
||||
nydus-service = { version = "0.3.0", path = "service", features = [
|
||||
nydus-builder = { version = "0.2.0", path = "builder" }
|
||||
nydus-rafs = { version = "0.4.0", path = "rafs" }
|
||||
nydus-service = { version = "0.4.0", path = "service", features = [
|
||||
"block-device",
|
||||
] }
|
||||
nydus-storage = { version = "0.6.3", path = "storage", features = [
|
||||
nydus-storage = { version = "0.7.0", path = "storage", features = [
|
||||
"prefetch-rate-limit",
|
||||
] }
|
||||
nydus-utils = { version = "0.4.2", path = "utils" }
|
||||
nydus-utils = { version = "0.5.0", path = "utils" }
|
||||
|
||||
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
|
||||
vhost-user-backend = { version = "0.15.0", optional = true }
|
||||
|
|
|
@ -0,0 +1,15 @@
|
|||
# Maintainers
|
||||
|
||||
<!-- markdownlint-disable -->
|
||||
|
||||
| GitHub ID | Name | Email | Company |
|
||||
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
|
||||
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
|
||||
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
|
||||
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
|
||||
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
|
||||
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
|
||||
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
|
||||
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
|
||||
|
||||
<!-- markdownlint-restore -->
|
8
Makefile
8
Makefile
|
@ -108,7 +108,11 @@ ut: .release_version
|
|||
|
||||
# you need install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
|
||||
ut-nextest: .release_version
|
||||
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --test-threads 8
|
||||
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
|
||||
|
||||
# install miri first from https://github.com/rust-lang/miri/
|
||||
miri-ut-nextest: .release_version
|
||||
MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
|
||||
|
||||
# install test dependencies
|
||||
pre-coverage:
|
||||
|
@ -121,7 +125,7 @@ coverage: pre-coverage
|
|||
|
||||
# write unit teset coverage to codecov.json, used for Github CI
|
||||
coverage-codecov:
|
||||
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
|
||||
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
|
||||
|
||||
smoke-only:
|
||||
make -C smoke test
|
||||
|
|
|
@ -12,6 +12,7 @@
|
|||
[](https://crates.io/crates/nydus-rs)
|
||||
[](https://twitter.com/dragonfly_oss)
|
||||
[](https://github.com/dragonflyoss/nydus)
|
||||
[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
|
||||
|
||||
[](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
|
||||
[](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
|
||||
|
@ -154,6 +155,8 @@ Using the key features of nydus as native in your project without preparing and
|
|||
|
||||
Please visit [**Wiki**](https://github.com/dragonflyoss/nydus/wiki), or [**docs**](./docs)
|
||||
|
||||
There is also a very nice [Devin](https://devin.ai/) generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
|
||||
|
||||
## Community
|
||||
|
||||
Nydus aims to form a **vendor-neutral opensource** image distribution solution to all communities.
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
[package]
|
||||
name = "nydus-api"
|
||||
version = "0.3.1"
|
||||
version = "0.4.0"
|
||||
description = "APIs for Nydus Image Service"
|
||||
authors = ["The Nydus Developers"]
|
||||
license = "Apache-2.0 OR BSD-3-Clause"
|
||||
|
|
|
@ -25,6 +25,9 @@ pub struct ConfigV2 {
|
|||
pub id: String,
|
||||
/// Configuration information for storage backend.
|
||||
pub backend: Option<BackendConfigV2>,
|
||||
/// Configuration for external storage backends, order insensitivity.
|
||||
#[serde(default)]
|
||||
pub external_backends: Vec<ExternalBackendConfig>,
|
||||
/// Configuration information for local cache system.
|
||||
pub cache: Option<CacheConfigV2>,
|
||||
/// Configuration information for RAFS filesystem.
|
||||
|
@ -42,6 +45,7 @@ impl Default for ConfigV2 {
|
|||
version: 2,
|
||||
id: String::new(),
|
||||
backend: None,
|
||||
external_backends: Vec::new(),
|
||||
cache: None,
|
||||
rafs: None,
|
||||
overlay: None,
|
||||
|
@ -57,6 +61,7 @@ impl ConfigV2 {
|
|||
version: 2,
|
||||
id: id.to_string(),
|
||||
backend: None,
|
||||
external_backends: Vec::new(),
|
||||
cache: None,
|
||||
rafs: None,
|
||||
overlay: None,
|
||||
|
@ -514,9 +519,6 @@ pub struct OssConfig {
|
|||
/// Enable HTTP proxy for the read request.
|
||||
#[serde(default)]
|
||||
pub proxy: ProxyConfig,
|
||||
/// Enable mirrors for the read request.
|
||||
#[serde(default)]
|
||||
pub mirrors: Vec<MirrorConfig>,
|
||||
}
|
||||
|
||||
/// S3 configuration information to access blobs.
|
||||
|
@ -558,9 +560,6 @@ pub struct S3Config {
|
|||
/// Enable HTTP proxy for the read request.
|
||||
#[serde(default)]
|
||||
pub proxy: ProxyConfig,
|
||||
/// Enable mirrors for the read request.
|
||||
#[serde(default)]
|
||||
pub mirrors: Vec<MirrorConfig>,
|
||||
}
|
||||
|
||||
/// Http proxy configuration information to access blobs.
|
||||
|
@ -587,9 +586,6 @@ pub struct HttpProxyConfig {
|
|||
/// Enable HTTP proxy for the read request.
|
||||
#[serde(default)]
|
||||
pub proxy: ProxyConfig,
|
||||
/// Enable mirrors for the read request.
|
||||
#[serde(default)]
|
||||
pub mirrors: Vec<MirrorConfig>,
|
||||
}
|
||||
|
||||
/// Container registry configuration information to access blobs.
|
||||
|
@ -630,9 +626,6 @@ pub struct RegistryConfig {
|
|||
/// Enable HTTP proxy for the read request.
|
||||
#[serde(default)]
|
||||
pub proxy: ProxyConfig,
|
||||
/// Enable mirrors for the read request.
|
||||
#[serde(default)]
|
||||
pub mirrors: Vec<MirrorConfig>,
|
||||
}
|
||||
|
||||
/// Configuration information for blob cache manager.
|
||||
|
@ -925,41 +918,6 @@ impl Default for ProxyConfig {
|
|||
}
|
||||
}
|
||||
|
||||
/// Configuration for registry mirror.
|
||||
#[derive(Clone, Debug, Deserialize, Eq, PartialEq, Serialize)]
|
||||
pub struct MirrorConfig {
|
||||
/// Mirror server URL, for example http://127.0.0.1:65001.
|
||||
pub host: String,
|
||||
/// Ping URL to check mirror server health.
|
||||
#[serde(default)]
|
||||
pub ping_url: String,
|
||||
/// HTTP request headers to be passed to mirror server.
|
||||
#[serde(default)]
|
||||
pub headers: HashMap<String, String>,
|
||||
/// Interval for mirror health checking, in seconds.
|
||||
#[serde(default = "default_check_interval")]
|
||||
pub health_check_interval: u64,
|
||||
/// Maximum number of failures before marking a mirror as unusable.
|
||||
#[serde(default = "default_failure_limit")]
|
||||
pub failure_limit: u8,
|
||||
/// Elapsed time to pause mirror health check when the request is inactive, in seconds.
|
||||
#[serde(default = "default_check_pause_elapsed")]
|
||||
pub health_check_pause_elapsed: u64,
|
||||
}
|
||||
|
||||
impl Default for MirrorConfig {
|
||||
fn default() -> Self {
|
||||
Self {
|
||||
host: String::new(),
|
||||
headers: HashMap::new(),
|
||||
health_check_interval: 5,
|
||||
failure_limit: 5,
|
||||
ping_url: String::new(),
|
||||
health_check_pause_elapsed: 300,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Configuration information for a cached blob`.
|
||||
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
|
||||
pub struct BlobCacheEntryConfigV2 {
|
||||
|
@ -971,6 +929,9 @@ pub struct BlobCacheEntryConfigV2 {
|
|||
/// Configuration information for storage backend.
|
||||
#[serde(default)]
|
||||
pub backend: BackendConfigV2,
|
||||
/// Configuration for external storage backends, order insensitivity.
|
||||
#[serde(default)]
|
||||
pub external_backends: Vec<ExternalBackendConfig>,
|
||||
/// Configuration information for local cache system.
|
||||
#[serde(default)]
|
||||
pub cache: CacheConfigV2,
|
||||
|
@ -1034,6 +995,7 @@ impl From<&BlobCacheEntryConfigV2> for ConfigV2 {
|
|||
version: c.version,
|
||||
id: c.id.clone(),
|
||||
backend: Some(c.backend.clone()),
|
||||
external_backends: c.external_backends.clone(),
|
||||
cache: Some(c.cache.clone()),
|
||||
rafs: None,
|
||||
overlay: None,
|
||||
|
@ -1203,10 +1165,6 @@ fn default_check_pause_elapsed() -> u64 {
|
|||
300
|
||||
}
|
||||
|
||||
fn default_failure_limit() -> u8 {
|
||||
5
|
||||
}
|
||||
|
||||
fn default_work_dir() -> String {
|
||||
".".to_string()
|
||||
}
|
||||
|
@ -1302,13 +1260,26 @@ struct CacheConfig {
|
|||
#[serde(default, rename = "config")]
|
||||
pub cache_config: Value,
|
||||
/// Whether to validate data read from the cache.
|
||||
#[serde(skip_serializing, skip_deserializing)]
|
||||
#[serde(default, rename = "validate")]
|
||||
pub cache_validate: bool,
|
||||
/// Configuration for blob data prefetching.
|
||||
#[serde(skip_serializing, skip_deserializing)]
|
||||
pub prefetch_config: BlobPrefetchConfig,
|
||||
}
|
||||
|
||||
/// Additional configuration information for external backend, its items
|
||||
/// will be merged to the configuration from image.
|
||||
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
|
||||
pub struct ExternalBackendConfig {
|
||||
/// External backend identifier to merge.
|
||||
pub patch: HashMap<String, String>,
|
||||
/// External backend type.
|
||||
#[serde(rename = "type")]
|
||||
pub kind: String,
|
||||
/// External backend config items to merge.
|
||||
pub config: HashMap<String, String>,
|
||||
}
|
||||
|
||||
impl TryFrom<&CacheConfig> for CacheConfigV2 {
|
||||
type Error = std::io::Error;
|
||||
|
||||
|
@ -1350,6 +1321,9 @@ struct FactoryConfig {
|
|||
pub id: String,
|
||||
/// Configuration for storage backend.
|
||||
pub backend: BackendConfig,
|
||||
/// Configuration for external storage backends, order insensitivity.
|
||||
#[serde(default)]
|
||||
pub external_backends: Vec<ExternalBackendConfig>,
|
||||
/// Configuration for blob cache manager.
|
||||
#[serde(default)]
|
||||
pub cache: CacheConfig,
|
||||
|
@ -1410,6 +1384,7 @@ impl TryFrom<RafsConfig> for ConfigV2 {
|
|||
version: 2,
|
||||
id: v.device.id,
|
||||
backend: Some(backend),
|
||||
external_backends: v.device.external_backends,
|
||||
cache: Some(cache),
|
||||
rafs: Some(rafs),
|
||||
overlay: None,
|
||||
|
@ -1500,6 +1475,9 @@ pub(crate) struct BlobCacheEntryConfig {
|
|||
///
|
||||
/// Possible value: `LocalFsConfig`, `RegistryConfig`, `OssConfig`, `LocalDiskConfig`.
|
||||
backend_config: Value,
|
||||
/// Configuration for external storage backends, order insensitivity.
|
||||
#[serde(default)]
|
||||
external_backends: Vec<ExternalBackendConfig>,
|
||||
/// Type of blob cache, corresponding to `FactoryConfig::CacheConfig::cache_type`.
|
||||
///
|
||||
/// Possible value: "fscache", "filecache".
|
||||
|
@ -1535,6 +1513,7 @@ impl TryFrom<&BlobCacheEntryConfig> for BlobCacheEntryConfigV2 {
|
|||
version: 2,
|
||||
id: v.id.clone(),
|
||||
backend: (&backend_config).try_into()?,
|
||||
external_backends: v.external_backends.clone(),
|
||||
cache: (&cache_config).try_into()?,
|
||||
metadata_path: v.metadata_path.clone(),
|
||||
})
|
||||
|
@ -1856,11 +1835,6 @@ mod tests {
|
|||
fallback = true
|
||||
check_interval = 10
|
||||
use_http = true
|
||||
[[backend.oss.mirrors]]
|
||||
host = "http://127.0.0.1:65001"
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
health_check_interval = 10
|
||||
failure_limit = 10
|
||||
"#;
|
||||
let config: ConfigV2 = toml::from_str(content).unwrap();
|
||||
assert_eq!(config.version, 2);
|
||||
|
@ -1887,14 +1861,6 @@ mod tests {
|
|||
assert_eq!(oss.proxy.check_interval, 10);
|
||||
assert!(oss.proxy.fallback);
|
||||
assert!(oss.proxy.use_http);
|
||||
|
||||
assert_eq!(oss.mirrors.len(), 1);
|
||||
let mirror = &oss.mirrors[0];
|
||||
assert_eq!(mirror.host, "http://127.0.0.1:65001");
|
||||
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
|
||||
assert!(mirror.headers.is_empty());
|
||||
assert_eq!(mirror.health_check_interval, 10);
|
||||
assert_eq!(mirror.failure_limit, 10);
|
||||
}
|
||||
|
||||
#[test]
|
||||
|
@ -1920,11 +1886,6 @@ mod tests {
|
|||
fallback = true
|
||||
check_interval = 10
|
||||
use_http = true
|
||||
[[backend.registry.mirrors]]
|
||||
host = "http://127.0.0.1:65001"
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
health_check_interval = 10
|
||||
failure_limit = 10
|
||||
"#;
|
||||
let config: ConfigV2 = toml::from_str(content).unwrap();
|
||||
assert_eq!(config.version, 2);
|
||||
|
@ -1953,14 +1914,6 @@ mod tests {
|
|||
assert_eq!(registry.proxy.check_interval, 10);
|
||||
assert!(registry.proxy.fallback);
|
||||
assert!(registry.proxy.use_http);
|
||||
|
||||
assert_eq!(registry.mirrors.len(), 1);
|
||||
let mirror = ®istry.mirrors[0];
|
||||
assert_eq!(mirror.host, "http://127.0.0.1:65001");
|
||||
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
|
||||
assert!(mirror.headers.is_empty());
|
||||
assert_eq!(mirror.health_check_interval, 10);
|
||||
assert_eq!(mirror.failure_limit, 10);
|
||||
}
|
||||
|
||||
#[test]
|
||||
|
@ -2367,15 +2320,6 @@ mod tests {
|
|||
assert!(res);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_default_mirror_config() {
|
||||
let cfg = MirrorConfig::default();
|
||||
assert_eq!(cfg.host, "");
|
||||
assert_eq!(cfg.health_check_interval, 5);
|
||||
assert_eq!(cfg.failure_limit, 5);
|
||||
assert_eq!(cfg.ping_url, "");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_config_v2_from_file() {
|
||||
let content = r#"version=2
|
||||
|
@ -2585,7 +2529,6 @@ mod tests {
|
|||
#[test]
|
||||
fn test_default_value() {
|
||||
assert!(default_true());
|
||||
assert_eq!(default_failure_limit(), 5);
|
||||
assert_eq!(default_prefetch_batch_size(), 1024 * 1024);
|
||||
assert_eq!(default_prefetch_threads_count(), 8);
|
||||
}
|
||||
|
|
148
api/src/error.rs
148
api/src/error.rs
|
@ -86,6 +86,8 @@ define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
|
|||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use std::io::{Error, ErrorKind};
|
||||
|
||||
fn check_size(size: usize) -> std::io::Result<()> {
|
||||
if size > 0x1000 {
|
||||
return Err(einval!());
|
||||
|
@ -101,4 +103,150 @@ mod tests {
|
|||
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_make_error() {
|
||||
let original_error = Error::new(ErrorKind::Other, "test error");
|
||||
let debug_info = "debug information";
|
||||
let file = "test.rs";
|
||||
let line = 42;
|
||||
|
||||
let result_error = super::make_error(original_error, debug_info, file, line);
|
||||
assert_eq!(result_error.kind(), ErrorKind::Other);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_libc_error_macros() {
|
||||
// Test einval macro
|
||||
let err = einval!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
|
||||
|
||||
// Test enoent macro
|
||||
let err = enoent!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
|
||||
|
||||
// Test ebadf macro
|
||||
let err = ebadf!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
|
||||
|
||||
// Test eacces macro
|
||||
let err = eacces!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
|
||||
|
||||
// Test enotdir macro
|
||||
let err = enotdir!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
|
||||
|
||||
// Test eisdir macro
|
||||
let err = eisdir!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
|
||||
|
||||
// Test ealready macro
|
||||
let err = ealready!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
|
||||
|
||||
// Test enosys macro
|
||||
let err = enosys!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
|
||||
|
||||
// Test epipe macro
|
||||
let err = epipe!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
|
||||
|
||||
// Test eio macro
|
||||
let err = eio!();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_libc_error_macros_with_context() {
|
||||
let test_msg = "test context";
|
||||
|
||||
// Test einval macro with context
|
||||
let err = einval!(test_msg);
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
|
||||
|
||||
// Test enoent macro with context
|
||||
let err = enoent!(test_msg);
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
|
||||
|
||||
// Test eio macro with context
|
||||
let err = eio!(test_msg);
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_custom_error_macros() {
|
||||
// Test last_error macro
|
||||
let err = last_error!();
|
||||
// We can't predict the exact error, but we can check it's a valid error
|
||||
assert!(!err.to_string().is_empty());
|
||||
|
||||
// Test eother macro
|
||||
let err = eother!();
|
||||
assert_eq!(err.kind(), ErrorKind::Other);
|
||||
|
||||
// Test eother macro with context
|
||||
let err = eother!("custom context");
|
||||
assert_eq!(err.kind(), ErrorKind::Other);
|
||||
}
|
||||
|
||||
fn test_bail_einval_function() -> std::io::Result<()> {
|
||||
bail_einval!("test error message");
|
||||
}
|
||||
|
||||
fn test_bail_eio_function() -> std::io::Result<()> {
|
||||
bail_eio!("test error message");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_bail_macros() {
|
||||
// Test bail_einval macro
|
||||
let result = test_bail_einval_function();
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
|
||||
// The error message format is controlled by the macro, so just check it's not empty
|
||||
assert!(!err.to_string().is_empty());
|
||||
|
||||
// Test bail_eio macro
|
||||
let result = test_bail_eio_function();
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
|
||||
// The error message format is controlled by the macro, so just check it's not empty
|
||||
assert!(!err.to_string().is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_bail_macros_with_formatting() {
|
||||
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
|
||||
if code == 1 {
|
||||
bail_einval!("error code: {}", code);
|
||||
} else if code == 2 {
|
||||
bail_eio!("I/O error with code: {}", code);
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// Test bail_einval with formatting
|
||||
let result = test_bail_with_format(1);
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
|
||||
// The error message format is controlled by the macro, so just check it's not empty
|
||||
assert!(!err.to_string().is_empty());
|
||||
|
||||
// Test bail_eio with formatting
|
||||
let result = test_bail_with_format(2);
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
|
||||
// The error message format is controlled by the macro, so just check it's not empty
|
||||
assert!(!err.to_string().is_empty());
|
||||
|
||||
// Test success case
|
||||
let result = test_bail_with_format(3);
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
}
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
[package]
|
||||
name = "nydus-builder"
|
||||
version = "0.1.0"
|
||||
version = "0.2.0"
|
||||
description = "Nydus Image Builder"
|
||||
authors = ["The Nydus Developers"]
|
||||
license = "Apache-2.0"
|
||||
|
@ -22,11 +22,13 @@ sha2 = "0.10.2"
|
|||
tar = "0.4.40"
|
||||
vmm-sys-util = "0.12.1"
|
||||
xattr = "1.0.1"
|
||||
parse-size = "1.1.0"
|
||||
|
||||
nydus-api = { version = "0.3", path = "../api" }
|
||||
nydus-rafs = { version = "0.3", path = "../rafs" }
|
||||
nydus-storage = { version = "0.6", path = "../storage", features = ["backend-localfs"] }
|
||||
nydus-utils = { version = "0.4", path = "../utils" }
|
||||
nydus-api = { version = "0.4.0", path = "../api" }
|
||||
nydus-rafs = { version = "0.4.0", path = "../rafs" }
|
||||
nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
|
||||
nydus-utils = { version = "0.5.0", path = "../utils" }
|
||||
gix-attributes = "0.25.0"
|
||||
|
||||
[package.metadata.docs.rs]
|
||||
all-features = true
|
||||
|
|
|
@ -0,0 +1,189 @@
|
|||
// Copyright 2024 Nydus Developers. All rights reserved.
|
||||
//
|
||||
// SPDX-License-Identifier: Apache-2.0
|
||||
|
||||
use std::collections::HashMap;
|
||||
use std::path::{Path, PathBuf};
|
||||
use std::{fs, path};
|
||||
|
||||
use anyhow::Result;
|
||||
use gix_attributes::parse;
|
||||
use gix_attributes::parse::Kind;
|
||||
|
||||
const KEY_TYPE: &str = "type";
|
||||
const KEY_CRCS: &str = "crcs";
|
||||
const VAL_EXTERNAL: &str = "external";
|
||||
|
||||
pub struct Parser {}
|
||||
|
||||
#[derive(Clone, Debug, Eq, PartialEq, Default)]
|
||||
pub struct Item {
|
||||
pub pattern: PathBuf,
|
||||
pub attributes: HashMap<String, String>,
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, Eq, PartialEq, Default)]
|
||||
pub struct Attributes {
|
||||
pub items: HashMap<PathBuf, HashMap<String, String>>,
|
||||
pub crcs: HashMap<PathBuf, Vec<u32>>,
|
||||
}
|
||||
|
||||
impl Attributes {
|
||||
/// Parse nydus attributes from a file.
|
||||
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
|
||||
let content = fs::read(path)?;
|
||||
let _items = parse(&content);
|
||||
|
||||
let mut items = HashMap::new();
|
||||
let mut crcs = HashMap::new();
|
||||
for _item in _items {
|
||||
let _item = _item?;
|
||||
if let Kind::Pattern(pattern) = _item.0 {
|
||||
let mut path = PathBuf::from(pattern.text.to_string());
|
||||
if !path.is_absolute() {
|
||||
path = path::Path::new("/").join(path);
|
||||
}
|
||||
let mut current_path = path.clone();
|
||||
let mut attributes = HashMap::new();
|
||||
let mut _type = String::new();
|
||||
let mut _crcs = vec![];
|
||||
for line in _item.1 {
|
||||
let line = line?;
|
||||
let name = line.name.as_str();
|
||||
let state = line.state.as_bstr().unwrap_or_default();
|
||||
if name == KEY_TYPE {
|
||||
_type = state.to_string();
|
||||
}
|
||||
if name == KEY_CRCS {
|
||||
_crcs = state
|
||||
.to_string()
|
||||
.split(',')
|
||||
.map(|s| {
|
||||
let trimmed = s.trim();
|
||||
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
|
||||
stripped
|
||||
} else {
|
||||
trimmed
|
||||
};
|
||||
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
|
||||
})
|
||||
.collect::<Result<Vec<u32>, _>>()?;
|
||||
}
|
||||
attributes.insert(name.to_string(), state.to_string());
|
||||
}
|
||||
crcs.insert(path.clone(), _crcs);
|
||||
items.insert(path, attributes);
|
||||
|
||||
// process parent directory
|
||||
while let Some(parent) = current_path.parent() {
|
||||
if parent == Path::new("/") {
|
||||
break;
|
||||
}
|
||||
let mut attributes = HashMap::new();
|
||||
if !items.contains_key(parent) {
|
||||
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
|
||||
items.insert(parent.to_path_buf(), attributes);
|
||||
}
|
||||
current_path = parent.to_path_buf();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(Attributes { items, crcs })
|
||||
}
|
||||
|
||||
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
|
||||
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
|
||||
}
|
||||
|
||||
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
|
||||
if let Some(attributes) = self.items.get(path.as_ref()) {
|
||||
return self.check_external(attributes);
|
||||
}
|
||||
false
|
||||
}
|
||||
|
||||
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
|
||||
self.items
|
||||
.iter()
|
||||
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
|
||||
}
|
||||
|
||||
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
|
||||
if let Some(attributes) = self.items.get(path.as_ref()) {
|
||||
return attributes.get(key.as_ref()).map(|s| s.to_string());
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
|
||||
self.items.get(path.as_ref())
|
||||
}
|
||||
|
||||
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
|
||||
self.crcs.get(path.as_ref())
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use std::{collections::HashMap, fs, path::PathBuf};
|
||||
|
||||
use super::{Attributes, Item};
|
||||
use vmm_sys_util::tempfile::TempFile;
|
||||
|
||||
#[test]
|
||||
fn test_attribute_parse() {
|
||||
let file = TempFile::new().unwrap();
|
||||
fs::write(
|
||||
file.as_path(),
|
||||
"/foo type=external crcs=0x1234,0x5678
|
||||
/bar type=external crcs=0x1234,0x5678
|
||||
/models/foo/bar type=external",
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let attributes = Attributes::from(file.as_path()).unwrap();
|
||||
let _attributes_base: HashMap<String, String> =
|
||||
[("type".to_string(), "external".to_string())]
|
||||
.iter()
|
||||
.cloned()
|
||||
.collect();
|
||||
let _attributes: HashMap<String, String> = [
|
||||
("type".to_string(), "external".to_string()),
|
||||
("crcs".to_string(), "0x1234,0x5678".to_string()),
|
||||
]
|
||||
.iter()
|
||||
.cloned()
|
||||
.collect();
|
||||
|
||||
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
|
||||
Item {
|
||||
pattern: PathBuf::from("/foo"),
|
||||
attributes: _attributes.clone(),
|
||||
},
|
||||
Item {
|
||||
pattern: PathBuf::from("/bar"),
|
||||
attributes: _attributes.clone(),
|
||||
},
|
||||
Item {
|
||||
pattern: PathBuf::from("/models"),
|
||||
attributes: _attributes_base.clone(),
|
||||
},
|
||||
Item {
|
||||
pattern: PathBuf::from("/models/foo"),
|
||||
attributes: _attributes_base.clone(),
|
||||
},
|
||||
Item {
|
||||
pattern: PathBuf::from("/models/foo/bar"),
|
||||
attributes: _attributes_base.clone(),
|
||||
},
|
||||
]
|
||||
.into_iter()
|
||||
.map(|item| (item.pattern, item.attributes))
|
||||
.collect();
|
||||
|
||||
assert_eq!(attributes.items, items_map);
|
||||
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
|
||||
}
|
||||
}
|
|
@ -38,6 +38,7 @@ pub struct ChunkdictChunkInfo {
|
|||
pub version: String,
|
||||
pub chunk_blob_id: String,
|
||||
pub chunk_digest: String,
|
||||
pub chunk_crc32: u32,
|
||||
pub chunk_compressed_size: u32,
|
||||
pub chunk_uncompressed_size: u32,
|
||||
pub chunk_compressed_offset: u64,
|
||||
|
@ -88,7 +89,7 @@ impl Generator {
|
|||
let storage = &mut bootstrap_mgr.bootstrap_storage;
|
||||
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
|
||||
|
||||
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
|
||||
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
|
||||
}
|
||||
|
||||
/// Validate tree.
|
||||
|
@ -270,6 +271,7 @@ impl Generator {
|
|||
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
|
||||
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
|
||||
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
|
||||
chunk.set_crc32(chunk_info.chunk_crc32);
|
||||
|
||||
node.chunks.push(NodeChunk {
|
||||
source: ChunkSource::Build,
|
||||
|
|
|
@ -21,6 +21,7 @@ use nydus_utils::{digest, try_round_up_4k};
|
|||
use serde::{Deserialize, Serialize};
|
||||
use sha2::Digest;
|
||||
|
||||
use crate::attributes::Attributes;
|
||||
use crate::core::context::Artifact;
|
||||
|
||||
use super::core::blob::Blob;
|
||||
|
@ -293,7 +294,7 @@ impl BlobCompactor {
|
|||
version,
|
||||
states: vec![Default::default(); ori_blobs_number],
|
||||
ori_blob_mgr,
|
||||
new_blob_mgr: BlobManager::new(digester),
|
||||
new_blob_mgr: BlobManager::new(digester, false),
|
||||
c2nodes: HashMap::new(),
|
||||
b2nodes: HashMap::new(),
|
||||
backend,
|
||||
|
@ -555,7 +556,8 @@ impl BlobCompactor {
|
|||
info!("compactor: delete compacted blob {}", ori_blob_ids[idx]);
|
||||
}
|
||||
State::Rebuild(cs) => {
|
||||
let blob_storage = ArtifactStorage::FileDir(PathBuf::from(dir));
|
||||
let blob_storage =
|
||||
ArtifactStorage::FileDir((PathBuf::from(dir), String::new()));
|
||||
let mut blob_ctx = BlobContext::new(
|
||||
String::from(""),
|
||||
0,
|
||||
|
@ -565,6 +567,7 @@ impl BlobCompactor {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
blob_ctx.set_meta_info_enabled(self.is_v6());
|
||||
let blob_idx = self.new_blob_mgr.alloc_index()?;
|
||||
|
@ -617,14 +620,16 @@ impl BlobCompactor {
|
|||
PathBuf::from(""),
|
||||
Default::default(),
|
||||
None,
|
||||
None,
|
||||
false,
|
||||
Features::new(),
|
||||
false,
|
||||
Attributes::default(),
|
||||
);
|
||||
let mut bootstrap_mgr =
|
||||
BootstrapManager::new(Some(ArtifactStorage::SingleFile(d_bootstrap)), None);
|
||||
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
|
||||
let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester());
|
||||
let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester(), false);
|
||||
ori_blob_mgr.extend_from_blob_table(&build_ctx, rs.superblock.get_blob_infos())?;
|
||||
if let Some(dict) = chunk_dict {
|
||||
ori_blob_mgr.set_chunk_dict(dict);
|
||||
|
@ -663,7 +668,9 @@ impl BlobCompactor {
|
|||
|
||||
Ok(Some(BuildOutput::new(
|
||||
&compactor.new_blob_mgr,
|
||||
None,
|
||||
&bootstrap_mgr.bootstrap_storage,
|
||||
&None,
|
||||
)?))
|
||||
}
|
||||
}
|
||||
|
@ -709,8 +716,7 @@ mod tests {
|
|||
pub uncompress_offset: u64,
|
||||
pub file_offset: u64,
|
||||
pub index: u32,
|
||||
#[allow(unused)]
|
||||
pub reserved: u32,
|
||||
pub crc32: u32,
|
||||
}
|
||||
|
||||
impl BlobChunkInfo for MockChunkInfo {
|
||||
|
@ -732,6 +738,18 @@ mod tests {
|
|||
false
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
self.flags.contains(BlobChunkFlags::HAS_CRC32)
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
if self.has_crc32() {
|
||||
self.crc32
|
||||
} else {
|
||||
0
|
||||
}
|
||||
}
|
||||
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
@ -815,7 +833,7 @@ mod tests {
|
|||
uncompress_offset: 0x1000,
|
||||
file_offset: 0x1000,
|
||||
index: 1,
|
||||
reserved: 0,
|
||||
crc32: 0,
|
||||
}) as Arc<dyn BlobChunkInfo>;
|
||||
let cw = ChunkWrapper::Ref(chunk);
|
||||
ChunkKey::from(&cw);
|
||||
|
@ -864,6 +882,7 @@ mod tests {
|
|||
crypt::Algorithm::Aes256Xts,
|
||||
Arc::new(cipher_object),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
let ori_blob_ids = ["1".to_owned(), "2".to_owned()];
|
||||
let backend = Arc::new(MockBackend {
|
||||
|
@ -976,7 +995,7 @@ mod tests {
|
|||
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
|
||||
.unwrap();
|
||||
|
||||
let mut ori_blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
|
||||
let mut ori_blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
|
||||
ori_blob_mgr.set_chunk_dict(dict);
|
||||
|
||||
let backend = Arc::new(MockBackend {
|
||||
|
@ -991,6 +1010,7 @@ mod tests {
|
|||
tmpfile.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)?;
|
||||
|
@ -1080,6 +1100,7 @@ mod tests {
|
|||
tmpfile.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)?;
|
||||
|
@ -1091,6 +1112,7 @@ mod tests {
|
|||
tmpfile2.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)?;
|
||||
|
@ -1138,9 +1160,11 @@ mod tests {
|
|||
PathBuf::from(tmp_dir.as_path()),
|
||||
Default::default(),
|
||||
None,
|
||||
None,
|
||||
false,
|
||||
Features::new(),
|
||||
false,
|
||||
Attributes::default(),
|
||||
);
|
||||
|
||||
let mut compactor = blob_compactor_load_and_dedup_chunks().unwrap();
|
||||
|
@ -1154,6 +1178,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
let blob_ctx2 = BlobContext::new(
|
||||
"blob_id2".to_owned(),
|
||||
|
@ -1164,6 +1189,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
let blob_ctx3 = BlobContext::new(
|
||||
"blob_id3".to_owned(),
|
||||
|
@ -1174,6 +1200,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
let blob_ctx4 = BlobContext::new(
|
||||
"blob_id4".to_owned(),
|
||||
|
@ -1184,6 +1211,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
let blob_ctx5 = BlobContext::new(
|
||||
"blob_id5".to_owned(),
|
||||
|
@ -1194,6 +1222,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
compactor.ori_blob_mgr.add_blob(blob_ctx1);
|
||||
compactor.ori_blob_mgr.add_blob(blob_ctx2);
|
||||
|
@ -1235,9 +1264,11 @@ mod tests {
|
|||
PathBuf::from(tmp_dir.as_path()),
|
||||
Default::default(),
|
||||
None,
|
||||
None,
|
||||
false,
|
||||
Features::new(),
|
||||
false,
|
||||
Attributes::default(),
|
||||
);
|
||||
let mut blob_ctx1 = BlobContext::new(
|
||||
"blob_id1".to_owned(),
|
||||
|
@ -1248,6 +1279,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
blob_ctx1.compressed_blob_size = 2;
|
||||
let mut blob_ctx2 = BlobContext::new(
|
||||
|
@ -1259,6 +1291,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
blob_ctx2.compressed_blob_size = 0;
|
||||
let blob_ctx3 = BlobContext::new(
|
||||
|
@ -1270,6 +1303,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
let blob_ctx4 = BlobContext::new(
|
||||
"blob_id4".to_owned(),
|
||||
|
@ -1280,6 +1314,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
let blob_ctx5 = BlobContext::new(
|
||||
"blob_id5".to_owned(),
|
||||
|
@ -1290,6 +1325,7 @@ mod tests {
|
|||
build_ctx.cipher,
|
||||
Default::default(),
|
||||
None,
|
||||
false,
|
||||
);
|
||||
compactor.ori_blob_mgr.add_blob(blob_ctx1);
|
||||
compactor.ori_blob_mgr.add_blob(blob_ctx2);
|
||||
|
|
|
@ -5,7 +5,7 @@
|
|||
use std::borrow::Cow;
|
||||
use std::slice;
|
||||
|
||||
use anyhow::{Context, Result};
|
||||
use anyhow::{bail, Context, Result};
|
||||
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
|
||||
use nydus_storage::device::BlobFeatures;
|
||||
use nydus_storage::meta::{toc, BlobMetaChunkArray};
|
||||
|
@ -18,6 +18,8 @@ use super::node::Node;
|
|||
use crate::core::context::Artifact;
|
||||
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
|
||||
|
||||
const VALID_BLOB_ID_LENGTH: usize = 64;
|
||||
|
||||
/// Generator for RAFS data blob.
|
||||
pub(crate) struct Blob {}
|
||||
|
||||
|
@ -120,6 +122,9 @@ impl Blob {
|
|||
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
|
||||
{
|
||||
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
|
||||
if blob_ctx.external {
|
||||
return Ok(());
|
||||
}
|
||||
blob_ctx.write_tar_header(
|
||||
blob_writer,
|
||||
toc::TOC_ENTRY_BLOB_RAW,
|
||||
|
@ -141,6 +146,20 @@ impl Blob {
|
|||
}
|
||||
}
|
||||
|
||||
// check blobs to make sure all blobs are valid.
|
||||
if blob_mgr.external {
|
||||
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
|
||||
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
|
||||
bail!(
|
||||
"invalid blob id:{}, length:{}, index:{}",
|
||||
blob_ctx.blob_id,
|
||||
blob_ctx.blob_id.len(),
|
||||
index
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
|
|
|
@ -75,7 +75,9 @@ impl Bootstrap {
|
|||
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
|
||||
let name = digest.to_string();
|
||||
bootstrap_ctx.writer.finalize(Some(name.clone()))?;
|
||||
*bootstrap_storage = Some(ArtifactStorage::SingleFile(p.join(name)));
|
||||
let mut path = p.0.join(name);
|
||||
path.set_extension(&p.1);
|
||||
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
|
||||
Ok(())
|
||||
} else {
|
||||
bootstrap_ctx.writer.finalize(Some(String::default()))
|
||||
|
|
|
@ -19,7 +19,7 @@ use nydus_utils::digest::{self, RafsDigest};
|
|||
use crate::Tree;
|
||||
|
||||
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
|
||||
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32);
|
||||
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
|
||||
|
||||
/// Trait to manage chunk cache for chunk deduplication.
|
||||
pub trait ChunkDict: Sync + Send + 'static {
|
||||
|
|
|
@ -19,6 +19,7 @@ use std::sync::{Arc, Mutex};
|
|||
use std::{fmt, fs};
|
||||
|
||||
use anyhow::{anyhow, Context, Error, Result};
|
||||
use nydus_utils::crc32;
|
||||
use nydus_utils::crypt::{self, Cipher, CipherContext};
|
||||
use sha2::{Digest, Sha256};
|
||||
use tar::{EntryType, Header};
|
||||
|
@ -45,6 +46,7 @@ use nydus_utils::digest::DigestData;
|
|||
use nydus_utils::{compress, digest, div_round_up, round_down, try_round_up_4k, BufReaderInfo};
|
||||
|
||||
use super::node::ChunkSource;
|
||||
use crate::attributes::Attributes;
|
||||
use crate::core::tree::TreeNode;
|
||||
use crate::{ChunkDict, Feature, Features, HashChunkDict, Prefetch, PrefetchPolicy, WhiteoutSpec};
|
||||
|
||||
|
@ -139,7 +141,7 @@ pub enum ArtifactStorage {
|
|||
// Won't rename user's specification
|
||||
SingleFile(PathBuf),
|
||||
// Will rename it from tmp file as user didn't specify a name.
|
||||
FileDir(PathBuf),
|
||||
FileDir((PathBuf, String)),
|
||||
}
|
||||
|
||||
impl ArtifactStorage {
|
||||
|
@ -147,7 +149,16 @@ impl ArtifactStorage {
|
|||
pub fn display(&self) -> Display {
|
||||
match self {
|
||||
ArtifactStorage::SingleFile(p) => p.display(),
|
||||
ArtifactStorage::FileDir(p) => p.display(),
|
||||
ArtifactStorage::FileDir(p) => p.0.display(),
|
||||
}
|
||||
}
|
||||
|
||||
pub fn add_suffix(&mut self, suffix: &str) {
|
||||
match self {
|
||||
ArtifactStorage::SingleFile(p) => {
|
||||
p.set_extension(suffix);
|
||||
}
|
||||
ArtifactStorage::FileDir(p) => p.1 = String::from(suffix),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -336,8 +347,8 @@ impl ArtifactWriter {
|
|||
ArtifactStorage::FileDir(ref p) => {
|
||||
// Better we can use open(2) O_TMPFILE, but for compatibility sake, we delay this job.
|
||||
// TODO: Blob dir existence?
|
||||
let tmp = TempFile::new_in(p)
|
||||
.with_context(|| format!("failed to create temp file in {}", p.display()))?;
|
||||
let tmp = TempFile::new_in(&p.0)
|
||||
.with_context(|| format!("failed to create temp file in {}", p.0.display()))?;
|
||||
let tmp2 = tmp.as_file().try_clone()?;
|
||||
let reader = OpenOptions::new()
|
||||
.read(true)
|
||||
|
@ -369,7 +380,10 @@ impl Artifact for ArtifactWriter {
|
|||
|
||||
if let Some(n) = name {
|
||||
if let ArtifactStorage::FileDir(s) = &self.storage {
|
||||
let path = Path::new(s).join(n);
|
||||
let mut path = Path::new(&s.0).join(n);
|
||||
if !s.1.is_empty() {
|
||||
path.set_extension(&s.1);
|
||||
}
|
||||
if !path.exists() {
|
||||
if let Some(tmp_file) = &self.tmp_file {
|
||||
rename(tmp_file.as_path(), &path).with_context(|| {
|
||||
|
@ -511,6 +525,9 @@ pub struct BlobContext {
|
|||
/// Cipher to encrypt the RAFS blobs.
|
||||
pub cipher_object: Arc<Cipher>,
|
||||
pub cipher_ctx: Option<CipherContext>,
|
||||
|
||||
/// Whether the blob is from external storage backend.
|
||||
pub external: bool,
|
||||
}
|
||||
|
||||
impl BlobContext {
|
||||
|
@ -525,6 +542,7 @@ impl BlobContext {
|
|||
cipher: crypt::Algorithm,
|
||||
cipher_object: Arc<Cipher>,
|
||||
cipher_ctx: Option<CipherContext>,
|
||||
external: bool,
|
||||
) -> Self {
|
||||
let blob_meta_info = if features.contains(BlobFeatures::CHUNK_INFO_V2) {
|
||||
BlobMetaChunkArray::new_v2()
|
||||
|
@ -561,6 +579,8 @@ impl BlobContext {
|
|||
entry_list: toc::TocEntryList::new(),
|
||||
cipher_object,
|
||||
cipher_ctx,
|
||||
|
||||
external,
|
||||
};
|
||||
|
||||
blob_ctx
|
||||
|
@ -602,6 +622,9 @@ impl BlobContext {
|
|||
blob_ctx
|
||||
.blob_meta_header
|
||||
.set_is_chunkdict_generated(features.contains(BlobFeatures::IS_CHUNKDICT_GENERATED));
|
||||
blob_ctx
|
||||
.blob_meta_header
|
||||
.set_external(features.contains(BlobFeatures::EXTERNAL));
|
||||
|
||||
blob_ctx
|
||||
}
|
||||
|
@ -701,6 +724,7 @@ impl BlobContext {
|
|||
cipher,
|
||||
cipher_object,
|
||||
cipher_ctx,
|
||||
false,
|
||||
);
|
||||
blob_ctx.blob_prefetch_size = blob.prefetch_size();
|
||||
blob_ctx.chunk_count = blob.chunk_count();
|
||||
|
@ -784,6 +808,10 @@ impl BlobContext {
|
|||
info.set_uncompressed_offset(chunk.uncompressed_offset());
|
||||
self.blob_meta_info.add_v2_info(info);
|
||||
} else {
|
||||
let mut data: u64 = 0;
|
||||
if chunk.has_crc32() {
|
||||
data = chunk.crc32() as u64;
|
||||
}
|
||||
self.blob_meta_info.add_v2(
|
||||
chunk.compressed_offset(),
|
||||
chunk.compressed_size(),
|
||||
|
@ -791,8 +819,9 @@ impl BlobContext {
|
|||
chunk.uncompressed_size(),
|
||||
chunk.is_compressed(),
|
||||
chunk.is_encrypted(),
|
||||
chunk.has_crc32(),
|
||||
chunk.is_batch(),
|
||||
0,
|
||||
data,
|
||||
);
|
||||
}
|
||||
self.blob_chunk_digest.push(chunk.id().data);
|
||||
|
@ -887,16 +916,19 @@ pub struct BlobManager {
|
|||
/// Used for chunk data de-duplication between layers (with `--parent-bootstrap`)
|
||||
/// or within layer (with `--inline-bootstrap`).
|
||||
pub(crate) layered_chunk_dict: HashChunkDict,
|
||||
// Whether the managed blobs is from external storage backend.
|
||||
pub external: bool,
|
||||
}
|
||||
|
||||
impl BlobManager {
|
||||
/// Create a new instance of [BlobManager].
|
||||
pub fn new(digester: digest::Algorithm) -> Self {
|
||||
pub fn new(digester: digest::Algorithm, external: bool) -> Self {
|
||||
Self {
|
||||
blobs: Vec::new(),
|
||||
current_blob_index: None,
|
||||
global_chunk_dict: Arc::new(()),
|
||||
layered_chunk_dict: HashChunkDict::new(digester),
|
||||
external,
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -905,7 +937,7 @@ impl BlobManager {
|
|||
self.current_blob_index = Some(index as u32)
|
||||
}
|
||||
|
||||
fn new_blob_ctx(ctx: &BuildContext) -> Result<BlobContext> {
|
||||
pub fn new_blob_ctx(&self, ctx: &BuildContext) -> Result<BlobContext> {
|
||||
let (cipher_object, cipher_ctx) = match ctx.cipher {
|
||||
crypt::Algorithm::None => (Default::default(), None),
|
||||
crypt::Algorithm::Aes128Xts => {
|
||||
|
@ -924,15 +956,22 @@ impl BlobManager {
|
|||
)))
|
||||
}
|
||||
};
|
||||
let mut blob_features = ctx.blob_features;
|
||||
let mut compressor = ctx.compressor;
|
||||
if self.external {
|
||||
blob_features.insert(BlobFeatures::EXTERNAL);
|
||||
compressor = compress::Algorithm::None;
|
||||
}
|
||||
let mut blob_ctx = BlobContext::new(
|
||||
ctx.blob_id.clone(),
|
||||
ctx.blob_offset,
|
||||
ctx.blob_features,
|
||||
ctx.compressor,
|
||||
blob_features,
|
||||
compressor,
|
||||
ctx.digester,
|
||||
ctx.cipher,
|
||||
Arc::new(cipher_object),
|
||||
cipher_ctx,
|
||||
self.external,
|
||||
);
|
||||
blob_ctx.set_chunk_size(ctx.chunk_size);
|
||||
blob_ctx.set_meta_info_enabled(
|
||||
|
@ -948,7 +987,7 @@ impl BlobManager {
|
|||
ctx: &BuildContext,
|
||||
) -> Result<(u32, &mut BlobContext)> {
|
||||
if self.current_blob_index.is_none() {
|
||||
let blob_ctx = Self::new_blob_ctx(ctx)?;
|
||||
let blob_ctx = self.new_blob_ctx(ctx)?;
|
||||
self.current_blob_index = Some(self.alloc_index()?);
|
||||
self.add_blob(blob_ctx);
|
||||
}
|
||||
|
@ -956,6 +995,21 @@ impl BlobManager {
|
|||
Ok(self.get_current_blob().unwrap())
|
||||
}
|
||||
|
||||
pub fn get_or_create_blob_by_idx(
|
||||
&mut self,
|
||||
ctx: &BuildContext,
|
||||
blob_idx: u32,
|
||||
) -> Result<(u32, &mut BlobContext)> {
|
||||
let blob_idx = blob_idx as usize;
|
||||
if blob_idx >= self.blobs.len() {
|
||||
for _ in self.blobs.len()..=blob_idx {
|
||||
let blob_ctx = self.new_blob_ctx(ctx)?;
|
||||
self.add_blob(blob_ctx);
|
||||
}
|
||||
}
|
||||
Ok((blob_idx as u32, &mut self.blobs[blob_idx as usize]))
|
||||
}
|
||||
|
||||
/// Get the current blob object.
|
||||
pub fn get_current_blob(&mut self) -> Option<(u32, &mut BlobContext)> {
|
||||
if let Some(idx) = self.current_blob_index {
|
||||
|
@ -971,8 +1025,9 @@ impl BlobManager {
|
|||
ctx: &BuildContext,
|
||||
id: &str,
|
||||
) -> Result<(u32, &mut BlobContext)> {
|
||||
let blob_mgr = Self::new(ctx.digester, false);
|
||||
if self.get_blob_idx_by_id(id).is_none() {
|
||||
let blob_ctx = Self::new_blob_ctx(ctx)?;
|
||||
let blob_ctx = blob_mgr.new_blob_ctx(ctx)?;
|
||||
self.current_blob_index = Some(self.alloc_index()?);
|
||||
self.add_blob(blob_ctx);
|
||||
} else {
|
||||
|
@ -1260,6 +1315,7 @@ impl BootstrapContext {
|
|||
}
|
||||
|
||||
/// BootstrapManager is used to hold the parent bootstrap reader and create new bootstrap context.
|
||||
#[derive(Clone)]
|
||||
pub struct BootstrapManager {
|
||||
pub(crate) f_parent_path: Option<PathBuf>,
|
||||
pub(crate) bootstrap_storage: Option<ArtifactStorage>,
|
||||
|
@ -1296,6 +1352,7 @@ pub struct BuildContext {
|
|||
pub digester: digest::Algorithm,
|
||||
/// Blob encryption algorithm flag.
|
||||
pub cipher: crypt::Algorithm,
|
||||
pub crc32_algorithm: crc32::Algorithm,
|
||||
/// Save host uid gid in each inode.
|
||||
pub explicit_uidgid: bool,
|
||||
/// whiteout spec: overlayfs or oci
|
||||
|
@ -1321,6 +1378,7 @@ pub struct BuildContext {
|
|||
|
||||
/// Storage writing blob to single file or a directory.
|
||||
pub blob_storage: Option<ArtifactStorage>,
|
||||
pub external_blob_storage: Option<ArtifactStorage>,
|
||||
pub blob_zran_generator: Option<Mutex<ZranContextGenerator<File>>>,
|
||||
pub blob_batch_generator: Option<Mutex<BatchContextGenerator>>,
|
||||
pub blob_tar_reader: Option<BufReaderInfo<File>>,
|
||||
|
@ -1334,6 +1392,8 @@ pub struct BuildContext {
|
|||
|
||||
/// Whether is chunkdict.
|
||||
pub is_chunkdict_generated: bool,
|
||||
/// Nydus attributes for different build behavior.
|
||||
pub attributes: Attributes,
|
||||
}
|
||||
|
||||
impl BuildContext {
|
||||
|
@ -1350,9 +1410,11 @@ impl BuildContext {
|
|||
source_path: PathBuf,
|
||||
prefetch: Prefetch,
|
||||
blob_storage: Option<ArtifactStorage>,
|
||||
external_blob_storage: Option<ArtifactStorage>,
|
||||
blob_inline_meta: bool,
|
||||
features: Features,
|
||||
encrypt: bool,
|
||||
attributes: Attributes,
|
||||
) -> Self {
|
||||
// It's a flag for images built with new nydus-image 2.2 and newer.
|
||||
let mut blob_features = BlobFeatures::CAP_TAR_TOC;
|
||||
|
@ -1373,6 +1435,8 @@ impl BuildContext {
|
|||
} else {
|
||||
crypt::Algorithm::None
|
||||
};
|
||||
|
||||
let crc32_algorithm = crc32::Algorithm::Crc32Iscsi;
|
||||
BuildContext {
|
||||
blob_id,
|
||||
aligned_chunk,
|
||||
|
@ -1380,6 +1444,7 @@ impl BuildContext {
|
|||
compressor,
|
||||
digester,
|
||||
cipher,
|
||||
crc32_algorithm,
|
||||
explicit_uidgid,
|
||||
whiteout_spec,
|
||||
|
||||
|
@ -1392,6 +1457,7 @@ impl BuildContext {
|
|||
|
||||
prefetch,
|
||||
blob_storage,
|
||||
external_blob_storage,
|
||||
blob_zran_generator: None,
|
||||
blob_batch_generator: None,
|
||||
blob_tar_reader: None,
|
||||
|
@ -1403,6 +1469,8 @@ impl BuildContext {
|
|||
configuration: Arc::new(ConfigV2::default()),
|
||||
blob_cache_generator: None,
|
||||
is_chunkdict_generated: false,
|
||||
|
||||
attributes,
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1436,6 +1504,7 @@ impl Default for BuildContext {
|
|||
compressor: compress::Algorithm::default(),
|
||||
digester: digest::Algorithm::default(),
|
||||
cipher: crypt::Algorithm::None,
|
||||
crc32_algorithm: crc32::Algorithm::default(),
|
||||
explicit_uidgid: true,
|
||||
whiteout_spec: WhiteoutSpec::default(),
|
||||
|
||||
|
@ -1448,6 +1517,7 @@ impl Default for BuildContext {
|
|||
|
||||
prefetch: Prefetch::default(),
|
||||
blob_storage: None,
|
||||
external_blob_storage: None,
|
||||
blob_zran_generator: None,
|
||||
blob_batch_generator: None,
|
||||
blob_tar_reader: None,
|
||||
|
@ -1458,6 +1528,8 @@ impl Default for BuildContext {
|
|||
configuration: Arc::new(ConfigV2::default()),
|
||||
blob_cache_generator: None,
|
||||
is_chunkdict_generated: false,
|
||||
|
||||
attributes: Attributes::default(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -1469,8 +1541,12 @@ pub struct BuildOutput {
|
|||
pub blobs: Vec<String>,
|
||||
/// The size of output blob in this build.
|
||||
pub blob_size: Option<u64>,
|
||||
/// External blob ids in the blob table of external bootstrap.
|
||||
pub external_blobs: Vec<String>,
|
||||
/// File path for the metadata blob.
|
||||
pub bootstrap_path: Option<String>,
|
||||
/// File path for the external metadata blob.
|
||||
pub external_bootstrap_path: Option<String>,
|
||||
}
|
||||
|
||||
impl fmt::Display for BuildOutput {
|
||||
|
@ -1485,7 +1561,17 @@ impl fmt::Display for BuildOutput {
|
|||
"data blob size: 0x{:x}",
|
||||
self.blob_size.unwrap_or_default()
|
||||
)?;
|
||||
if self.external_blobs.is_empty() {
|
||||
write!(f, "data blobs: {:?}", self.blobs)?;
|
||||
} else {
|
||||
writeln!(f, "data blobs: {:?}", self.blobs)?;
|
||||
writeln!(
|
||||
f,
|
||||
"external meta blob path: {}",
|
||||
self.external_bootstrap_path.as_deref().unwrap_or("<none>")
|
||||
)?;
|
||||
write!(f, "external data blobs: {:?}", self.external_blobs)?;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
@ -1494,20 +1580,28 @@ impl BuildOutput {
|
|||
/// Create a new instance of [BuildOutput].
|
||||
pub fn new(
|
||||
blob_mgr: &BlobManager,
|
||||
external_blob_mgr: Option<&BlobManager>,
|
||||
bootstrap_storage: &Option<ArtifactStorage>,
|
||||
external_bootstrap_storage: &Option<ArtifactStorage>,
|
||||
) -> Result<BuildOutput> {
|
||||
let blobs = blob_mgr.get_blob_ids();
|
||||
let blob_size = blob_mgr.get_last_blob().map(|b| b.compressed_blob_size);
|
||||
let bootstrap_path = if let Some(ArtifactStorage::SingleFile(p)) = bootstrap_storage {
|
||||
Some(p.display().to_string())
|
||||
} else {
|
||||
None
|
||||
};
|
||||
let bootstrap_path = bootstrap_storage
|
||||
.as_ref()
|
||||
.map(|stor| stor.display().to_string());
|
||||
let external_bootstrap_path = external_bootstrap_storage
|
||||
.as_ref()
|
||||
.map(|stor| stor.display().to_string());
|
||||
let external_blobs = external_blob_mgr
|
||||
.map(|mgr| mgr.get_blob_ids())
|
||||
.unwrap_or_default();
|
||||
|
||||
Ok(Self {
|
||||
blobs,
|
||||
external_blobs,
|
||||
blob_size,
|
||||
bootstrap_path,
|
||||
external_bootstrap_path,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
@ -1560,6 +1654,7 @@ mod tests {
|
|||
registry: None,
|
||||
http_proxy: None,
|
||||
}),
|
||||
external_backends: Vec::new(),
|
||||
id: "id".to_owned(),
|
||||
cache: None,
|
||||
rafs: None,
|
||||
|
|
|
@ -25,8 +25,9 @@ use nydus_rafs::metadata::{Inode, RafsVersion};
|
|||
use nydus_storage::device::BlobFeatures;
|
||||
use nydus_storage::meta::{BlobChunkInfoV2Ondisk, BlobMetaChunkInfo};
|
||||
use nydus_utils::digest::{DigestHasher, RafsDigest};
|
||||
use nydus_utils::{compress, crypt};
|
||||
use nydus_utils::{compress, crc32, crypt};
|
||||
use nydus_utils::{div_round_up, event_tracer, root_tracer, try_round_up_4k, ByteSize};
|
||||
use parse_size::parse_size;
|
||||
use sha2::digest::Digest;
|
||||
|
||||
use crate::{BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, Overlay};
|
||||
|
@ -275,6 +276,88 @@ impl Node {
|
|||
None
|
||||
};
|
||||
|
||||
if blob_mgr.external {
|
||||
let external_values = ctx.attributes.get_values(self.target()).unwrap();
|
||||
let external_blob_index = external_values
|
||||
.get("blob_index")
|
||||
.and_then(|v| v.parse::<u32>().ok())
|
||||
.ok_or_else(|| anyhow!("failed to parse blob_index"))?;
|
||||
let external_blob_id = external_values
|
||||
.get("blob_id")
|
||||
.ok_or_else(|| anyhow!("failed to parse blob_id"))?;
|
||||
let external_chunk_size = external_values
|
||||
.get("chunk_size")
|
||||
.and_then(|v| parse_size(v).ok())
|
||||
.ok_or_else(|| anyhow!("failed to parse chunk_size"))?;
|
||||
let mut external_compressed_offset = external_values
|
||||
.get("chunk_0_compressed_offset")
|
||||
.and_then(|v| v.parse::<u64>().ok())
|
||||
.ok_or_else(|| anyhow!("failed to parse chunk_0_compressed_offset"))?;
|
||||
let external_compressed_size = external_values
|
||||
.get("compressed_size")
|
||||
.and_then(|v| v.parse::<u64>().ok())
|
||||
.ok_or_else(|| anyhow!("failed to parse compressed_size"))?;
|
||||
let (_, external_blob_ctx) =
|
||||
blob_mgr.get_or_create_blob_by_idx(ctx, external_blob_index)?;
|
||||
external_blob_ctx.blob_id = external_blob_id.to_string();
|
||||
external_blob_ctx.compressed_blob_size = external_compressed_size;
|
||||
external_blob_ctx.uncompressed_blob_size = external_compressed_size;
|
||||
let chunk_count = self
|
||||
.chunk_count(external_chunk_size as u64)
|
||||
.with_context(|| {
|
||||
format!("failed to get chunk count for {}", self.path().display())
|
||||
})?;
|
||||
self.inode.set_child_count(chunk_count);
|
||||
|
||||
info!(
|
||||
"target {:?}, file_size {}, blob_index {}, blob_id {}, chunk_size {}, chunk_count {}",
|
||||
self.target(),
|
||||
self.inode.size(),
|
||||
external_blob_index,
|
||||
external_blob_id,
|
||||
external_chunk_size,
|
||||
chunk_count
|
||||
);
|
||||
for i in 0..self.inode.child_count() {
|
||||
let mut chunk = self.inode.create_chunk();
|
||||
let file_offset = i as u64 * external_chunk_size as u64;
|
||||
let compressed_size = if i == self.inode.child_count() - 1 {
|
||||
self.inode.size() - (external_chunk_size * i as u64)
|
||||
} else {
|
||||
external_chunk_size
|
||||
} as u32;
|
||||
chunk.set_blob_index(external_blob_index);
|
||||
chunk.set_index(external_blob_ctx.alloc_chunk_index()?);
|
||||
chunk.set_compressed_offset(external_compressed_offset);
|
||||
chunk.set_compressed_size(compressed_size);
|
||||
chunk.set_uncompressed_offset(external_compressed_offset);
|
||||
chunk.set_uncompressed_size(compressed_size);
|
||||
chunk.set_compressed(false);
|
||||
chunk.set_file_offset(file_offset);
|
||||
external_compressed_offset += compressed_size as u64;
|
||||
external_blob_ctx.chunk_size = external_chunk_size as u32;
|
||||
|
||||
if ctx.crc32_algorithm != crc32::Algorithm::None {
|
||||
self.set_external_chunk_crc32(ctx, &mut chunk, i)?
|
||||
}
|
||||
|
||||
if let Some(h) = inode_hasher.as_mut() {
|
||||
h.digest_update(chunk.id().as_ref());
|
||||
}
|
||||
|
||||
self.chunks.push(NodeChunk {
|
||||
source: ChunkSource::Build,
|
||||
inner: Arc::new(chunk),
|
||||
});
|
||||
}
|
||||
|
||||
if let Some(h) = inode_hasher {
|
||||
self.inode.set_digest(h.digest_finalize());
|
||||
}
|
||||
|
||||
return Ok(0);
|
||||
}
|
||||
|
||||
// `child_count` of regular file is reused as `chunk_count`.
|
||||
for i in 0..self.inode.child_count() {
|
||||
let chunk_size = ctx.chunk_size;
|
||||
|
@ -286,13 +369,14 @@ impl Node {
|
|||
};
|
||||
|
||||
let chunk_data = &mut data_buf[0..uncompressed_size as usize];
|
||||
let (mut chunk, mut chunk_info) = self.read_file_chunk(ctx, reader, chunk_data)?;
|
||||
let (mut chunk, mut chunk_info) =
|
||||
self.read_file_chunk(ctx, reader, chunk_data, blob_mgr.external)?;
|
||||
if let Some(h) = inode_hasher.as_mut() {
|
||||
h.digest_update(chunk.id().as_ref());
|
||||
}
|
||||
|
||||
// No need to perform chunk deduplication for tar-tarfs case.
|
||||
if ctx.conversion_type != ConversionType::TarToTarfs {
|
||||
// No need to perform chunk deduplication for tar-tarfs/external blob case.
|
||||
if ctx.conversion_type != ConversionType::TarToTarfs && !blob_mgr.external {
|
||||
chunk = match self.deduplicate_chunk(
|
||||
ctx,
|
||||
blob_mgr,
|
||||
|
@ -347,20 +431,43 @@ impl Node {
|
|||
Ok(blob_size)
|
||||
}
|
||||
|
||||
fn set_external_chunk_crc32(
|
||||
&self,
|
||||
ctx: &BuildContext,
|
||||
chunk: &mut ChunkWrapper,
|
||||
i: u32,
|
||||
) -> Result<()> {
|
||||
if let Some(crcs) = ctx.attributes.get_crcs(self.target()) {
|
||||
if (i as usize) >= crcs.len() {
|
||||
return Err(anyhow!(
|
||||
"invalid crc index {} for file {}",
|
||||
i,
|
||||
self.target().display()
|
||||
));
|
||||
}
|
||||
chunk.set_has_crc32(true);
|
||||
chunk.set_crc32(crcs[i as usize]);
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn read_file_chunk<R: Read>(
|
||||
&self,
|
||||
ctx: &BuildContext,
|
||||
reader: &mut R,
|
||||
buf: &mut [u8],
|
||||
external: bool,
|
||||
) -> Result<(ChunkWrapper, Option<BlobChunkInfoV2Ondisk>)> {
|
||||
let mut chunk = self.inode.create_chunk();
|
||||
let mut chunk_info = None;
|
||||
if let Some(ref zran) = ctx.blob_zran_generator {
|
||||
let mut zran = zran.lock().unwrap();
|
||||
zran.start_chunk(ctx.chunk_size as u64)?;
|
||||
if !external {
|
||||
reader
|
||||
.read_exact(buf)
|
||||
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
|
||||
}
|
||||
let info = zran.finish_chunk()?;
|
||||
chunk.set_compressed_offset(info.compressed_offset());
|
||||
chunk.set_compressed_size(info.compressed_size());
|
||||
|
@ -372,21 +479,27 @@ impl Node {
|
|||
chunk.set_compressed_offset(pos);
|
||||
chunk.set_compressed_size(buf.len() as u32);
|
||||
chunk.set_compressed(false);
|
||||
if !external {
|
||||
reader
|
||||
.read_exact(buf)
|
||||
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
|
||||
} else {
|
||||
}
|
||||
} else if !external {
|
||||
reader
|
||||
.read_exact(buf)
|
||||
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
|
||||
}
|
||||
|
||||
// For tar-tarfs case, no need to compute chunk id.
|
||||
if ctx.conversion_type != ConversionType::TarToTarfs {
|
||||
if ctx.conversion_type != ConversionType::TarToTarfs && !external {
|
||||
chunk.set_id(RafsDigest::from_buf(buf, ctx.digester));
|
||||
if ctx.crc32_algorithm != crc32::Algorithm::None {
|
||||
chunk.set_has_crc32(true);
|
||||
chunk.set_crc32(crc32::Crc32::new(ctx.crc32_algorithm).from_buf(buf));
|
||||
}
|
||||
}
|
||||
|
||||
if ctx.cipher != crypt::Algorithm::None {
|
||||
if ctx.cipher != crypt::Algorithm::None && !external {
|
||||
chunk.set_encrypted(true);
|
||||
}
|
||||
|
||||
|
@ -495,12 +608,12 @@ impl Node {
|
|||
}
|
||||
|
||||
pub fn write_chunk_data(
|
||||
ctx: &BuildContext,
|
||||
_ctx: &BuildContext,
|
||||
blob_ctx: &mut BlobContext,
|
||||
blob_writer: &mut dyn Artifact,
|
||||
chunk_data: &[u8],
|
||||
) -> Result<(u64, u32, bool)> {
|
||||
let (compressed, is_compressed) = compress::compress(chunk_data, ctx.compressor)
|
||||
let (compressed, is_compressed) = compress::compress(chunk_data, blob_ctx.blob_compressor)
|
||||
.with_context(|| "failed to compress node file".to_string())?;
|
||||
let encrypted = crypt::encrypt_with_context(
|
||||
&compressed,
|
||||
|
@ -510,10 +623,14 @@ impl Node {
|
|||
)?;
|
||||
let compressed_size = encrypted.len() as u32;
|
||||
let pre_compressed_offset = blob_ctx.current_compressed_offset;
|
||||
if !blob_ctx.external {
|
||||
// For the external blob, both compressor and encrypter should
|
||||
// be none, and we don't write data into blob file.
|
||||
blob_writer
|
||||
.write_all(&encrypted)
|
||||
.context("failed to write blob")?;
|
||||
blob_ctx.blob_hash.update(&encrypted);
|
||||
}
|
||||
blob_ctx.current_compressed_offset += compressed_size as u64;
|
||||
blob_ctx.compressed_blob_size += compressed_size as u64;
|
||||
|
||||
|
@ -588,6 +705,7 @@ impl Node {
|
|||
|
||||
// build node object from a filesystem object.
|
||||
impl Node {
|
||||
#[allow(clippy::too_many_arguments)]
|
||||
/// Create a new instance of [Node] from a filesystem object.
|
||||
pub fn from_fs_object(
|
||||
version: RafsVersion,
|
||||
|
@ -595,6 +713,7 @@ impl Node {
|
|||
path: PathBuf,
|
||||
overlay: Overlay,
|
||||
chunk_size: u32,
|
||||
file_size: u64,
|
||||
explicit_uidgid: bool,
|
||||
v6_force_extended_inode: bool,
|
||||
) -> Result<Node> {
|
||||
|
@ -627,7 +746,7 @@ impl Node {
|
|||
v6_dirents: Vec::new(),
|
||||
};
|
||||
|
||||
node.build_inode(chunk_size)
|
||||
node.build_inode(chunk_size, file_size)
|
||||
.context("failed to build Node from fs object")?;
|
||||
if version.is_v6() {
|
||||
node.v6_set_inode_compact();
|
||||
|
@ -667,7 +786,7 @@ impl Node {
|
|||
Ok(())
|
||||
}
|
||||
|
||||
fn build_inode_stat(&mut self) -> Result<()> {
|
||||
fn build_inode_stat(&mut self, file_size: u64) -> Result<()> {
|
||||
let meta = self
|
||||
.meta()
|
||||
.with_context(|| format!("failed to get metadata of {}", self.path().display()))?;
|
||||
|
@ -702,7 +821,13 @@ impl Node {
|
|||
// directory entries, so let's ignore the value provided by source filesystem and
|
||||
// calculate it later by ourself.
|
||||
if !self.is_dir() {
|
||||
// If the file size is not 0, and the meta size is 0, it means the file is an
|
||||
// external dummy file. We need to set the size to file_size.
|
||||
if file_size != 0 && meta.st_size() == 0 {
|
||||
self.inode.set_size(file_size);
|
||||
} else {
|
||||
self.inode.set_size(meta.st_size());
|
||||
}
|
||||
self.v5_set_inode_blocks();
|
||||
}
|
||||
self.info = Arc::new(info);
|
||||
|
@ -710,7 +835,7 @@ impl Node {
|
|||
Ok(())
|
||||
}
|
||||
|
||||
fn build_inode(&mut self, chunk_size: u32) -> Result<()> {
|
||||
fn build_inode(&mut self, chunk_size: u32, file_size: u64) -> Result<()> {
|
||||
let size = self.name().byte_size();
|
||||
if size > u16::MAX as usize {
|
||||
bail!("file name length 0x{:x} is too big", size,);
|
||||
|
@ -720,7 +845,7 @@ impl Node {
|
|||
// NOTE: Always retrieve xattr before attr so that we can know the size of xattr pairs.
|
||||
self.build_inode_xattr()
|
||||
.with_context(|| format!("failed to get xattr for {}", self.path().display()))?;
|
||||
self.build_inode_stat()
|
||||
self.build_inode_stat(file_size)
|
||||
.with_context(|| format!("failed to build inode {}", self.path().display()))?;
|
||||
|
||||
if self.is_reg() {
|
||||
|
@ -895,12 +1020,12 @@ impl Node {
|
|||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use std::io::BufReader;
|
||||
use std::{collections::HashMap, io::BufReader};
|
||||
|
||||
use nydus_utils::{digest, BufReaderInfo};
|
||||
use vmm_sys_util::tempfile::TempFile;
|
||||
|
||||
use crate::{ArtifactWriter, BlobCacheGenerator, HashChunkDict};
|
||||
use crate::{attributes::Attributes, ArtifactWriter, BlobCacheGenerator, HashChunkDict};
|
||||
|
||||
use super::*;
|
||||
|
||||
|
@ -972,7 +1097,7 @@ mod tests {
|
|||
.unwrap(),
|
||||
);
|
||||
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
|
||||
let mut chunk_dict = HashChunkDict::new(digest::Algorithm::Sha256);
|
||||
let mut chunk_wrapper = ChunkWrapper::new(RafsVersion::V5);
|
||||
chunk_wrapper.set_id(RafsDigest {
|
||||
|
@ -1108,4 +1233,43 @@ mod tests {
|
|||
node.remove_xattr(OsStr::new("system.posix_acl_default.key"));
|
||||
assert!(!node.inode.has_xattr());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_set_external_chunk_crc32() {
|
||||
let mut ctx = BuildContext {
|
||||
crc32_algorithm: crc32::Algorithm::Crc32Iscsi,
|
||||
attributes: Attributes {
|
||||
crcs: HashMap::new(),
|
||||
..Default::default()
|
||||
},
|
||||
..Default::default()
|
||||
};
|
||||
let target = PathBuf::from("/test_file");
|
||||
ctx.attributes
|
||||
.crcs
|
||||
.insert(target.clone(), vec![0x12345678, 0x87654321]);
|
||||
|
||||
let node = Node::new(
|
||||
InodeWrapper::new(RafsVersion::V5),
|
||||
NodeInfo {
|
||||
path: target.clone(),
|
||||
target: target.clone(),
|
||||
..Default::default()
|
||||
},
|
||||
1,
|
||||
);
|
||||
|
||||
let mut chunk = node.inode.create_chunk();
|
||||
print!("target: {}", node.target().display());
|
||||
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 1);
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(chunk.crc32(), 0x87654321);
|
||||
assert!(chunk.has_crc32());
|
||||
|
||||
// test invalid crc index
|
||||
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 2);
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err().to_string();
|
||||
assert!(err.contains("invalid crc index 2 for file /test_file"));
|
||||
}
|
||||
}
|
||||
|
|
|
@ -423,6 +423,7 @@ mod tests {
|
|||
tmpfile.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)
|
||||
|
@ -439,6 +440,7 @@ mod tests {
|
|||
tmpfile.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)
|
||||
|
@ -458,6 +460,7 @@ mod tests {
|
|||
tmpfile.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)
|
||||
|
@ -471,6 +474,7 @@ mod tests {
|
|||
tmpfile2.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)
|
||||
|
@ -485,6 +489,7 @@ mod tests {
|
|||
tmpfile3.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
true,
|
||||
false,
|
||||
)
|
||||
|
|
|
@ -21,7 +21,7 @@ use nydus_rafs::metadata::layout::v6::{
|
|||
};
|
||||
use nydus_rafs::metadata::RafsStore;
|
||||
use nydus_rafs::RafsIoWrite;
|
||||
use nydus_storage::device::BlobFeatures;
|
||||
use nydus_storage::device::{BlobFeatures, BlobInfo};
|
||||
use nydus_utils::{root_tracer, round_down, round_up, timing_tracer};
|
||||
|
||||
use super::chunk_dict::DigestWithBlobIndex;
|
||||
|
@ -41,6 +41,7 @@ impl Node {
|
|||
orig_meta_addr: u64,
|
||||
meta_addr: u64,
|
||||
chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
|
||||
blobs: &[Arc<BlobInfo>],
|
||||
) -> Result<()> {
|
||||
let xattr_inline_count = self.info.xattrs.count_v6();
|
||||
ensure!(
|
||||
|
@ -70,7 +71,7 @@ impl Node {
|
|||
if self.is_dir() {
|
||||
self.v6_dump_dir(ctx, f_bootstrap, meta_addr, meta_offset, &mut inode)?;
|
||||
} else if self.is_reg() {
|
||||
self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode)?;
|
||||
self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode, &blobs)?;
|
||||
} else if self.is_symlink() {
|
||||
self.v6_dump_symlink(ctx, f_bootstrap, &mut inode)?;
|
||||
} else {
|
||||
|
@ -452,6 +453,7 @@ impl Node {
|
|||
f_bootstrap: &mut dyn RafsIoWrite,
|
||||
chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
|
||||
inode: &mut Box<dyn RafsV6OndiskInode>,
|
||||
blobs: &[Arc<BlobInfo>],
|
||||
) -> Result<()> {
|
||||
let mut is_continuous = true;
|
||||
let mut prev = None;
|
||||
|
@ -473,8 +475,15 @@ impl Node {
|
|||
v6_chunk.set_block_addr(blk_addr);
|
||||
|
||||
chunks.extend(v6_chunk.as_ref());
|
||||
let external =
|
||||
blobs[chunk.inner.blob_index() as usize].has_feature(BlobFeatures::EXTERNAL);
|
||||
let chunk_index = if external {
|
||||
Some(chunk.inner.index())
|
||||
} else {
|
||||
None
|
||||
};
|
||||
chunk_cache.insert(
|
||||
DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1),
|
||||
DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1, chunk_index),
|
||||
chunk.inner.clone(),
|
||||
);
|
||||
if let Some((prev_idx, prev_pos)) = prev {
|
||||
|
@ -709,6 +718,7 @@ impl Bootstrap {
|
|||
orig_meta_addr,
|
||||
meta_addr,
|
||||
&mut chunk_cache,
|
||||
&blobs,
|
||||
)
|
||||
})
|
||||
},
|
||||
|
@ -910,6 +920,7 @@ mod tests {
|
|||
pa_aa.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
|
@ -937,6 +948,7 @@ mod tests {
|
|||
pa.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
|
@ -1033,6 +1045,7 @@ mod tests {
|
|||
pa_reg.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
|
@ -1046,6 +1059,7 @@ mod tests {
|
|||
pa_pyc.as_path().to_path_buf(),
|
||||
Overlay::UpperAddition,
|
||||
RAFS_DEFAULT_CHUNK_SIZE as u32,
|
||||
0,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
|
|
|
@ -5,14 +5,15 @@
|
|||
use std::fs;
|
||||
use std::fs::DirEntry;
|
||||
|
||||
use anyhow::{Context, Result};
|
||||
use anyhow::{anyhow, Context, Result};
|
||||
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
|
||||
|
||||
use crate::core::context::{Artifact, NoopArtifactWriter};
|
||||
use crate::core::prefetch;
|
||||
|
||||
use super::core::blob::Blob;
|
||||
use super::core::context::{
|
||||
ArtifactWriter, BlobManager, BootstrapContext, BootstrapManager, BuildContext, BuildOutput,
|
||||
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
|
||||
};
|
||||
use super::core::node::Node;
|
||||
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
|
||||
|
@ -29,14 +30,14 @@ impl FilesystemTreeBuilder {
|
|||
fn load_children(
|
||||
&self,
|
||||
ctx: &mut BuildContext,
|
||||
bootstrap_ctx: &mut BootstrapContext,
|
||||
parent: &TreeNode,
|
||||
layer_idx: u16,
|
||||
) -> Result<Vec<Tree>> {
|
||||
let mut result = Vec::new();
|
||||
) -> Result<(Vec<Tree>, Vec<Tree>)> {
|
||||
let mut trees = Vec::new();
|
||||
let mut external_trees = Vec::new();
|
||||
let parent = parent.borrow();
|
||||
if !parent.is_dir() {
|
||||
return Ok(result);
|
||||
return Ok((trees.clone(), external_trees));
|
||||
}
|
||||
|
||||
let children = fs::read_dir(parent.path())
|
||||
|
@ -46,12 +47,26 @@ impl FilesystemTreeBuilder {
|
|||
event_tracer!("load_from_directory", +children.len());
|
||||
for child in children {
|
||||
let path = child.path();
|
||||
let target = Node::generate_target(&path, &ctx.source_path);
|
||||
let mut file_size: u64 = 0;
|
||||
if ctx.attributes.is_external(&target) {
|
||||
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
|
||||
file_size = value.parse::<u64>().ok().ok_or_else(|| {
|
||||
anyhow!(
|
||||
"failed to parse file_size for external file {}",
|
||||
&target.display()
|
||||
)
|
||||
})?;
|
||||
}
|
||||
}
|
||||
|
||||
let mut child = Node::from_fs_object(
|
||||
ctx.fs_version,
|
||||
ctx.source_path.clone(),
|
||||
path.clone(),
|
||||
Overlay::UpperAddition,
|
||||
ctx.chunk_size,
|
||||
file_size,
|
||||
parent.info.explicit_uidgid,
|
||||
true,
|
||||
)
|
||||
|
@ -60,24 +75,41 @@ impl FilesystemTreeBuilder {
|
|||
|
||||
// as per OCI spec, whiteout file should not be present within final image
|
||||
// or filesystem, only existed in layers.
|
||||
if !bootstrap_ctx.layered
|
||||
if layer_idx == 0
|
||||
&& child.whiteout_type(ctx.whiteout_spec).is_some()
|
||||
&& !child.is_overlayfs_opaque(ctx.whiteout_spec)
|
||||
{
|
||||
continue;
|
||||
}
|
||||
|
||||
let mut child = Tree::new(child);
|
||||
child.children = self.load_children(ctx, bootstrap_ctx, &child.node, layer_idx)?;
|
||||
let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
|
||||
let (child_children, external_children) =
|
||||
self.load_children(ctx, &child.node, layer_idx)?;
|
||||
child.children = child_children;
|
||||
external_child.children = external_children;
|
||||
child
|
||||
.borrow_mut_node()
|
||||
.v5_set_dir_size(ctx.fs_version, &child.children);
|
||||
result.push(child);
|
||||
external_child
|
||||
.borrow_mut_node()
|
||||
.v5_set_dir_size(ctx.fs_version, &external_child.children);
|
||||
|
||||
if ctx.attributes.is_external(&target) {
|
||||
external_trees.push(external_child);
|
||||
} else {
|
||||
// TODO: need to implement type=ignore for nydus attributes,
|
||||
// let's ignore the tree for workaround.
|
||||
trees.push(child.clone());
|
||||
if ctx.attributes.is_prefix_external(target) {
|
||||
external_trees.push(external_child);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
result.sort_unstable_by(|a, b| a.name().cmp(b.name()));
|
||||
trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
|
||||
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
|
||||
|
||||
Ok(result)
|
||||
Ok((trees, external_trees))
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -90,57 +122,46 @@ impl DirectoryBuilder {
|
|||
}
|
||||
|
||||
/// Build node tree from a filesystem directory
|
||||
fn build_tree(
|
||||
&mut self,
|
||||
ctx: &mut BuildContext,
|
||||
bootstrap_ctx: &mut BootstrapContext,
|
||||
layer_idx: u16,
|
||||
) -> Result<Tree> {
|
||||
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
|
||||
let node = Node::from_fs_object(
|
||||
ctx.fs_version,
|
||||
ctx.source_path.clone(),
|
||||
ctx.source_path.clone(),
|
||||
Overlay::UpperAddition,
|
||||
ctx.chunk_size,
|
||||
0,
|
||||
ctx.explicit_uidgid,
|
||||
true,
|
||||
)?;
|
||||
let mut tree = Tree::new(node);
|
||||
let mut tree = Tree::new(node.clone());
|
||||
let mut external_tree = Tree::new(node);
|
||||
let tree_builder = FilesystemTreeBuilder::new();
|
||||
|
||||
tree.children = timing_tracer!(
|
||||
{ tree_builder.load_children(ctx, bootstrap_ctx, &tree.node, layer_idx) },
|
||||
let (tree_children, external_tree_children) = timing_tracer!(
|
||||
{ tree_builder.load_children(ctx, &tree.node, layer_idx) },
|
||||
"load_from_directory"
|
||||
)?;
|
||||
tree.children = tree_children;
|
||||
external_tree.children = external_tree_children;
|
||||
tree.borrow_mut_node()
|
||||
.v5_set_dir_size(ctx.fs_version, &tree.children);
|
||||
external_tree
|
||||
.borrow_mut_node()
|
||||
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
|
||||
|
||||
Ok(tree)
|
||||
}
|
||||
Ok((tree, external_tree))
|
||||
}
|
||||
|
||||
impl Builder for DirectoryBuilder {
|
||||
fn build(
|
||||
fn one_build(
|
||||
&mut self,
|
||||
ctx: &mut BuildContext,
|
||||
bootstrap_mgr: &mut BootstrapManager,
|
||||
blob_mgr: &mut BlobManager,
|
||||
blob_writer: &mut Box<dyn Artifact>,
|
||||
tree: Tree,
|
||||
) -> Result<BuildOutput> {
|
||||
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
|
||||
let layer_idx = u16::from(bootstrap_ctx.layered);
|
||||
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
|
||||
Box::new(ArtifactWriter::new(blob_stor)?)
|
||||
} else {
|
||||
Box::<NoopArtifactWriter>::default()
|
||||
};
|
||||
|
||||
// Scan source directory to build upper layer tree.
|
||||
let tree = timing_tracer!(
|
||||
{ self.build_tree(ctx, &mut bootstrap_ctx, layer_idx) },
|
||||
"build_tree"
|
||||
)?;
|
||||
|
||||
// Build bootstrap
|
||||
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
|
||||
let mut bootstrap = timing_tracer!(
|
||||
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
|
||||
"build_bootstrap"
|
||||
|
@ -192,6 +213,55 @@ impl Builder for DirectoryBuilder {
|
|||
|
||||
lazy_drop(bootstrap_ctx);
|
||||
|
||||
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
|
||||
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
|
||||
}
|
||||
}
|
||||
|
||||
impl Builder for DirectoryBuilder {
|
||||
fn build(
|
||||
&mut self,
|
||||
ctx: &mut BuildContext,
|
||||
bootstrap_mgr: &mut BootstrapManager,
|
||||
blob_mgr: &mut BlobManager,
|
||||
) -> Result<BuildOutput> {
|
||||
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
|
||||
|
||||
// Scan source directory to build upper layer tree.
|
||||
let (tree, external_tree) =
|
||||
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
|
||||
|
||||
// Build for tree
|
||||
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
|
||||
Box::new(ArtifactWriter::new(blob_stor)?)
|
||||
} else {
|
||||
Box::<NoopArtifactWriter>::default()
|
||||
};
|
||||
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
|
||||
|
||||
// Build for external tree
|
||||
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
|
||||
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
|
||||
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
|
||||
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
|
||||
stor.add_suffix("external")
|
||||
}
|
||||
|
||||
let mut external_blob_writer: Box<dyn Artifact> =
|
||||
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
|
||||
Box::new(ArtifactWriter::new(blob_stor)?)
|
||||
} else {
|
||||
Box::<NoopArtifactWriter>::default()
|
||||
};
|
||||
let external_output = self.one_build(
|
||||
ctx,
|
||||
&mut external_bootstrap_mgr,
|
||||
&mut external_blob_mgr,
|
||||
&mut external_blob_writer,
|
||||
external_tree,
|
||||
)?;
|
||||
output.external_bootstrap_path = external_output.bootstrap_path;
|
||||
output.external_blobs = external_output.blobs;
|
||||
|
||||
Ok(output)
|
||||
}
|
||||
}
|
||||
|
|
|
@ -46,6 +46,7 @@ pub use self::optimize_prefetch::OptimizePrefetch;
|
|||
pub use self::stargz::StargzBuilder;
|
||||
pub use self::tarball::TarballBuilder;
|
||||
|
||||
pub mod attributes;
|
||||
mod chunkdict_generator;
|
||||
mod compact;
|
||||
mod core;
|
||||
|
@ -120,9 +121,14 @@ fn dump_bootstrap(
|
|||
if ctx.blob_inline_meta {
|
||||
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
|
||||
// Ensure the blob object is created in case of no chunks generated for the blob.
|
||||
let blob_ctx = if blob_mgr.external {
|
||||
&mut blob_mgr.new_blob_ctx(ctx)?
|
||||
} else {
|
||||
let (_, blob_ctx) = blob_mgr
|
||||
.get_or_create_current_blob(ctx)
|
||||
.map_err(|_e| anyhow!("failed to get current blob object"))?;
|
||||
blob_ctx
|
||||
};
|
||||
let bootstrap_offset = blob_writer.pos()?;
|
||||
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
|
||||
let uncompressed_size = uncompressed_bootstrap.len();
|
||||
|
|
|
@ -129,7 +129,7 @@ impl Merger {
|
|||
}
|
||||
|
||||
let mut tree: Option<Tree> = None;
|
||||
let mut blob_mgr = BlobManager::new(ctx.digester);
|
||||
let mut blob_mgr = BlobManager::new(ctx.digester, false);
|
||||
let mut blob_idx_map = HashMap::new();
|
||||
let mut parent_layers = 0;
|
||||
|
||||
|
@ -308,7 +308,7 @@ impl Merger {
|
|||
// referenced blobs, as the upper tree might have deleted some files
|
||||
// or directories by opaques, and some blobs are dereferenced.
|
||||
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
|
||||
let mut used_blob_mgr = BlobManager::new(ctx.digester);
|
||||
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
|
||||
let origin_blobs = blob_mgr.get_blobs();
|
||||
tree.walk_bfs(true, &mut |n| {
|
||||
let mut node = n.borrow_mut_node();
|
||||
|
@ -337,7 +337,7 @@ impl Merger {
|
|||
bootstrap
|
||||
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
|
||||
.context(format!("dump bootstrap to {:?}", target.display()))?;
|
||||
BuildOutput::new(&used_blob_mgr, &bootstrap_storage)
|
||||
BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -54,9 +54,10 @@ impl PrefetchBlobState {
|
|||
blob_info.set_separated_with_prefetch_files_feature(true);
|
||||
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
|
||||
blob_ctx.blob_meta_info_enabled = true;
|
||||
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir(
|
||||
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
|
||||
blobs_dir_path.to_path_buf(),
|
||||
))
|
||||
String::new(),
|
||||
)))
|
||||
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
|
||||
Ok(Self {
|
||||
blob_info,
|
||||
|
@ -100,7 +101,7 @@ impl OptimizePrefetch {
|
|||
debug!("prefetch blob id: {}", ctx.blob_id);
|
||||
|
||||
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
|
||||
BuildOutput::new(&blob_mgr, &bootstrap_mgr.bootstrap_storage)
|
||||
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
|
||||
}
|
||||
|
||||
fn build_dump_bootstrap(
|
||||
|
@ -142,7 +143,7 @@ impl OptimizePrefetch {
|
|||
}
|
||||
}
|
||||
|
||||
let mut blob_mgr = BlobManager::new(ctx.digester);
|
||||
let mut blob_mgr = BlobManager::new(ctx.digester, false);
|
||||
blob_mgr.add_blob(blob_state.blob_ctx.clone());
|
||||
blob_mgr.set_current_blob_index(0);
|
||||
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
|
||||
|
|
|
@ -456,7 +456,7 @@ impl StargzBuilder {
|
|||
uncompressed_offset: self.uncompressed_offset,
|
||||
file_offset: entry.chunk_offset as u64,
|
||||
index: 0,
|
||||
reserved: 0,
|
||||
crc32: 0,
|
||||
});
|
||||
let chunk = NodeChunk {
|
||||
source: ChunkSource::Build,
|
||||
|
@ -904,14 +904,16 @@ impl Builder for StargzBuilder {
|
|||
|
||||
lazy_drop(bootstrap_ctx);
|
||||
|
||||
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
|
||||
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::{ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec};
|
||||
use crate::{
|
||||
attributes::Attributes, ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec,
|
||||
};
|
||||
|
||||
#[test]
|
||||
fn test_build_stargz_toc() {
|
||||
|
@ -932,16 +934,20 @@ mod tests {
|
|||
ConversionType::EStargzIndexToRef,
|
||||
source_path,
|
||||
prefetch,
|
||||
Some(ArtifactStorage::FileDir(tmp_dir.clone())),
|
||||
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
|
||||
None,
|
||||
false,
|
||||
Features::new(),
|
||||
false,
|
||||
Attributes::default(),
|
||||
);
|
||||
ctx.fs_version = RafsVersion::V6;
|
||||
ctx.conversion_type = ConversionType::EStargzToRafs;
|
||||
let mut bootstrap_mgr =
|
||||
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir.clone())), None);
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
|
||||
let mut bootstrap_mgr = BootstrapManager::new(
|
||||
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
|
||||
None,
|
||||
);
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
|
||||
let mut builder = StargzBuilder::new(0x1000000, &ctx);
|
||||
|
||||
let builder = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr);
|
||||
|
|
|
@ -659,13 +659,14 @@ impl Builder for TarballBuilder {
|
|||
|
||||
lazy_drop(bootstrap_ctx);
|
||||
|
||||
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
|
||||
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::attributes::Attributes;
|
||||
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
|
||||
use nydus_utils::{compress, digest};
|
||||
|
||||
|
@ -687,14 +688,18 @@ mod tests {
|
|||
ConversionType::TarToTarfs,
|
||||
source_path,
|
||||
prefetch,
|
||||
Some(ArtifactStorage::FileDir(tmp_dir.clone())),
|
||||
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
|
||||
None,
|
||||
false,
|
||||
Features::new(),
|
||||
false,
|
||||
Attributes::default(),
|
||||
);
|
||||
let mut bootstrap_mgr =
|
||||
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
|
||||
let mut bootstrap_mgr = BootstrapManager::new(
|
||||
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
|
||||
None,
|
||||
);
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
|
||||
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
|
||||
builder
|
||||
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
|
||||
|
@ -719,14 +724,18 @@ mod tests {
|
|||
ConversionType::TarToTarfs,
|
||||
source_path,
|
||||
prefetch,
|
||||
Some(ArtifactStorage::FileDir(tmp_dir.clone())),
|
||||
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
|
||||
None,
|
||||
false,
|
||||
Features::new(),
|
||||
true,
|
||||
Attributes::default(),
|
||||
);
|
||||
let mut bootstrap_mgr =
|
||||
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
|
||||
let mut bootstrap_mgr = BootstrapManager::new(
|
||||
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
|
||||
None,
|
||||
);
|
||||
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
|
||||
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
|
||||
builder
|
||||
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
|
||||
|
|
|
@ -16,9 +16,9 @@ crate-type = ["cdylib", "staticlib"]
|
|||
libc = "0.2.137"
|
||||
log = "0.4.17"
|
||||
fuse-backend-rs = "^0.12.0"
|
||||
nydus-api = { version = "0.3", path = "../api" }
|
||||
nydus-rafs = { version = "0.3.1", path = "../rafs" }
|
||||
nydus-storage = { version = "0.6.3", path = "../storage" }
|
||||
nydus-api = { version = "0.4.0", path = "../api" }
|
||||
nydus-rafs = { version = "0.4.0", path = "../rafs" }
|
||||
nydus-storage = { version = "0.7.0", path = "../storage" }
|
||||
|
||||
[features]
|
||||
baekend-s3 = ["nydus-storage/backend-s3"]
|
||||
|
|
|
@ -0,0 +1,8 @@
|
|||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
// This is a dummy program, to workaround the goreleaser can't pre build the binary.
|
||||
func main() {
|
||||
fmt.Println("Hello, World!")
|
||||
}
|
|
@ -8,7 +8,7 @@ import (
|
|||
"syscall"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/urfave/cli/v2"
|
||||
cli "github.com/urfave/cli/v2"
|
||||
"golang.org/x/sys/unix"
|
||||
)
|
||||
|
||||
|
|
|
@ -193,22 +193,7 @@ func main() {
|
|||
}
|
||||
|
||||
// global options
|
||||
app.Flags = []cli.Flag{
|
||||
&cli.BoolFlag{
|
||||
Name: "debug",
|
||||
Aliases: []string{"D"},
|
||||
Required: false,
|
||||
Value: false,
|
||||
Usage: "Enable debug log level, overwrites the 'log-level' option",
|
||||
EnvVars: []string{"DEBUG_LOG_LEVEL"}},
|
||||
&cli.StringFlag{
|
||||
Name: "log-level",
|
||||
Aliases: []string{"l"},
|
||||
Value: "info",
|
||||
Usage: "Set log level (panic, fatal, error, warn, info, debug, trace)",
|
||||
EnvVars: []string{"LOG_LEVEL"},
|
||||
},
|
||||
}
|
||||
app.Flags = getGlobalFlags()
|
||||
|
||||
app.Commands = []*cli.Command{
|
||||
{
|
||||
|
@ -227,6 +212,18 @@ func main() {
|
|||
Usage: "Target (Nydus) image reference",
|
||||
EnvVars: []string{"TARGET"},
|
||||
},
|
||||
&cli.StringFlag{
|
||||
Name: "source-backend-type",
|
||||
Value: "",
|
||||
Usage: "Type of storage backend, possible values: 'oss', 's3'",
|
||||
EnvVars: []string{"BACKEND_TYPE"},
|
||||
},
|
||||
&cli.StringFlag{
|
||||
Name: "source-backend-config",
|
||||
Value: "",
|
||||
Usage: "Json configuration string for storage backend",
|
||||
EnvVars: []string{"BACKEND_CONFIG"},
|
||||
},
|
||||
&cli.StringFlag{
|
||||
Name: "target-suffix",
|
||||
Required: false,
|
||||
|
@ -401,7 +398,7 @@ func main() {
|
|||
&cli.StringFlag{
|
||||
Name: "fs-chunk-size",
|
||||
Value: "0x100000",
|
||||
Usage: "size of nydus image data chunk, must be power of two and between 0x1000-0x100000, [default: 0x100000]",
|
||||
Usage: "size of nydus image data chunk, must be power of two and between 0x1000-0x10000000, [default: 0x4000000]",
|
||||
EnvVars: []string{"FS_CHUNK_SIZE"},
|
||||
Aliases: []string{"chunk-size"},
|
||||
},
|
||||
|
@ -429,6 +426,24 @@ func main() {
|
|||
Usage: "File path to save the metrics collected during conversion in JSON format, for example: './output.json'",
|
||||
EnvVars: []string{"OUTPUT_JSON"},
|
||||
},
|
||||
&cli.BoolFlag{
|
||||
Name: "plain-http",
|
||||
Value: false,
|
||||
Usage: "Enable plain http for Nydus image push",
|
||||
EnvVars: []string{"PLAIN_HTTP"},
|
||||
},
|
||||
&cli.IntFlag{
|
||||
Name: "push-retry-count",
|
||||
Value: 3,
|
||||
Usage: "Number of retries when pushing to registry fails",
|
||||
EnvVars: []string{"PUSH_RETRY_COUNT"},
|
||||
},
|
||||
&cli.StringFlag{
|
||||
Name: "push-retry-delay",
|
||||
Value: "5s",
|
||||
Usage: "Delay between push retries (e.g. 5s, 1m, 1h)",
|
||||
EnvVars: []string{"PUSH_RETRY_DELAY"},
|
||||
},
|
||||
},
|
||||
Action: func(c *cli.Context) error {
|
||||
setupLogLevel(c)
|
||||
|
@ -494,6 +509,8 @@ func main() {
|
|||
WorkDir: c.String("work-dir"),
|
||||
NydusImagePath: c.String("nydus-image"),
|
||||
|
||||
SourceBackendType: c.String("source-backend-type"),
|
||||
SourceBackendConfig: c.String("source-backend-config"),
|
||||
Source: c.String("source"),
|
||||
Target: targetRef,
|
||||
SourceInsecure: c.Bool("source-insecure"),
|
||||
|
@ -526,6 +543,9 @@ func main() {
|
|||
Platforms: c.String("platform"),
|
||||
|
||||
OutputJSON: c.String("output-json"),
|
||||
WithPlainHTTP: c.Bool("plain-http"),
|
||||
PushRetryCount: c.Int("push-retry-count"),
|
||||
PushRetryDelay: c.String("push-retry-delay"),
|
||||
}
|
||||
|
||||
return converter.Convert(context.Background(), opt)
|
||||
|
@ -1418,4 +1438,39 @@ func setupLogLevel(c *cli.Context) {
|
|||
}
|
||||
|
||||
logrus.SetLevel(logLevel)
|
||||
|
||||
if c.String("log-file") != "" {
|
||||
f, err := os.OpenFile(c.String("log-file"), os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
|
||||
if err != nil {
|
||||
logrus.Errorf("failed to open log file: %+v", err)
|
||||
return
|
||||
}
|
||||
logrus.SetOutput(f)
|
||||
}
|
||||
}
|
||||
|
||||
func getGlobalFlags() []cli.Flag {
|
||||
return []cli.Flag{
|
||||
&cli.BoolFlag{
|
||||
Name: "debug",
|
||||
Aliases: []string{"D"},
|
||||
Required: false,
|
||||
Value: false,
|
||||
Usage: "Enable debug log level, overwrites the 'log-level' option",
|
||||
EnvVars: []string{"DEBUG_LOG_LEVEL"},
|
||||
},
|
||||
&cli.StringFlag{
|
||||
Name: "log-level",
|
||||
Aliases: []string{"l"},
|
||||
Value: "info",
|
||||
Usage: "Set log level (panic, fatal, error, warn, info, debug, trace)",
|
||||
EnvVars: []string{"LOG_LEVEL"},
|
||||
},
|
||||
&cli.StringFlag{
|
||||
Name: "log-file",
|
||||
Required: false,
|
||||
Usage: "Write logs to a file",
|
||||
EnvVars: []string{"LOG_FILE"},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
|
|
@ -12,6 +12,9 @@ import (
|
|||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/agiledragon/gomonkey/v2"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"github.com/urfave/cli/v2"
|
||||
)
|
||||
|
@ -341,3 +344,50 @@ func TestGetPrefetchPatterns(t *testing.T) {
|
|||
require.NoError(t, err)
|
||||
require.Equal(t, "/", patterns)
|
||||
}
|
||||
|
||||
func TestGetGlobalFlags(t *testing.T) {
|
||||
flags := getGlobalFlags()
|
||||
require.Equal(t, 3, len(flags))
|
||||
}
|
||||
|
||||
func TestSetupLogLevelWithLogFile(t *testing.T) {
|
||||
logFilePath := "test_log_file.log"
|
||||
defer os.Remove(logFilePath)
|
||||
|
||||
c := &cli.Context{}
|
||||
|
||||
patches := gomonkey.ApplyMethodSeq(c, "String", []gomonkey.OutputCell{
|
||||
{Values: []interface{}{"info"}, Times: 1},
|
||||
{Values: []interface{}{"test_log_file.log"}, Times: 2},
|
||||
})
|
||||
defer patches.Reset()
|
||||
setupLogLevel(c)
|
||||
|
||||
file, err := os.Open(logFilePath)
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, file)
|
||||
file.Close()
|
||||
|
||||
logrusOutput := logrus.StandardLogger().Out
|
||||
assert.NotNil(t, logrusOutput)
|
||||
|
||||
logrus.Info("This is a test log message")
|
||||
content, err := os.ReadFile(logFilePath)
|
||||
assert.NoError(t, err)
|
||||
assert.Contains(t, string(content), "This is a test log message")
|
||||
}
|
||||
|
||||
func TestSetupLogLevelWithInvalidLogFile(t *testing.T) {
|
||||
|
||||
c := &cli.Context{}
|
||||
|
||||
patches := gomonkey.ApplyMethodSeq(c, "String", []gomonkey.OutputCell{
|
||||
{Values: []interface{}{"info"}, Times: 1},
|
||||
{Values: []interface{}{"test/test_log_file.log"}, Times: 2},
|
||||
})
|
||||
defer patches.Reset()
|
||||
setupLogLevel(c)
|
||||
|
||||
logrusOutput := logrus.StandardLogger().Out
|
||||
assert.NotNil(t, logrusOutput)
|
||||
}
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
module github.com/dragonflyoss/nydus/contrib/nydusify
|
||||
|
||||
go 1.21
|
||||
go 1.23.1
|
||||
|
||||
toolchain go1.23.6
|
||||
|
||||
require (
|
||||
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
|
||||
|
@ -37,8 +39,11 @@ require (
|
|||
require (
|
||||
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
|
||||
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 // indirect
|
||||
github.com/BraveY/snapshotter-converter v0.0.5 // indirect
|
||||
github.com/CloudNativeAI/model-spec v0.0.2 // indirect
|
||||
github.com/Microsoft/go-winio v0.6.2 // indirect
|
||||
github.com/Microsoft/hcsshim v0.11.5 // indirect
|
||||
github.com/agiledragon/gomonkey/v2 v2.13.0 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect
|
||||
|
|
|
@ -3,11 +3,21 @@ github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9
|
|||
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
|
||||
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 h1:dIScnXFlF784X79oi7MzVT6GWqr/W1uUt0pB5CsDs9M=
|
||||
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2/go.mod h1:gCLVsLfv1egrcZu+GoJATN5ts75F2s62ih/457eWzOw=
|
||||
github.com/BraveY/snapshotter-converter v0.0.5 h1:h3zAB31u16EOkshS2J9Nx40RiWSjH6zd5baOSmjLCOg=
|
||||
github.com/BraveY/snapshotter-converter v0.0.5/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
|
||||
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409034316-66511579fa6d h1:00wAtig4otPLOMJN+CZHvG4MWm+g4NMY6j0K7eYEFNk=
|
||||
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409034316-66511579fa6d/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
|
||||
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409042404-e997e14906b7 h1:c9aFn0vSkXe1nrGe5mONSRs18/BXJKEiSiHvZyaXlBE=
|
||||
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409042404-e997e14906b7/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/CloudNativeAI/model-spec v0.0.2 h1:uCO86kMk8wwadn8vKs0wT4petig5crByTIngdO3L2cQ=
|
||||
github.com/CloudNativeAI/model-spec v0.0.2/go.mod h1:3U/4zubBfbUkW59ATSg41HnkYyKrKUcKFH/cVdoPQnk=
|
||||
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
|
||||
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
|
||||
github.com/Microsoft/hcsshim v0.11.5 h1:haEcLNpj9Ka1gd3B3tAEs9CpE0c+1IhoL59w/exYU38=
|
||||
github.com/Microsoft/hcsshim v0.11.5/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA09d4bExKcU=
|
||||
github.com/agiledragon/gomonkey/v2 v2.13.0 h1:B24Jg6wBI1iB8EFR1c+/aoTg7QN/Cum7YffG8KMIyYo=
|
||||
github.com/agiledragon/gomonkey/v2 v2.13.0/go.mod h1:ap1AmDzcVOAz1YpeJ3TCzIgstoaWLA6jbbgxfB4w2iY=
|
||||
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g=
|
||||
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
|
||||
github.com/aws/aws-sdk-go-v2 v1.24.1 h1:xAojnj+ktS95YZlDf0zxWBkbFtymPeDP+rvUQIH3uAU=
|
||||
|
@ -146,6 +156,7 @@ github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN
|
|||
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
|
||||
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
|
||||
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/go-hclog v1.6.2 h1:NOtoftovWkDheyUM/8JW3QMiXyxJK3uHRK7wV04nD2I=
|
||||
|
@ -164,6 +175,7 @@ github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9Y
|
|||
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
|
||||
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
|
||||
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
|
||||
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
|
||||
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4=
|
||||
|
@ -232,6 +244,8 @@ github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf
|
|||
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
|
||||
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
|
||||
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
|
||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
|
||||
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
|
||||
github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6 h1:pnnLyeX7o/5aX8qUQ69P/mLojDqwda8hFOCBTmP/6hw=
|
||||
github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6/go.mod h1:39R/xuhNgVhi+K0/zst4TLrJrVmbm6LVgl4A0+ZFS5M=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
|
@ -352,6 +366,7 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
|
|||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
|
|
|
@ -18,7 +18,7 @@ import (
|
|||
"github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/containerd/containerd/images"
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
"github.com/opencontainers/go-digest"
|
||||
"github.com/opencontainers/image-spec/specs-go"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/pkg/errors"
|
||||
|
@ -295,23 +295,23 @@ func (cache *Cache) layerToRecord(layer *ocispec.Descriptor) *Record {
|
|||
return nil
|
||||
}
|
||||
|
||||
func mergeRecord(old, new *Record) *Record {
|
||||
if old == nil {
|
||||
old = &Record{
|
||||
SourceChainID: new.SourceChainID,
|
||||
func mergeRecord(oldRec, newRec *Record) *Record {
|
||||
if oldRec == nil {
|
||||
oldRec = &Record{
|
||||
SourceChainID: newRec.SourceChainID,
|
||||
}
|
||||
}
|
||||
|
||||
if new.NydusBootstrapDesc != nil {
|
||||
old.NydusBootstrapDesc = new.NydusBootstrapDesc
|
||||
old.NydusBootstrapDiffID = new.NydusBootstrapDiffID
|
||||
if newRec.NydusBootstrapDesc != nil {
|
||||
oldRec.NydusBootstrapDesc = newRec.NydusBootstrapDesc
|
||||
oldRec.NydusBootstrapDiffID = newRec.NydusBootstrapDiffID
|
||||
}
|
||||
|
||||
if new.NydusBlobDesc != nil {
|
||||
old.NydusBlobDesc = new.NydusBlobDesc
|
||||
if newRec.NydusBlobDesc != nil {
|
||||
oldRec.NydusBlobDesc = newRec.NydusBlobDesc
|
||||
}
|
||||
|
||||
return old
|
||||
return oldRec
|
||||
}
|
||||
|
||||
func (cache *Cache) importRecordsFromLayers(layers []ocispec.Descriptor) {
|
||||
|
|
|
@ -16,9 +16,11 @@ import (
|
|||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
|
||||
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
func prettyDump(obj interface{}, name string) error {
|
||||
|
@ -113,13 +115,24 @@ func (checker *Checker) Output(
|
|||
}
|
||||
|
||||
diffIDs := parsed.NydusImage.Config.RootFS.DiffIDs
|
||||
if diffIDs[len(diffIDs)-1] != diffID.Digest() {
|
||||
manifest := parsed.NydusImage.Manifest
|
||||
if manifest.ArtifactType != modelspec.ArtifactTypeModelManifest && diffIDs[len(diffIDs)-1] != diffID.Digest() {
|
||||
return errors.Errorf(
|
||||
"invalid bootstrap layer diff id: %s (calculated) != %s (in image config)",
|
||||
diffID.Digest().String(),
|
||||
diffIDs[len(diffIDs)-1].String(),
|
||||
)
|
||||
}
|
||||
|
||||
if manifest.ArtifactType == modelspec.ArtifactTypeModelManifest {
|
||||
if manifest.Subject == nil {
|
||||
return errors.New("missing subject in manifest")
|
||||
}
|
||||
|
||||
if manifest.Subject.MediaType != ocispec.MediaTypeImageManifest {
|
||||
return errors.Errorf("invalid subject media type: %s", manifest.Subject.MediaType)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
|
@ -14,6 +14,7 @@ import (
|
|||
"reflect"
|
||||
"syscall"
|
||||
|
||||
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
|
||||
"github.com/distribution/reference"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
|
||||
|
@ -206,6 +207,7 @@ func (rule *FilesystemRule) mountNydusImage(image *Image, dir string) (func() er
|
|||
BackendType: backendType,
|
||||
BackendConfig: backendConfig,
|
||||
BootstrapPath: filepath.Join(rule.WorkDir, dir, "nydus_bootstrap/image/image.boot"),
|
||||
ExternalBackendConfigPath: filepath.Join(rule.WorkDir, dir, "nydus_bootstrap/image/backend.json"),
|
||||
ConfigPath: filepath.Join(nydusdDir, "config.json"),
|
||||
BlobCacheDir: filepath.Join(nydusdDir, "cache"),
|
||||
APISockPath: filepath.Join(nydusdDir, "api.sock"),
|
||||
|
@ -252,6 +254,12 @@ func (rule *FilesystemRule) mountNydusImage(image *Image, dir string) (func() er
|
|||
}
|
||||
}
|
||||
|
||||
if image.Parsed.NydusImage.Manifest.ArtifactType == modelspec.ArtifactTypeModelManifest {
|
||||
if err := utils.BuildRuntimeExternalBackendConfig(nydusdConfig.BackendConfig, nydusdConfig.ExternalBackendConfigPath); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to build external backend config file")
|
||||
}
|
||||
}
|
||||
|
||||
nydusd, err := tool.NewNydusd(nydusdConfig)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "create nydusd daemon")
|
||||
|
|
|
@ -12,6 +12,7 @@ import (
|
|||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
|
||||
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
|
||||
|
@ -59,7 +60,8 @@ func (rule *ManifestRule) validateConfig(sourceImage, targetImage *parser.Image)
|
|||
func (rule *ManifestRule) validateOCI(image *parser.Image) error {
|
||||
// Check config diff IDs
|
||||
layers := image.Manifest.Layers
|
||||
if len(image.Config.RootFS.DiffIDs) != len(layers) {
|
||||
artifact := image.Manifest.ArtifactType
|
||||
if artifact != modelspec.ArtifactTypeModelManifest && len(image.Config.RootFS.DiffIDs) != len(layers) {
|
||||
return fmt.Errorf("invalid diff ids in image config: %d (diff ids) != %d (layers)", len(image.Config.RootFS.DiffIDs), len(layers))
|
||||
}
|
||||
|
||||
|
@ -69,21 +71,26 @@ func (rule *ManifestRule) validateOCI(image *parser.Image) error {
|
|||
func (rule *ManifestRule) validateNydus(image *parser.Image) error {
|
||||
// Check bootstrap and blob layers
|
||||
layers := image.Manifest.Layers
|
||||
manifestArtifact := image.Manifest.ArtifactType
|
||||
for i, layer := range layers {
|
||||
if i == len(layers)-1 {
|
||||
if layer.Annotations[utils.LayerAnnotationNydusBootstrap] != "true" {
|
||||
return errors.New("invalid bootstrap layer in nydus image manifest")
|
||||
}
|
||||
if manifestArtifact == modelspec.ArtifactTypeModelManifest && layer.Annotations[utils.LayerAnnotationNydusArtifactType] != manifestArtifact {
|
||||
return errors.New("invalid manifest artifact type in nydus image manifest")
|
||||
}
|
||||
} else {
|
||||
if layer.MediaType != utils.MediaTypeNydusBlob ||
|
||||
layer.Annotations[utils.LayerAnnotationNydusBlob] != "true" {
|
||||
if manifestArtifact != modelspec.ArtifactTypeModelManifest &&
|
||||
(layer.MediaType != utils.MediaTypeNydusBlob ||
|
||||
layer.Annotations[utils.LayerAnnotationNydusBlob] != "true") {
|
||||
return errors.New("invalid blob layer in nydus image manifest")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check config diff IDs
|
||||
if len(image.Config.RootFS.DiffIDs) != len(layers) {
|
||||
if manifestArtifact != modelspec.ArtifactTypeModelManifest && len(image.Config.RootFS.DiffIDs) != len(layers) {
|
||||
return fmt.Errorf("invalid diff ids in image config: %d (diff ids) != %d (layers)", len(image.Config.RootFS.DiffIDs), len(layers))
|
||||
}
|
||||
|
||||
|
|
|
@ -14,10 +14,12 @@ import (
|
|||
"net/http"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strings"
|
||||
"text/template"
|
||||
"time"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
type NydusdConfig struct {
|
||||
|
@ -27,6 +29,8 @@ type NydusdConfig struct {
|
|||
ConfigPath string
|
||||
BackendType string
|
||||
BackendConfig string
|
||||
ExternalBackendConfigPath string
|
||||
ExternalBackendProxyCacheDir string
|
||||
BlobCacheDir string
|
||||
APISockPath string
|
||||
MountPath string
|
||||
|
@ -50,6 +54,9 @@ var configTpl = `
|
|||
"type": "{{.BackendType}}",
|
||||
"config": {{.BackendConfig}}
|
||||
},
|
||||
"external_backend": {
|
||||
"config_path": "{{.ExternalBackendConfigPath}}"
|
||||
},
|
||||
"cache": {
|
||||
"type": "blobcache",
|
||||
"config": {
|
||||
|
@ -178,6 +185,7 @@ func (nydusd *Nydusd) Mount() error {
|
|||
}
|
||||
|
||||
cmd := exec.Command(nydusd.NydusdPath, args...)
|
||||
logrus.Debugf("Command: %s %s", nydusd.NydusdPath, strings.Join(args, " "))
|
||||
cmd.Stdout = os.Stdout
|
||||
cmd.Stderr = os.Stderr
|
||||
|
||||
|
|
|
@ -20,7 +20,7 @@ import (
|
|||
originprovider "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
"github.com/goharbor/acceleration-service/pkg/remote"
|
||||
|
||||
"github.com/containerd/nydus-snapshotter/pkg/converter"
|
||||
"github.com/BraveY/snapshotter-converter/converter"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
|
||||
"github.com/dustin/go-humanize"
|
||||
|
|
|
@ -21,11 +21,11 @@ import (
|
|||
|
||||
"github.com/containerd/containerd/labels"
|
||||
|
||||
"github.com/BraveY/snapshotter-converter/converter"
|
||||
"github.com/containerd/containerd"
|
||||
"github.com/containerd/containerd/content/local"
|
||||
"github.com/containerd/containerd/namespaces"
|
||||
"github.com/containerd/containerd/reference/docker"
|
||||
"github.com/containerd/nydus-snapshotter/pkg/converter"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/committer/diff"
|
||||
parserPkg "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
|
@ -230,6 +230,14 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
|
|||
return appendedEg.Wait()
|
||||
}
|
||||
|
||||
// Ensure filesystem changes are written to disk before committing
|
||||
// This prevents issues where changes are still in memory buffers
|
||||
// and not yet visible in the overlay filesystem's upper directory
|
||||
logrus.Infof("syncing filesystem before commit")
|
||||
if err := cm.syncFilesystem(ctx, opt.ContainerID); err != nil {
|
||||
return errors.Wrap(err, "failed to sync filesystem")
|
||||
}
|
||||
|
||||
if err := cm.pause(ctx, opt.ContainerID, commit); err != nil {
|
||||
return errors.Wrap(err, "pause container to commit")
|
||||
}
|
||||
|
@ -515,6 +523,36 @@ func (cm *Committer) pause(ctx context.Context, containerID string, handle func(
|
|||
return cm.manager.UnPause(ctx, containerID)
|
||||
}
|
||||
|
||||
// syncFilesystem forces filesystem sync to ensure all changes are written to disk.
|
||||
// This is crucial for overlay filesystems where changes may still be in memory
|
||||
// buffers and not yet visible in the upper directory when committing.
|
||||
func (cm *Committer) syncFilesystem(ctx context.Context, containerID string) error {
|
||||
inspect, err := cm.manager.Inspect(ctx, containerID)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "inspect container for sync")
|
||||
}
|
||||
|
||||
// Use nsenter to execute sync command in the container's namespace
|
||||
config := &Config{
|
||||
Mount: true,
|
||||
PID: true,
|
||||
Target: inspect.Pid,
|
||||
}
|
||||
|
||||
stderr, err := config.ExecuteContext(ctx, io.Discard, "sync")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, fmt.Sprintf("execute sync in container namespace: %s", strings.TrimSpace(stderr)))
|
||||
}
|
||||
|
||||
// Also sync the host filesystem to ensure overlay changes are written
|
||||
cmd := exec.CommandContext(ctx, "sync")
|
||||
if err := cmd.Run(); err != nil {
|
||||
return errors.Wrap(err, "execute host sync")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (cm *Committer) pushManifest(
|
||||
ctx context.Context, nydusImage parserPkg.Image, bootstrapDiffID digest.Digest, targetRef, bootstrapName, fsversion string, upperBlob *Blob, mountBlobs []Blob, insecure bool,
|
||||
) error {
|
||||
|
|
|
@ -236,14 +236,14 @@ func Changes(ctx context.Context, appendMount func(path string), withPaths []str
|
|||
}
|
||||
|
||||
// checkDelete checks if the specified file is a whiteout
|
||||
func checkDelete(_ string, path string, base string, f os.FileInfo) (delete, skip bool, _ error) {
|
||||
func checkDelete(_ string, path string, base string, f os.FileInfo) (isDelete, skip bool, _ error) {
|
||||
if f.Mode()&os.ModeCharDevice != 0 {
|
||||
if _, ok := f.Sys().(*syscall.Stat_t); ok {
|
||||
maj, min, err := devices.DeviceInfo(f)
|
||||
maj, minor, err := devices.DeviceInfo(f)
|
||||
if err != nil {
|
||||
return false, false, errors.Wrapf(err, "failed to get device info")
|
||||
}
|
||||
if maj == 0 && min == 0 {
|
||||
if maj == 0 && minor == 0 {
|
||||
// This file is a whiteout (char 0/0) that indicates this is deleted from the base
|
||||
if _, err := os.Lstat(filepath.Join(base, path)); err != nil {
|
||||
if !os.IsNotExist(err) {
|
||||
|
|
|
@ -5,11 +5,37 @@
|
|||
package converter
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"compress/gzip"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/containerd/containerd/content/local"
|
||||
"github.com/containerd/containerd/namespaces"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
|
||||
pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
|
||||
snapConv "github.com/BraveY/snapshotter-converter/converter"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/external/modctl"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
|
||||
|
||||
"encoding/json"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external"
|
||||
"github.com/opencontainers/go-digest"
|
||||
"github.com/opencontainers/image-spec/specs-go"
|
||||
|
||||
"github.com/goharbor/acceleration-service/pkg/converter"
|
||||
"github.com/goharbor/acceleration-service/pkg/platformutil"
|
||||
"github.com/pkg/errors"
|
||||
|
@ -24,6 +50,9 @@ type Opt struct {
|
|||
Target string
|
||||
ChunkDictRef string
|
||||
|
||||
SourceBackendType string
|
||||
SourceBackendConfig string
|
||||
|
||||
SourceInsecure bool
|
||||
TargetInsecure bool
|
||||
ChunkDictInsecure bool
|
||||
|
@ -47,14 +76,31 @@ type Opt struct {
|
|||
PrefetchPatterns string
|
||||
OCIRef bool
|
||||
WithReferrer bool
|
||||
WithPlainHTTP bool
|
||||
|
||||
AllPlatforms bool
|
||||
Platforms string
|
||||
|
||||
OutputJSON string
|
||||
|
||||
PushRetryCount int
|
||||
PushRetryDelay string
|
||||
}
|
||||
|
||||
type SourceBackendConfig struct {
|
||||
Context string `json:"context"`
|
||||
WorkDir string `json:"work_dir"`
|
||||
}
|
||||
|
||||
func Convert(ctx context.Context, opt Opt) error {
|
||||
if opt.SourceBackendType == "modelfile" {
|
||||
return convertModelFile(ctx, opt)
|
||||
}
|
||||
|
||||
if opt.SourceBackendType == "model-artifact" {
|
||||
return convertModelArtifact(ctx, opt)
|
||||
}
|
||||
|
||||
ctx = namespaces.WithNamespace(ctx, "nydusify")
|
||||
platformMC, err := platformutil.ParsePlatforms(opt.AllPlatforms, opt.Platforms)
|
||||
if err != nil {
|
||||
|
@ -83,6 +129,15 @@ func Convert(ctx context.Context, opt Opt) error {
|
|||
}
|
||||
defer os.RemoveAll(tmpDir)
|
||||
|
||||
// Parse retry delay
|
||||
retryDelay, err := time.ParseDuration(opt.PushRetryDelay)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "parse push retry delay")
|
||||
}
|
||||
|
||||
// Set push retry configuration
|
||||
pvd.SetPushRetryConfig(opt.PushRetryCount, retryDelay)
|
||||
|
||||
cvt, err := converter.New(
|
||||
converter.WithProvider(pvd),
|
||||
converter.WithDriver("nydus", getConfig(opt)),
|
||||
|
@ -98,3 +153,413 @@ func Convert(ctx context.Context, opt Opt) error {
|
|||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func convertModelFile(ctx context.Context, opt Opt) error {
|
||||
if _, err := os.Stat(opt.WorkDir); err != nil {
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
|
||||
return errors.Wrap(err, "prepare work directory")
|
||||
}
|
||||
// We should only clean up when the work directory not exists
|
||||
// before, otherwise it may delete user data by mistake.
|
||||
defer os.RemoveAll(opt.WorkDir)
|
||||
} else {
|
||||
return errors.Wrap(err, "stat work directory")
|
||||
}
|
||||
}
|
||||
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create temp directory")
|
||||
}
|
||||
defer os.RemoveAll(tmpDir)
|
||||
attributesPath := filepath.Join(tmpDir, ".nydusattributes")
|
||||
backendMetaPath := filepath.Join(tmpDir, ".backend.meta")
|
||||
backendConfigPath := filepath.Join(tmpDir, ".backend.json")
|
||||
|
||||
var srcBkdCfg SourceBackendConfig
|
||||
if err := json.Unmarshal([]byte(opt.SourceBackendConfig), &srcBkdCfg); err != nil {
|
||||
return errors.Wrap(err, "unmarshal source backend config")
|
||||
}
|
||||
modctlHandler, err := newModctlHandler(opt, srcBkdCfg.WorkDir)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create modctl handler")
|
||||
}
|
||||
|
||||
if err := external.Handle(context.Background(), external.Options{
|
||||
Dir: srcBkdCfg.WorkDir,
|
||||
Handler: modctlHandler,
|
||||
MetaOutput: backendMetaPath,
|
||||
BackendOutput: backendConfigPath,
|
||||
AttributesOutput: attributesPath,
|
||||
}); err != nil {
|
||||
return errors.Wrap(err, "handle modctl")
|
||||
}
|
||||
|
||||
// Make nydus layer with external blob
|
||||
packOption := snapConv.PackOption{
|
||||
BuilderPath: opt.NydusImagePath,
|
||||
Compressor: opt.Compressor,
|
||||
FsVersion: opt.FsVersion,
|
||||
ChunkSize: opt.ChunkSize,
|
||||
FromDir: srcBkdCfg.Context,
|
||||
AttributesPath: attributesPath,
|
||||
}
|
||||
_, externalBlobDigest, err := packWithAttributes(ctx, packOption, tmpDir)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "pack to blob")
|
||||
}
|
||||
|
||||
bootStrapTarPath, err := packFinalBootstrap(tmpDir, backendConfigPath, externalBlobDigest)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "pack final bootstrap")
|
||||
}
|
||||
|
||||
modelCfg, err := buildModelConfig(modctlHandler)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "build model config")
|
||||
}
|
||||
|
||||
modelLayers := modctlHandler.GetLayers()
|
||||
|
||||
nydusImage := buildNydusImage()
|
||||
return pushManifest(context.Background(), opt, *modelCfg, modelLayers, *nydusImage, bootStrapTarPath)
|
||||
}
|
||||
|
||||
func convertModelArtifact(ctx context.Context, opt Opt) error {
|
||||
if _, err := os.Stat(opt.WorkDir); err != nil {
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
|
||||
return errors.Wrap(err, "prepare work directory")
|
||||
}
|
||||
// We should only clean up when the work directory not exists
|
||||
// before, otherwise it may delete user data by mistake.
|
||||
defer os.RemoveAll(opt.WorkDir)
|
||||
} else {
|
||||
return errors.Wrap(err, "stat work directory")
|
||||
}
|
||||
}
|
||||
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create temp directory")
|
||||
}
|
||||
defer os.RemoveAll(tmpDir)
|
||||
contextDir, err := os.MkdirTemp(tmpDir, "context-")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create temp directory")
|
||||
}
|
||||
defer os.RemoveAll(contextDir)
|
||||
|
||||
attributesPath := filepath.Join(tmpDir, ".nydusattributes")
|
||||
backendMetaPath := filepath.Join(tmpDir, ".backend.meta")
|
||||
backendConfigPath := filepath.Join(tmpDir, ".backend.json")
|
||||
|
||||
handler, err := modctl.NewRemoteHandler(ctx, opt.Source, opt.WithPlainHTTP)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create modctl handler")
|
||||
}
|
||||
if err := external.RemoteHandle(ctx, external.Options{
|
||||
ContextDir: contextDir,
|
||||
RemoteHandler: handler,
|
||||
MetaOutput: backendMetaPath,
|
||||
BackendOutput: backendConfigPath,
|
||||
AttributesOutput: attributesPath,
|
||||
}); err != nil {
|
||||
return errors.Wrap(err, "remote handle")
|
||||
}
|
||||
|
||||
// Make nydus layer with external blob
|
||||
packOption := snapConv.PackOption{
|
||||
BuilderPath: opt.NydusImagePath,
|
||||
Compressor: opt.Compressor,
|
||||
FsVersion: opt.FsVersion,
|
||||
ChunkSize: opt.ChunkSize,
|
||||
FromDir: contextDir,
|
||||
AttributesPath: attributesPath,
|
||||
}
|
||||
_, externalBlobDigest, err := packWithAttributes(ctx, packOption, tmpDir)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "pack to blob")
|
||||
}
|
||||
|
||||
bootStrapTarPath, err := packFinalBootstrap(tmpDir, backendConfigPath, externalBlobDigest)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "pack final bootstrap")
|
||||
}
|
||||
|
||||
modelCfg, err := handler.GetModelConfig()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "build model config")
|
||||
}
|
||||
|
||||
modelLayers := handler.GetLayers()
|
||||
|
||||
nydusImage := buildNydusImage()
|
||||
return pushManifest(context.Background(), opt, *modelCfg, modelLayers, *nydusImage, bootStrapTarPath)
|
||||
}
|
||||
|
||||
func newModctlHandler(opt Opt, workDir string) (*modctl.Handler, error) {
|
||||
chunkSizeStr := strings.TrimPrefix(opt.ChunkSize, "0x")
|
||||
chunkSize, err := strconv.ParseUint(chunkSizeStr, 16, 64)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "parse chunk size to uint64")
|
||||
}
|
||||
modctlOpt, err := modctl.GetOption(opt.Source, workDir, chunkSize)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "parse modctl option")
|
||||
}
|
||||
return modctl.NewHandler(*modctlOpt)
|
||||
}
|
||||
|
||||
func packWithAttributes(ctx context.Context, packOption snapConv.PackOption, blobDir string) (digest.Digest, digest.Digest, error) {
|
||||
blob, err := os.CreateTemp(blobDir, "blob-")
|
||||
if err != nil {
|
||||
return "", "", errors.Wrap(err, "create temp file for blob")
|
||||
}
|
||||
defer blob.Close()
|
||||
|
||||
externalBlob, err := os.CreateTemp(blobDir, "external-blob-")
|
||||
if err != nil {
|
||||
return "", "", errors.Wrap(err, "create temp file for external blob")
|
||||
}
|
||||
defer externalBlob.Close()
|
||||
|
||||
blobDigester := digest.Canonical.Digester()
|
||||
blobWriter := io.MultiWriter(blob, blobDigester.Hash())
|
||||
externalBlobDigester := digest.Canonical.Digester()
|
||||
packOption.ExternalBlobWriter = io.MultiWriter(externalBlob, externalBlobDigester.Hash())
|
||||
_, err = snapConv.Pack(ctx, blobWriter, packOption)
|
||||
if err != nil {
|
||||
return "", "", errors.Wrap(err, "pack to blob")
|
||||
}
|
||||
|
||||
blobDigest := blobDigester.Digest()
|
||||
err = os.Rename(blob.Name(), filepath.Join(blobDir, blobDigest.Hex()))
|
||||
if err != nil {
|
||||
return "", "", errors.Wrap(err, "rename blob file")
|
||||
}
|
||||
|
||||
externalBlobDigest := externalBlobDigester.Digest()
|
||||
err = os.Rename(externalBlob.Name(), filepath.Join(blobDir, externalBlobDigest.Hex()))
|
||||
if err != nil {
|
||||
return "", "", errors.Wrap(err, "rename external blob file")
|
||||
}
|
||||
|
||||
return blobDigest, externalBlobDigest, nil
|
||||
}
|
||||
|
||||
// Pack bootstrap and backend config into final bootstrap tar file.
|
||||
func packFinalBootstrap(workDir, backendConfigPath string, externalBlobDigest digest.Digest) (string, error) {
|
||||
bkdCfg, err := os.ReadFile(backendConfigPath)
|
||||
if err != nil {
|
||||
return "", errors.Wrap(err, "read backend config file")
|
||||
}
|
||||
bkdReader := bytes.NewReader(bkdCfg)
|
||||
files := []snapConv.File{
|
||||
{
|
||||
Name: "backend.json",
|
||||
Reader: bkdReader,
|
||||
Size: int64(len(bkdCfg)),
|
||||
},
|
||||
}
|
||||
|
||||
externalBlobRa, err := local.OpenReader(filepath.Join(workDir, externalBlobDigest.Hex()))
|
||||
if err != nil {
|
||||
return "", errors.Wrap(err, "open reader for upper blob")
|
||||
}
|
||||
bootstrap, err := os.CreateTemp(workDir, "bootstrap-")
|
||||
if err != nil {
|
||||
return "", errors.Wrap(err, "create temp file for bootstrap")
|
||||
}
|
||||
defer bootstrap.Close()
|
||||
|
||||
if _, err := snapConv.UnpackEntry(externalBlobRa, snapConv.EntryBootstrap, bootstrap); err != nil {
|
||||
return "", errors.Wrap(err, "unpack bootstrap from nydus")
|
||||
}
|
||||
|
||||
files = append(files, snapConv.File{
|
||||
Name: snapConv.EntryBootstrap,
|
||||
Reader: content.NewReader(externalBlobRa),
|
||||
Size: externalBlobRa.Size(),
|
||||
})
|
||||
|
||||
bootStrapTarPath := fmt.Sprintf("%s-final.tar", bootstrap.Name())
|
||||
bootstrapTar, err := os.Create(bootStrapTarPath)
|
||||
if err != nil {
|
||||
return "", errors.Wrap(err, "open bootstrap tar file")
|
||||
}
|
||||
defer bootstrap.Close()
|
||||
rc := snapConv.PackToTar(files, false)
|
||||
defer rc.Close()
|
||||
println("copy bootstrap to tar file")
|
||||
if _, err = io.Copy(bootstrapTar, rc); err != nil {
|
||||
return "", errors.Wrap(err, "copy merged bootstrap")
|
||||
}
|
||||
return bootStrapTarPath, nil
|
||||
}
|
||||
|
||||
func buildNydusImage() *parser.Image {
|
||||
manifest := ocispec.Manifest{
|
||||
Versioned: specs.Versioned{SchemaVersion: 2},
|
||||
MediaType: ocispec.MediaTypeImageManifest,
|
||||
ArtifactType: modelspec.ArtifactTypeModelManifest,
|
||||
Config: ocispec.Descriptor{
|
||||
MediaType: modelspec.MediaTypeModelConfig,
|
||||
},
|
||||
}
|
||||
desc := ocispec.Descriptor{
|
||||
MediaType: ocispec.MediaTypeImageManifest,
|
||||
}
|
||||
nydusImage := &parser.Image{
|
||||
Manifest: manifest,
|
||||
Desc: desc,
|
||||
}
|
||||
return nydusImage
|
||||
}
|
||||
|
||||
func buildModelConfig(modctlHandler *modctl.Handler) (*modelspec.Model, error) {
|
||||
cfgBytes, err := modctlHandler.GetConfig()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "get modctl config")
|
||||
}
|
||||
var modelCfg modelspec.Model
|
||||
if err := json.Unmarshal(cfgBytes, &modelCfg); err != nil {
|
||||
return nil, errors.Wrap(err, "unmarshal modctl config")
|
||||
}
|
||||
return &modelCfg, nil
|
||||
}
|
||||
|
||||
func pushManifest(
|
||||
ctx context.Context, opt Opt, modelCfg modelspec.Model, modelLayers []ocispec.Descriptor, nydusImage parser.Image, bootstrapTarPath string,
|
||||
) error {
|
||||
|
||||
// Push image config
|
||||
configBytes, configDesc, err := makeDesc(modelCfg, nydusImage.Manifest.Config)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "make config desc")
|
||||
}
|
||||
|
||||
remoter, err := pkgPvd.DefaultRemote(opt.Target, opt.TargetInsecure)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create remote")
|
||||
}
|
||||
|
||||
if opt.WithPlainHTTP {
|
||||
remoter.WithHTTP()
|
||||
}
|
||||
|
||||
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
|
||||
if utils.RetryWithHTTP(err) {
|
||||
remoter.MaybeWithHTTP(err)
|
||||
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
|
||||
return errors.Wrap(err, "push image config")
|
||||
}
|
||||
} else {
|
||||
return errors.Wrap(err, "push image config")
|
||||
}
|
||||
}
|
||||
|
||||
// Push bootstrap layer
|
||||
bootstrapTar, err := os.Open(bootstrapTarPath)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "open bootstrap tar file")
|
||||
}
|
||||
|
||||
bootstrapTarGzPath := bootstrapTarPath + ".gz"
|
||||
bootstrapTarGz, err := os.Create(bootstrapTarGzPath)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create bootstrap tar.gz file")
|
||||
}
|
||||
defer bootstrapTarGz.Close()
|
||||
|
||||
digester := digest.SHA256.Digester()
|
||||
gzWriter := gzip.NewWriter(io.MultiWriter(bootstrapTarGz, digester.Hash()))
|
||||
if _, err := io.Copy(gzWriter, bootstrapTar); err != nil {
|
||||
return errors.Wrap(err, "compress bootstrap tar to tar.gz")
|
||||
}
|
||||
if err := gzWriter.Close(); err != nil {
|
||||
return errors.Wrap(err, "close gzip writer")
|
||||
}
|
||||
|
||||
ra, err := local.OpenReader(bootstrapTarGzPath)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "open reader for upper blob")
|
||||
}
|
||||
defer ra.Close()
|
||||
|
||||
bootstrapDesc := ocispec.Descriptor{
|
||||
Digest: digester.Digest(),
|
||||
Size: ra.Size(),
|
||||
MediaType: ocispec.MediaTypeImageLayerGzip,
|
||||
Annotations: map[string]string{
|
||||
snapConv.LayerAnnotationFSVersion: opt.FsVersion,
|
||||
snapConv.LayerAnnotationNydusBootstrap: "true",
|
||||
snapConv.LayerAnnotationNydusArtifactType: modelspec.ArtifactTypeModelManifest,
|
||||
},
|
||||
}
|
||||
|
||||
bootstrapRc, err := os.Open(bootstrapTarGzPath)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "open bootstrap %s", bootstrapTarGzPath)
|
||||
}
|
||||
defer bootstrapRc.Close()
|
||||
if err := remoter.Push(ctx, bootstrapDesc, true, bootstrapRc); err != nil {
|
||||
return errors.Wrap(err, "push bootstrap layer")
|
||||
}
|
||||
|
||||
// Push image manifest
|
||||
layers := make([]ocispec.Descriptor, 0, len(modelLayers)+1)
|
||||
layers = append(layers, modelLayers...)
|
||||
layers = append(layers, bootstrapDesc)
|
||||
|
||||
subject, err := getSourceManifestSubject(ctx, opt.Source, opt.SourceInsecure, opt.WithPlainHTTP)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "get source manifest subject")
|
||||
}
|
||||
|
||||
nydusImage.Manifest.Config = *configDesc
|
||||
nydusImage.Manifest.Layers = layers
|
||||
nydusImage.Manifest.Subject = subject
|
||||
|
||||
manifestBytes, manifestDesc, err := makeDesc(nydusImage.Manifest, nydusImage.Desc)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "make manifest desc")
|
||||
}
|
||||
|
||||
if err := remoter.Push(ctx, *manifestDesc, false, bytes.NewReader(manifestBytes)); err != nil {
|
||||
return errors.Wrap(err, "push image manifest")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func getSourceManifestSubject(ctx context.Context, sourceRef string, inscure, plainHTTP bool) (*ocispec.Descriptor, error) {
|
||||
remoter, err := pkgPvd.DefaultRemote(sourceRef, inscure)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "create remote")
|
||||
}
|
||||
if plainHTTP {
|
||||
remoter.WithHTTP()
|
||||
}
|
||||
desc, err := remoter.Resolve(ctx)
|
||||
if utils.RetryWithHTTP(err) {
|
||||
remoter.MaybeWithHTTP(err)
|
||||
desc, err = remoter.Resolve(ctx)
|
||||
}
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "resolve source manifest subject")
|
||||
}
|
||||
return desc, nil
|
||||
}
|
||||
|
||||
func makeDesc(x interface{}, oldDesc ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
|
||||
data, err := json.MarshalIndent(x, "", " ")
|
||||
if err != nil {
|
||||
return nil, nil, errors.Wrap(err, "json marshal")
|
||||
}
|
||||
dgst := digest.SHA256.FromBytes(data)
|
||||
|
||||
newDesc := oldDesc
|
||||
newDesc.Size = int64(len(data))
|
||||
newDesc.Digest = dgst
|
||||
|
||||
return data, &newDesc, nil
|
||||
}
|
||||
|
|
|
@ -0,0 +1,604 @@
|
|||
package converter
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
snapConv "github.com/BraveY/snapshotter-converter/converter"
|
||||
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
|
||||
"github.com/agiledragon/gomonkey/v2"
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/containerd/containerd/content/local"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/external/modctl"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
|
||||
pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external"
|
||||
"github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestConvert(t *testing.T) {
|
||||
t.Run("convert modelfile", func(t *testing.T) {
|
||||
opt := Opt{
|
||||
WorkDir: "/tmp/nydusify",
|
||||
SourceBackendType: "modelfile",
|
||||
ChunkSize: "4MiB",
|
||||
SourceBackendConfig: "{}",
|
||||
}
|
||||
err := Convert(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
|
||||
opt.ChunkSize = "0x1000"
|
||||
opt.Source = "docker.io/library/busybox:latest"
|
||||
opt.Target = "docker.io/library/busybox:latest_nydus"
|
||||
err = Convert(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Convert model-artifact", func(t *testing.T) {
|
||||
opt := Opt{
|
||||
WorkDir: "/tmp/nydusify",
|
||||
SourceBackendType: "model-artifact",
|
||||
}
|
||||
err := Convert(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
func TestConvertModelFile(t *testing.T) {
|
||||
opt := Opt{
|
||||
WorkDir: "/tmp/nydusify",
|
||||
SourceBackendConfig: "{}",
|
||||
Source: "docker.io/library/busybox:latest",
|
||||
Target: "docker.io/library/busybox:latest_nydus",
|
||||
ChunkSize: "0x100000",
|
||||
}
|
||||
t.Run("Run normal", func(t *testing.T) {
|
||||
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
|
||||
return &modctl.Handler{}, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", nil
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
|
||||
buildModelConfigPatches := gomonkey.ApplyFunc(buildModelConfig, func(*modctl.Handler) (*modelspec.Model, error) {
|
||||
return &modelspec.Model{}, nil
|
||||
})
|
||||
defer buildModelConfigPatches.Reset()
|
||||
|
||||
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
|
||||
return nil
|
||||
})
|
||||
defer pushManifestPatches.Reset()
|
||||
err := convertModelFile(context.Background(), opt)
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run newModctlHandler failed", func(t *testing.T) {
|
||||
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
|
||||
return nil, errors.New("new handler error")
|
||||
})
|
||||
defer patches.Reset()
|
||||
err := convertModelFile(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run external handle failed", func(t *testing.T) {
|
||||
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
|
||||
return &modctl.Handler{}, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
|
||||
return errors.New("external handle mock error")
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
err := convertModelFile(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run packFinalBootstrap failed", func(t *testing.T) {
|
||||
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
|
||||
return &modctl.Handler{}, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", errors.New("pack final bootstrap mock error")
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
err := convertModelFile(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run buildModelConfig failed", func(t *testing.T) {
|
||||
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
|
||||
return &modctl.Handler{}, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", nil
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
|
||||
buildModelConfigPatches := gomonkey.ApplyFunc(buildModelConfig, func(*modctl.Handler) (*modelspec.Model, error) {
|
||||
return nil, errors.New("buildModelConfig mock error")
|
||||
})
|
||||
defer buildModelConfigPatches.Reset()
|
||||
err := convertModelFile(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run pushManifest failed", func(t *testing.T) {
|
||||
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
|
||||
return &modctl.Handler{}, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", nil
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
|
||||
buildModelConfigPatches := gomonkey.ApplyFunc(buildModelConfig, func(*modctl.Handler) (*modelspec.Model, error) {
|
||||
return &modelspec.Model{}, nil
|
||||
})
|
||||
defer buildModelConfigPatches.Reset()
|
||||
|
||||
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
|
||||
return errors.New("pushManifest mock error")
|
||||
})
|
||||
defer pushManifestPatches.Reset()
|
||||
err := convertModelFile(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
func TestConvertModelArtifact(t *testing.T) {
|
||||
opt := Opt{
|
||||
WorkDir: "/tmp/nydusify",
|
||||
Source: "docker.io/library/busybox:latest",
|
||||
Target: "docker.io/library/busybox:latest_nydus",
|
||||
ChunkSize: "0x100000",
|
||||
}
|
||||
|
||||
t.Run("Run normal", func(t *testing.T) {
|
||||
mockRemoteHandler := &modctl.RemoteHandler{}
|
||||
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
|
||||
return mockRemoteHandler, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
|
||||
return "", "", nil
|
||||
})
|
||||
defer packWithAttributesPatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", nil
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
|
||||
getModelConfigPaches := gomonkey.ApplyMethod(mockRemoteHandler, "GetModelConfig", func() (*modelspec.Model, error) {
|
||||
return &modelspec.Model{}, nil
|
||||
})
|
||||
defer getModelConfigPaches.Reset()
|
||||
|
||||
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
|
||||
return nil
|
||||
})
|
||||
defer pushManifestPatches.Reset()
|
||||
err := convertModelArtifact(context.Background(), opt)
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run RemoteHandle failed", func(t *testing.T) {
|
||||
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
|
||||
return nil, errors.New("remote handler mock error")
|
||||
})
|
||||
defer patches.Reset()
|
||||
err := convertModelArtifact(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run packWithAttributes failed", func(t *testing.T) {
|
||||
mockRemoteHandler := &modctl.RemoteHandler{}
|
||||
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
|
||||
return mockRemoteHandler, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
|
||||
return "", "", errors.New("pack with attributes failed mock error")
|
||||
})
|
||||
defer packWithAttributesPatches.Reset()
|
||||
err := convertModelArtifact(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run packFinalBootstrap failed", func(t *testing.T) {
|
||||
mockRemoteHandler := &modctl.RemoteHandler{}
|
||||
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
|
||||
return mockRemoteHandler, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
|
||||
return "", "", nil
|
||||
})
|
||||
defer packWithAttributesPatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", errors.New("packFinalBootstrap mock error")
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
|
||||
err := convertModelArtifact(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run GetModelConfig failed", func(t *testing.T) {
|
||||
mockRemoteHandler := &modctl.RemoteHandler{}
|
||||
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
|
||||
return mockRemoteHandler, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
|
||||
return "", "", nil
|
||||
})
|
||||
defer packWithAttributesPatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", nil
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
|
||||
getModelConfigPaches := gomonkey.ApplyMethod(mockRemoteHandler, "GetModelConfig", func() (*modelspec.Model, error) {
|
||||
return nil, errors.New("run getModelConfig mock error")
|
||||
})
|
||||
defer getModelConfigPaches.Reset()
|
||||
|
||||
err := convertModelArtifact(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run pushManifest failed", func(t *testing.T) {
|
||||
mockRemoteHandler := &modctl.RemoteHandler{}
|
||||
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
|
||||
return mockRemoteHandler, nil
|
||||
})
|
||||
defer patches.Reset()
|
||||
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
|
||||
return nil
|
||||
})
|
||||
defer extHandlePatches.Reset()
|
||||
|
||||
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
|
||||
return "", "", nil
|
||||
})
|
||||
defer packWithAttributesPatches.Reset()
|
||||
|
||||
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
|
||||
return "", nil
|
||||
})
|
||||
defer packFinBootPatches.Reset()
|
||||
|
||||
getModelConfigPaches := gomonkey.ApplyMethod(mockRemoteHandler, "GetModelConfig", func() (*modelspec.Model, error) {
|
||||
return &modelspec.Model{}, nil
|
||||
})
|
||||
defer getModelConfigPaches.Reset()
|
||||
|
||||
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
|
||||
return errors.New("push manifest mock error")
|
||||
})
|
||||
defer pushManifestPatches.Reset()
|
||||
|
||||
err := convertModelArtifact(context.Background(), opt)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
func TestPackWithAttributes(t *testing.T) {
|
||||
packOpt := snapConv.PackOption{
|
||||
BuilderPath: "/tmp/nydus-image",
|
||||
}
|
||||
blobDir := "/tmp/nydusify"
|
||||
os.MkdirAll(blobDir, 0755)
|
||||
defer os.RemoveAll(blobDir)
|
||||
_, _, err := packWithAttributes(context.Background(), packOpt, blobDir)
|
||||
assert.Nil(t, err)
|
||||
}
|
||||
|
||||
type mockReaderAt struct{}
|
||||
|
||||
func (m *mockReaderAt) ReadAt([]byte, int64) (n int, err error) {
|
||||
return 0, errors.New("mock error")
|
||||
}
|
||||
func (m *mockReaderAt) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *mockReaderAt) Size() int64 {
|
||||
return 0
|
||||
}
|
||||
|
||||
func TestPackFinalBootstrap(t *testing.T) {
|
||||
workDir := "/tmp/nydusify"
|
||||
os.MkdirAll(workDir, 0755)
|
||||
defer os.RemoveAll(workDir)
|
||||
cfgPath := filepath.Join(workDir, "backend.json")
|
||||
os.Create(cfgPath)
|
||||
extDigest := digest.FromString("abc1234")
|
||||
mockReaderAt := &mockReaderAt{}
|
||||
|
||||
t.Run("Run local OpenReader failed", func(t *testing.T) {
|
||||
_, err := packFinalBootstrap(workDir, cfgPath, extDigest)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run unpack entry failed", func(t *testing.T) {
|
||||
openReaderPatches := gomonkey.ApplyFunc(local.OpenReader, func(string) (content.ReaderAt, error) {
|
||||
return mockReaderAt, nil
|
||||
})
|
||||
defer openReaderPatches.Reset()
|
||||
_, err := packFinalBootstrap(workDir, cfgPath, extDigest)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run normal", func(t *testing.T) {
|
||||
openReaderPatches := gomonkey.ApplyFunc(local.OpenReader, func(string) (content.ReaderAt, error) {
|
||||
return mockReaderAt, nil
|
||||
})
|
||||
defer openReaderPatches.Reset()
|
||||
|
||||
unpackEntryPatches := gomonkey.ApplyFunc(snapConv.UnpackEntry, func(content.ReaderAt, string, io.Writer) (*snapConv.TOCEntry, error) {
|
||||
return &snapConv.TOCEntry{}, nil
|
||||
})
|
||||
defer unpackEntryPatches.Reset()
|
||||
|
||||
packToTarPaches := gomonkey.ApplyFunc(snapConv.PackToTar, func([]snapConv.File, bool) io.ReadCloser {
|
||||
var buff bytes.Buffer
|
||||
return io.NopCloser(&buff)
|
||||
})
|
||||
defer packToTarPaches.Reset()
|
||||
|
||||
ioCopyPatches := gomonkey.ApplyFunc(io.Copy, func(io.Writer, io.Reader) (int64, error) {
|
||||
return 0, nil
|
||||
})
|
||||
defer ioCopyPatches.Reset()
|
||||
_, err := packFinalBootstrap(workDir, cfgPath, extDigest)
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
func TestBuildNydusImage(t *testing.T) {
|
||||
image := buildNydusImage()
|
||||
assert.NotNil(t, image)
|
||||
}
|
||||
|
||||
func TestMakeDesc(t *testing.T) {
|
||||
input := "test"
|
||||
oldDesc := ocispec.Descriptor{
|
||||
MediaType: "test",
|
||||
}
|
||||
_, _, err := makeDesc(input, oldDesc)
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestBuildModelConfig(t *testing.T) {
|
||||
modctlHander := &modctl.Handler{}
|
||||
_, err := buildModelConfig(modctlHander)
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestPushManifest(t *testing.T) {
|
||||
remoter := &remote.Remote{}
|
||||
t.Run("Run make desc failed", func(t *testing.T) {
|
||||
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
|
||||
return nil, nil, errors.New("make desc mock error")
|
||||
})
|
||||
defer makeDescPatches.Reset()
|
||||
err := pushManifest(context.Background(), Opt{}, modelspec.Model{}, nil, parser.Image{}, "")
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run default remote failed", func(t *testing.T) {
|
||||
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
|
||||
return []byte{}, &ocispec.Descriptor{}, nil
|
||||
})
|
||||
defer makeDescPatches.Reset()
|
||||
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return nil, errors.New("default remote failed mock error")
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
err := pushManifest(context.Background(), Opt{}, modelspec.Model{}, nil, parser.Image{}, "")
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run push failed", func(t *testing.T) {
|
||||
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
|
||||
return []byte{}, &ocispec.Descriptor{}, nil
|
||||
})
|
||||
defer makeDescPatches.Reset()
|
||||
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
|
||||
return errors.New("push mock timeout error")
|
||||
})
|
||||
defer pushPatches.Reset()
|
||||
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, "")
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run open failed", func(t *testing.T) {
|
||||
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
|
||||
return []byte{}, &ocispec.Descriptor{}, nil
|
||||
})
|
||||
defer makeDescPatches.Reset()
|
||||
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
|
||||
return nil
|
||||
})
|
||||
defer pushPatches.Reset()
|
||||
|
||||
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, "")
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run getSourceManifestSubject failed", func(t *testing.T) {
|
||||
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
|
||||
return []byte{}, &ocispec.Descriptor{}, nil
|
||||
})
|
||||
defer makeDescPatches.Reset()
|
||||
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
|
||||
return nil
|
||||
})
|
||||
defer pushPatches.Reset()
|
||||
|
||||
bootstrapPath := "/tmp/nydusify/bootstrap"
|
||||
os.Mkdir("/tmp/nydusify/", 0755)
|
||||
os.Create(bootstrapPath)
|
||||
defer os.RemoveAll("/tmp/nydusify/")
|
||||
defer os.Remove(bootstrapPath)
|
||||
|
||||
getSourceManifestSubjectPatches := gomonkey.ApplyFunc(getSourceManifestSubject, func(context.Context, string, bool, bool) (*ocispec.Descriptor, error) {
|
||||
return nil, errors.New("get source manifest subject mock error")
|
||||
})
|
||||
defer getSourceManifestSubjectPatches.Reset()
|
||||
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, bootstrapPath)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run normal", func(t *testing.T) {
|
||||
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
|
||||
return []byte{}, &ocispec.Descriptor{}, nil
|
||||
})
|
||||
defer makeDescPatches.Reset()
|
||||
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
|
||||
return nil
|
||||
})
|
||||
defer pushPatches.Reset()
|
||||
|
||||
bootstrapPath := "/tmp/nydusify/bootstrap"
|
||||
os.Mkdir("/tmp/nydusify/", 0755)
|
||||
os.Create(bootstrapPath)
|
||||
defer os.RemoveAll("/tmp/nydusify/")
|
||||
defer os.Remove(bootstrapPath)
|
||||
|
||||
getSourceManifestSubjectPatches := gomonkey.ApplyFunc(getSourceManifestSubject, func(context.Context, string, bool, bool) (*ocispec.Descriptor, error) {
|
||||
return &ocispec.Descriptor{}, nil
|
||||
})
|
||||
defer getSourceManifestSubjectPatches.Reset()
|
||||
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, bootstrapPath)
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
func TestGetSourceManifestSubject(t *testing.T) {
|
||||
remoter := &remote.Remote{}
|
||||
t.Run("Run default remote failed", func(t *testing.T) {
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return nil, errors.New("default remote failed mock error")
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
_, err := getSourceManifestSubject(context.Background(), "", false, false)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run resolve failed", func(t *testing.T) {
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
remoterReolvePatches := gomonkey.ApplyMethod(remoter, "Resolve", func(*remote.Remote, context.Context) (*ocispec.Descriptor, error) {
|
||||
return nil, errors.New("resolve failed mock error timeout")
|
||||
})
|
||||
defer remoterReolvePatches.Reset()
|
||||
_, err := getSourceManifestSubject(context.Background(), "", false, false)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run normal", func(t *testing.T) {
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
remoterReolvePatches := gomonkey.ApplyMethod(remoter, "Resolve", func(*remote.Remote, context.Context) (*ocispec.Descriptor, error) {
|
||||
return &ocispec.Descriptor{}, nil
|
||||
})
|
||||
defer remoterReolvePatches.Reset()
|
||||
desc, err := getSourceManifestSubject(context.Background(), "", false, false)
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, desc)
|
||||
})
|
||||
|
||||
}
|
|
@ -22,12 +22,14 @@ import (
|
|||
"github.com/containerd/containerd/platforms"
|
||||
"github.com/containerd/containerd/remotes"
|
||||
"github.com/containerd/containerd/remotes/docker"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
|
||||
"github.com/goharbor/acceleration-service/pkg/cache"
|
||||
accelcontent "github.com/goharbor/acceleration-service/pkg/content"
|
||||
"github.com/goharbor/acceleration-service/pkg/remote"
|
||||
"github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
var LayerConcurrentLimit = 5
|
||||
|
@ -42,6 +44,8 @@ type Provider struct {
|
|||
cacheSize int
|
||||
cacheVersion string
|
||||
chunkSize int64
|
||||
pushRetryCount int
|
||||
pushRetryDelay time.Duration
|
||||
}
|
||||
|
||||
func New(root string, hosts remote.HostFunc, cacheSize uint, cacheVersion string, platformMC platforms.MatchComparer, chunkSize int64) (*Provider, error) {
|
||||
|
@ -62,6 +66,8 @@ func New(root string, hosts remote.HostFunc, cacheSize uint, cacheVersion string
|
|||
platformMC: platformMC,
|
||||
cacheVersion: cacheVersion,
|
||||
chunkSize: chunkSize,
|
||||
pushRetryCount: 3,
|
||||
pushRetryDelay: 5 * time.Second,
|
||||
}, nil
|
||||
}
|
||||
|
||||
|
@ -142,6 +148,14 @@ func (pvd *Provider) Pull(ctx context.Context, ref string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// SetPushRetryConfig sets the retry configuration for push operations
|
||||
func (pvd *Provider) SetPushRetryConfig(count int, delay time.Duration) {
|
||||
pvd.mutex.Lock()
|
||||
defer pvd.mutex.Unlock()
|
||||
pvd.pushRetryCount = count
|
||||
pvd.pushRetryDelay = delay
|
||||
}
|
||||
|
||||
func (pvd *Provider) Push(ctx context.Context, desc ocispec.Descriptor, ref string) error {
|
||||
resolver, err := pvd.Resolver(ref)
|
||||
if err != nil {
|
||||
|
@ -153,7 +167,15 @@ func (pvd *Provider) Push(ctx context.Context, desc ocispec.Descriptor, ref stri
|
|||
MaxConcurrentUploadedLayers: LayerConcurrentLimit,
|
||||
}
|
||||
|
||||
err = utils.WithRetry(func() error {
|
||||
return push(ctx, pvd.store, rc, desc, ref)
|
||||
}, pvd.pushRetryCount, pvd.pushRetryDelay)
|
||||
|
||||
if err != nil {
|
||||
logrus.WithError(err).Error("Push failed after all attempts")
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (pvd *Provider) Import(ctx context.Context, reader io.Reader) (string, error) {
|
||||
|
|
|
@ -13,6 +13,7 @@ import (
|
|||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/BraveY/snapshotter-converter/converter"
|
||||
"github.com/containerd/containerd/archive/compression"
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/containerd/containerd/images"
|
||||
|
@ -21,7 +22,6 @@ import (
|
|||
"github.com/containerd/containerd/reference/docker"
|
||||
"github.com/containerd/containerd/remotes"
|
||||
containerdErrdefs "github.com/containerd/errdefs"
|
||||
"github.com/containerd/nydus-snapshotter/pkg/converter"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
|
||||
|
@ -67,23 +67,6 @@ type output struct {
|
|||
Blobs []string
|
||||
}
|
||||
|
||||
func withRetry(handle func() error, total int) error {
|
||||
for {
|
||||
total--
|
||||
err := handle()
|
||||
if err == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if total > 0 && !errors.Is(err, context.Canceled) {
|
||||
logrus.WithError(err).Warnf("retry (remain %d times)", total)
|
||||
continue
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
func hosts(opt Opt) remote.HostFunc {
|
||||
maps := map[string]bool{
|
||||
opt.Source: opt.SourceInsecure,
|
||||
|
@ -189,7 +172,7 @@ func pushBlobFromBackend(
|
|||
},
|
||||
}
|
||||
|
||||
if err := withRetry(func() error {
|
||||
if err := nydusifyUtils.RetryWithAttempts(func() error {
|
||||
pusher, err := getPusherInChunked(ctx, pvd, blobDescs[idx], opt)
|
||||
if err != nil {
|
||||
if errdefs.NeedsRetryWithHTTP(err) {
|
||||
|
|
|
@ -0,0 +1,383 @@
|
|||
// Copyright 2025 Nydus Developers. All rights reserved.
|
||||
//
|
||||
// SPDX-License-Identifier: Apache-2.0
|
||||
|
||||
package modctl
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/dustin/go-humanize"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
const (
|
||||
BlobPath = "/content.v1/docker/registry/v2/blobs/%s/%s/%s/data"
|
||||
ReposPath = "/content.v1/docker/registry/v2/repositories"
|
||||
ManifestPath = "/_manifests/tags/%s/current/link"
|
||||
ModelWeightMediaType = "application/vnd.cnai.model.weight.v1.tar"
|
||||
ModelDatasetMediaType = "application/vnd.cnai.model.dataset.v1.tar"
|
||||
)
|
||||
|
||||
const (
|
||||
DefaultFileChunkSize = "4MiB"
|
||||
)
|
||||
|
||||
var mediaTypeChunkSizeMap = map[string]string{
|
||||
ModelWeightMediaType: "64MiB",
|
||||
ModelDatasetMediaType: "64MiB",
|
||||
}
|
||||
|
||||
var _ backend.Handler = &Handler{}
|
||||
|
||||
type Handler struct {
|
||||
root string
|
||||
registryHost string
|
||||
namespace string
|
||||
imageName string
|
||||
tag string
|
||||
manifest ocispec.Manifest
|
||||
blobs []backend.Blob
|
||||
// key is the blob's sha256, value is the blob's mediaType type and index
|
||||
blobsMap map[string]blobInfo
|
||||
// config layer in modctl's manifest
|
||||
blobConfig ocispec.Descriptor
|
||||
objectID uint32
|
||||
}
|
||||
|
||||
type blobInfo struct {
|
||||
mediaType string
|
||||
// Index in the blobs array
|
||||
blobIndex uint32
|
||||
blobDigest string
|
||||
blobSize string
|
||||
}
|
||||
|
||||
type chunk struct {
|
||||
blobDigest string
|
||||
blobSize string
|
||||
objectID uint32
|
||||
objectContent Object
|
||||
objectOffset uint64
|
||||
}
|
||||
|
||||
// ObjectID returns the blob index of the chunk
|
||||
func (c *chunk) ObjectID() uint32 {
|
||||
return c.objectID
|
||||
}
|
||||
|
||||
func (c *chunk) ObjectContent() interface{} {
|
||||
return c.objectContent
|
||||
}
|
||||
|
||||
// ObjectOffset returns the offset of the chunk in the blob file
|
||||
func (c *chunk) ObjectOffset() uint64 {
|
||||
return c.objectOffset
|
||||
}
|
||||
|
||||
func (c *chunk) FilePath() string {
|
||||
return c.objectContent.Path
|
||||
}
|
||||
|
||||
func (c *chunk) LimitChunkSize() string {
|
||||
return c.objectContent.ChunkSize
|
||||
}
|
||||
|
||||
func (c *chunk) BlobDigest() string {
|
||||
return c.blobDigest
|
||||
}
|
||||
|
||||
func (c *chunk) BlobSize() string {
|
||||
return c.blobSize
|
||||
}
|
||||
|
||||
type Object struct {
|
||||
Path string
|
||||
ChunkSize string
|
||||
}
|
||||
|
||||
type fileInfo struct {
|
||||
name string
|
||||
mode uint32
|
||||
size uint64
|
||||
offset uint64
|
||||
}
|
||||
|
||||
type Option struct {
|
||||
Root string `json:"root"`
|
||||
RegistryHost string `json:"registry_host"`
|
||||
Namespace string `json:"namespace"`
|
||||
ImageName string `json:"image_name"`
|
||||
Tag string `json:"tag"`
|
||||
WeightChunkSize uint64 `josn:"weightChunkSize"`
|
||||
}
|
||||
|
||||
func setWeightChunkSize(chunkSize uint64) {
|
||||
if chunkSize == 0 {
|
||||
chunkSize = 64 * 1024 * 1024
|
||||
}
|
||||
chunkSizeStr := humanize.IBytes(chunkSize)
|
||||
// remove space in chunkSizeStr `16 Mib` -> `16Mib`
|
||||
chunkSizeStr = strings.ReplaceAll(chunkSizeStr, " ", "")
|
||||
mediaTypeChunkSizeMap[ModelWeightMediaType] = chunkSizeStr
|
||||
mediaTypeChunkSizeMap[ModelDatasetMediaType] = chunkSizeStr
|
||||
}
|
||||
|
||||
func getChunkSizeByMediaType(mediaType string) string {
|
||||
if chunkSize, ok := mediaTypeChunkSizeMap[mediaType]; ok {
|
||||
return chunkSize
|
||||
}
|
||||
return DefaultFileChunkSize
|
||||
}
|
||||
|
||||
func NewHandler(opt Option) (*Handler, error) {
|
||||
handler := &Handler{
|
||||
root: opt.Root,
|
||||
registryHost: opt.RegistryHost,
|
||||
namespace: opt.Namespace,
|
||||
imageName: opt.ImageName,
|
||||
tag: opt.Tag,
|
||||
objectID: 0,
|
||||
blobsMap: make(map[string]blobInfo),
|
||||
}
|
||||
if opt.WeightChunkSize != 0 {
|
||||
setWeightChunkSize(opt.WeightChunkSize)
|
||||
}
|
||||
if err := initHandler(handler); err != nil {
|
||||
return nil, errors.Wrap(err, "init handler")
|
||||
}
|
||||
return handler, nil
|
||||
}
|
||||
func initHandler(handler *Handler) error {
|
||||
m, err := handler.extractManifest()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "extract manifest failed")
|
||||
}
|
||||
handler.manifest = *m
|
||||
handler.blobs = convertToBlobs(&handler.manifest)
|
||||
handler.setBlobConfig(m)
|
||||
handler.setBlobsMap()
|
||||
return nil
|
||||
}
|
||||
|
||||
func GetOption(srcRef, modCtlRoot string, weightChunkSize uint64) (*Option, error) {
|
||||
parts := strings.Split(srcRef, "/")
|
||||
if len(parts) != 3 {
|
||||
return nil, errors.Errorf("invalid source ref:%s", srcRef)
|
||||
}
|
||||
nameTagParts := strings.Split(parts[2], ":")
|
||||
if len(nameTagParts) != 2 {
|
||||
return nil, errors.New("invalid source ref for name and tag")
|
||||
}
|
||||
opt := Option{
|
||||
Root: modCtlRoot,
|
||||
RegistryHost: parts[0],
|
||||
Namespace: parts[1],
|
||||
ImageName: nameTagParts[0],
|
||||
Tag: nameTagParts[1],
|
||||
WeightChunkSize: weightChunkSize,
|
||||
}
|
||||
return &opt, nil
|
||||
}
|
||||
|
||||
func (handler *Handler) Handle(_ context.Context, file backend.File) ([]backend.Chunk, error) {
|
||||
chunks := []backend.Chunk{}
|
||||
needIgnore, blobInfo := handler.needIgnore(file.RelativePath)
|
||||
if needIgnore {
|
||||
return nil, nil
|
||||
}
|
||||
chunkSize := getChunkSizeByMediaType(blobInfo.mediaType)
|
||||
|
||||
// read the tar file and get the meta of files
|
||||
f, err := os.Open(filepath.Join(handler.root, file.RelativePath))
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "open tar file failed")
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
files, err := readTarBlob(f)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "read blob failed")
|
||||
}
|
||||
|
||||
chunkSizeInInt, err := humanize.ParseBytes(chunkSize)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "parse chunk size failed")
|
||||
}
|
||||
for _, f := range files {
|
||||
objectOffsets := backend.SplitObjectOffsets(int64(f.size), int64(chunkSizeInInt))
|
||||
for _, objectOffset := range objectOffsets {
|
||||
chunks = append(chunks, &chunk{
|
||||
blobDigest: blobInfo.blobDigest,
|
||||
blobSize: blobInfo.blobSize,
|
||||
objectID: blobInfo.blobIndex,
|
||||
objectContent: Object{
|
||||
Path: f.name,
|
||||
ChunkSize: chunkSize,
|
||||
},
|
||||
objectOffset: f.offset + objectOffset,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
handler.objectID++
|
||||
|
||||
return chunks, nil
|
||||
}
|
||||
|
||||
func (handler *Handler) Backend(context.Context) (*backend.Backend, error) {
|
||||
bkd := backend.Backend{
|
||||
Version: "v1",
|
||||
}
|
||||
bkd.Backends = []backend.Config{
|
||||
{
|
||||
Type: "registry",
|
||||
},
|
||||
}
|
||||
bkd.Blobs = handler.blobs
|
||||
return &bkd, nil
|
||||
}
|
||||
|
||||
func (handler *Handler) GetConfig() ([]byte, error) {
|
||||
return handler.extractBlobs(handler.blobConfig.Digest.String())
|
||||
}
|
||||
|
||||
func (handler *Handler) GetLayers() []ocispec.Descriptor {
|
||||
return handler.manifest.Layers
|
||||
}
|
||||
|
||||
func (handler *Handler) setBlobConfig(m *ocispec.Manifest) {
|
||||
handler.blobConfig = m.Config
|
||||
}
|
||||
|
||||
func (handler *Handler) setBlobsMap() {
|
||||
for i, blob := range handler.blobs {
|
||||
handler.blobsMap[blob.Config.Digest] = blobInfo{
|
||||
mediaType: blob.Config.MediaType,
|
||||
blobIndex: uint32(i),
|
||||
blobDigest: blob.Config.Digest,
|
||||
blobSize: blob.Config.Size,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (handler *Handler) extractManifest() (*ocispec.Manifest, error) {
|
||||
tagPath := fmt.Sprintf(ManifestPath, handler.tag)
|
||||
manifestPath := filepath.Join(handler.root, ReposPath, handler.registryHost, handler.namespace, handler.imageName, tagPath)
|
||||
line, err := os.ReadFile(manifestPath)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "read manifest digest file failed")
|
||||
}
|
||||
content, err := handler.extractBlobs(string(line))
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "extract blobs failed")
|
||||
}
|
||||
|
||||
var m ocispec.Manifest
|
||||
if err := json.Unmarshal(content, &m); err != nil {
|
||||
return nil, errors.Wrap(err, "unmarshal manifest blobs file failed")
|
||||
}
|
||||
return &m, nil
|
||||
}
|
||||
|
||||
func (handler *Handler) extractBlobs(digest string) ([]byte, error) {
|
||||
line := strings.TrimSpace(digest)
|
||||
digestSplit := strings.Split(line, ":")
|
||||
if len(digestSplit) != 2 {
|
||||
return nil, errors.New("invalid digest string")
|
||||
}
|
||||
|
||||
blobPath := fmt.Sprintf(BlobPath, digestSplit[0], digestSplit[1][:2], digestSplit[1])
|
||||
blobPath = filepath.Join(handler.root, blobPath)
|
||||
content, err := os.ReadFile(blobPath)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "read blobs file failed")
|
||||
}
|
||||
return content, nil
|
||||
}
|
||||
|
||||
func convertToBlobs(m *ocispec.Manifest) []backend.Blob {
|
||||
createBlob := func(layer ocispec.Descriptor) backend.Blob {
|
||||
digestStr := layer.Digest.String()
|
||||
digestParts := strings.Split(digestStr, ":")
|
||||
if len(digestParts) == 2 {
|
||||
digestStr = digestParts[1]
|
||||
}
|
||||
|
||||
chunkSize := getChunkSizeByMediaType(layer.MediaType)
|
||||
return backend.Blob{
|
||||
Backend: 0,
|
||||
Config: backend.BlobConfig{
|
||||
MediaType: layer.MediaType,
|
||||
Digest: digestStr,
|
||||
Size: fmt.Sprintf("%d", layer.Size),
|
||||
ChunkSize: chunkSize,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
blobs := make([]backend.Blob, len(m.Layers))
|
||||
|
||||
for i, layer := range m.Layers {
|
||||
blobs[i] = createBlob(layer)
|
||||
}
|
||||
|
||||
return blobs
|
||||
}
|
||||
|
||||
func (handler *Handler) needIgnore(relPath string) (bool, *blobInfo) {
|
||||
// ignore manifest link file
|
||||
if strings.HasSuffix(relPath, "link") {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// ignore blobs file belong to other image
|
||||
parts := strings.Split(relPath, "/")
|
||||
if len(parts) < 3 {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
digest := parts[len(parts)-2]
|
||||
blobInfo, ok := handler.blobsMap[digest]
|
||||
if !ok {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
return false, &blobInfo
|
||||
}
|
||||
|
||||
func readTarBlob(r io.ReadSeeker) ([]fileInfo, error) {
|
||||
var files []fileInfo
|
||||
tarReader := tar.NewReader(r)
|
||||
for {
|
||||
header, err := tarReader.Next()
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
return nil, errors.Wrap(err, "read tar file failed")
|
||||
}
|
||||
currentOffset, err := r.Seek(0, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "seek tar file failed")
|
||||
}
|
||||
files = append(files, fileInfo{
|
||||
name: header.Name,
|
||||
mode: uint32(header.Mode),
|
||||
size: uint64(header.Size),
|
||||
offset: uint64(currentOffset),
|
||||
})
|
||||
}
|
||||
return files, nil
|
||||
}
|
|
@ -0,0 +1,379 @@
|
|||
package modctl
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/agiledragon/gomonkey/v2"
|
||||
"github.com/dustin/go-humanize"
|
||||
"github.com/pkg/errors"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/mock"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
// Test cases for readBlob function
|
||||
func TestReadImageRefBlob(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
targeRef := os.Getenv("NYDUS_MODEL_IMAGE_REF")
|
||||
if targeRef == "" {
|
||||
t.Skip("NYDUS_MODEL_IMAGE_REF is not set, skip test")
|
||||
}
|
||||
remoter, err := pkgPvd.DefaultRemote(targeRef, true)
|
||||
require.Nil(t, err)
|
||||
// Pull manifest
|
||||
maniDesc, err := remoter.Resolve(ctx)
|
||||
require.Nil(t, err)
|
||||
t.Logf("manifest desc: %v", maniDesc)
|
||||
rc, err := remoter.Pull(ctx, *maniDesc, true)
|
||||
require.Nil(t, err)
|
||||
defer rc.Close()
|
||||
var buf bytes.Buffer
|
||||
io.Copy(&buf, rc)
|
||||
var manifest ocispec.Manifest
|
||||
err = json.Unmarshal(buf.Bytes(), &manifest)
|
||||
require.Nil(t, err)
|
||||
t.Logf("manifest: %v", manifest)
|
||||
|
||||
for _, layer := range manifest.Layers {
|
||||
startTime := time.Now()
|
||||
rsc, err := remoter.ReadSeekCloser(context.Background(), layer, true)
|
||||
require.Nil(t, err)
|
||||
defer rsc.Close()
|
||||
rs, ok := rsc.(io.ReadSeeker)
|
||||
require.True(t, ok)
|
||||
files, err := readTarBlob(rs)
|
||||
require.Nil(t, err)
|
||||
require.NotEqual(t, 0, len(files))
|
||||
t.Logf("files: %v, elapesed: %v", files, time.Since(startTime))
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// MockReadSeeker is a mock implementation of io.ReadSeeker
|
||||
type MockReadSeeker struct {
|
||||
mock.Mock
|
||||
}
|
||||
|
||||
func (m *MockReadSeeker) Read(p []byte) (n int, err error) {
|
||||
args := m.Called(p)
|
||||
return args.Int(0), args.Error(1)
|
||||
}
|
||||
|
||||
func (m *MockReadSeeker) Seek(offset int64, whence int) (int64, error) {
|
||||
args := m.Called(offset, whence)
|
||||
return args.Get(0).(int64), args.Error(1)
|
||||
}
|
||||
|
||||
func TestReadTarBlob(t *testing.T) {
|
||||
t.Run("Normal case: valid tar file", func(t *testing.T) {
|
||||
// Create a valid tar file in memory
|
||||
var buf bytes.Buffer
|
||||
tw := tar.NewWriter(&buf)
|
||||
files := []struct {
|
||||
name string
|
||||
size int64
|
||||
}{
|
||||
{"file1.txt", 10},
|
||||
{"file2.txt", 20},
|
||||
{"file3.txt", 30},
|
||||
}
|
||||
for _, file := range files {
|
||||
header := &tar.Header{
|
||||
Name: file.name,
|
||||
Size: file.size,
|
||||
}
|
||||
if err := tw.WriteHeader(header); err != nil {
|
||||
t.Fatalf("Failed to write tar header: %v", err)
|
||||
}
|
||||
if _, err := tw.Write(make([]byte, file.size)); err != nil {
|
||||
t.Fatalf("Failed to write tar content: %v", err)
|
||||
}
|
||||
}
|
||||
tw.Close()
|
||||
|
||||
reader := bytes.NewReader(buf.Bytes()) // Convert *bytes.Buffer to io.ReadSeeker
|
||||
result, err := readTarBlob(reader)
|
||||
|
||||
assert.NoError(t, err)
|
||||
assert.Len(t, result, len(files))
|
||||
|
||||
for i, file := range files {
|
||||
assert.Equal(t, file.name, result[i].name)
|
||||
// Since the file size is less than 512 bytes, it will be padded to 512 bytes in the tar body.
|
||||
assert.Equal(t, uint64((2*i+1)*512), result[i].offset)
|
||||
assert.Equal(t, uint64(file.size), result[i].size)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Empty tar file", func(t *testing.T) {
|
||||
// Create an empty tar file in memory
|
||||
var buf bytes.Buffer
|
||||
tw := tar.NewWriter(&buf)
|
||||
tw.Close()
|
||||
|
||||
// Call the function
|
||||
reader := bytes.NewReader(buf.Bytes()) // Convert *bytes.Buffer to io.ReadSeeker
|
||||
result, err := readTarBlob(reader)
|
||||
|
||||
// Validate the result
|
||||
assert.NoError(t, err)
|
||||
assert.Empty(t, result)
|
||||
})
|
||||
|
||||
t.Run("I/O error during read", func(t *testing.T) {
|
||||
// Create a mock ReadSeeker that returns an error on Read
|
||||
mockReader := new(MockReadSeeker)
|
||||
mockReader.On("Read", mock.Anything).Return(0, errors.New("mock read error"))
|
||||
mockReader.On("Seek", mock.Anything, mock.Anything).Return(int64(0), nil)
|
||||
|
||||
// Call the function
|
||||
_, err := readTarBlob(mockReader)
|
||||
|
||||
// Validate the error
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "read tar file failed")
|
||||
})
|
||||
}
|
||||
|
||||
func TestGetOption(t *testing.T) {
|
||||
t.Run("Valid srcRef", func(t *testing.T) {
|
||||
srcRef := "host/namespace/image:tag"
|
||||
modCtlRoot := "/mock/root"
|
||||
weightChunkSize := uint64(64 * 1024 * 1024)
|
||||
|
||||
opt, err := GetOption(srcRef, modCtlRoot, weightChunkSize)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "host", opt.RegistryHost)
|
||||
assert.Equal(t, "namespace", opt.Namespace)
|
||||
assert.Equal(t, "image", opt.ImageName)
|
||||
assert.Equal(t, "tag", opt.Tag)
|
||||
assert.Equal(t, weightChunkSize, opt.WeightChunkSize)
|
||||
})
|
||||
|
||||
t.Run("Invalid srcRef format", func(t *testing.T) {
|
||||
srcRef := "invalid-ref"
|
||||
modCtlRoot := "/mock/root"
|
||||
weightChunkSize := uint64(64 * 1024 * 1024)
|
||||
|
||||
_, err := GetOption(srcRef, modCtlRoot, weightChunkSize)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
func TestHandle(t *testing.T) {
|
||||
handler := &Handler{
|
||||
root: "/tmp",
|
||||
}
|
||||
|
||||
t.Run("File ignored", func(t *testing.T) {
|
||||
file := backend.File{RelativePath: "ignored-file/link"}
|
||||
chunks, err := handler.Handle(context.Background(), file)
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, chunks)
|
||||
})
|
||||
|
||||
handler.blobsMap = make(map[string]blobInfo)
|
||||
handler.blobsMap["test_digest"] = blobInfo{
|
||||
mediaType: ModelWeightMediaType,
|
||||
}
|
||||
t.Run("Open file failure", func(t *testing.T) {
|
||||
file := backend.File{RelativePath: "test/test_digest/nonexistent-file"}
|
||||
_, err := handler.Handle(context.Background(), file)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "open tar file failed")
|
||||
})
|
||||
|
||||
t.Run("Normal", func(t *testing.T) {
|
||||
os.MkdirAll("/tmp/test/test_digest/", 0755)
|
||||
testFile, err := os.CreateTemp("/tmp/test/test_digest/", "test_tar")
|
||||
assert.NoError(t, err)
|
||||
defer testFile.Close()
|
||||
defer os.RemoveAll(testFile.Name())
|
||||
tw := tar.NewWriter(testFile)
|
||||
header := &tar.Header{
|
||||
Name: "test.txt",
|
||||
Mode: 0644,
|
||||
Size: 4,
|
||||
}
|
||||
assert.NoError(t, tw.WriteHeader(header))
|
||||
_, err = tw.Write([]byte("test"))
|
||||
assert.NoError(t, err)
|
||||
tw.Close()
|
||||
testFilePath := strings.TrimPrefix(testFile.Name(), "/tmp/")
|
||||
file := backend.File{RelativePath: testFilePath}
|
||||
chunks, err := handler.Handle(context.Background(), file)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 1, len(chunks))
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
func TestModctlBackend(t *testing.T) {
|
||||
handler := &Handler{
|
||||
blobs: []backend.Blob{
|
||||
{
|
||||
Config: backend.BlobConfig{
|
||||
MediaType: "application/vnd.cnai.model.weight.v1.tar",
|
||||
Digest: "sha256:mockdigest",
|
||||
Size: "1024",
|
||||
ChunkSize: "64MiB",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
bkd, err := handler.Backend(context.Background())
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "v1", bkd.Version)
|
||||
assert.Equal(t, "registry", bkd.Backends[0].Type)
|
||||
assert.Len(t, bkd.Blobs, 1)
|
||||
}
|
||||
|
||||
func TestConvertToBlobs(t *testing.T) {
|
||||
manifestWithColon := &ocispec.Manifest{
|
||||
Layers: []ocispec.Descriptor{
|
||||
{
|
||||
Digest: digest.Digest("sha256:abc123"),
|
||||
MediaType: ModelWeightMediaType,
|
||||
Size: 100,
|
||||
},
|
||||
},
|
||||
}
|
||||
actualBlobs1 := convertToBlobs(manifestWithColon)
|
||||
assert.Equal(t, 1, len(actualBlobs1))
|
||||
assert.Equal(t, ModelWeightMediaType, actualBlobs1[0].Config.MediaType)
|
||||
assert.Equal(t, "abc123", actualBlobs1[0].Config.Digest)
|
||||
|
||||
manifestWithoutColon := &ocispec.Manifest{
|
||||
Layers: []ocispec.Descriptor{
|
||||
{
|
||||
Digest: digest.Digest("abc123"),
|
||||
MediaType: ModelDatasetMediaType,
|
||||
Size: 100,
|
||||
},
|
||||
},
|
||||
}
|
||||
actualBlobs2 := convertToBlobs(manifestWithoutColon)
|
||||
assert.Equal(t, 1, len(actualBlobs2))
|
||||
assert.Equal(t, ModelDatasetMediaType, actualBlobs2[0].Config.MediaType)
|
||||
assert.Equal(t, "abc123", actualBlobs2[0].Config.Digest)
|
||||
}
|
||||
|
||||
func TestExtractManifest(t *testing.T) {
|
||||
handler := &Handler{
|
||||
root: "/tmp/test",
|
||||
}
|
||||
tagPath := fmt.Sprintf(ManifestPath, handler.tag)
|
||||
manifestPath := filepath.Join(handler.root, ReposPath, handler.registryHost, handler.namespace, handler.imageName, tagPath)
|
||||
dir := filepath.Dir(manifestPath)
|
||||
os.MkdirAll(dir, 0755)
|
||||
maniFile, err := os.Create(manifestPath)
|
||||
assert.NoError(t, err)
|
||||
_, err = maniFile.WriteString("sha256:abc1234")
|
||||
assert.NoError(t, err)
|
||||
maniFile.Close()
|
||||
defer os.RemoveAll(manifestPath)
|
||||
t.Logf("manifest path: %s", manifestPath)
|
||||
os.MkdirAll(filepath.Dir(manifestPath), 0755)
|
||||
// No file
|
||||
_, err = handler.extractManifest()
|
||||
assert.Error(t, err)
|
||||
|
||||
var m = ocispec.Manifest{
|
||||
Config: ocispec.Descriptor{
|
||||
MediaType: ModelWeightMediaType,
|
||||
Digest: "sha256:abc1234",
|
||||
Size: 10,
|
||||
},
|
||||
}
|
||||
data, err := json.Marshal(m)
|
||||
assert.NoError(t, err)
|
||||
blobDir := "/tmp/test/content.v1/docker/registry/v2/blobs/sha256/ab/abc1234/"
|
||||
os.MkdirAll(blobDir, 0755)
|
||||
blobPath := blobDir + "data"
|
||||
blobFile, err := os.Create(blobPath)
|
||||
assert.NoError(t, err)
|
||||
defer os.RemoveAll(blobPath)
|
||||
io.Copy(blobFile, bytes.NewReader(data))
|
||||
blobFile.Close()
|
||||
|
||||
mani, err := handler.extractManifest()
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, mani.Config.Digest.String(), "sha256:abc1234")
|
||||
}
|
||||
|
||||
func TestSetBlobsMap(t *testing.T) {
|
||||
handler := &Handler{
|
||||
root: "/tmp",
|
||||
blobs: make([]backend.Blob, 0),
|
||||
blobsMap: map[string]blobInfo{},
|
||||
}
|
||||
handler.blobs = append(handler.blobs, backend.Blob{
|
||||
Config: backend.BlobConfig{
|
||||
Digest: "sha256:abc1234",
|
||||
},
|
||||
})
|
||||
handler.setBlobsMap()
|
||||
assert.Equal(t, handler.blobsMap["sha256:abc1234"].blobDigest, "sha256:abc1234")
|
||||
}
|
||||
|
||||
func TestSetWeightChunkSize(t *testing.T) {
|
||||
setWeightChunkSize(0)
|
||||
expectedDefault := "64MiB"
|
||||
assert.Equal(t, expectedDefault, mediaTypeChunkSizeMap[ModelWeightMediaType], "Weight media type should be set to default value")
|
||||
assert.Equal(t, expectedDefault, mediaTypeChunkSizeMap[ModelDatasetMediaType], "Dataset media type should be set to default value")
|
||||
|
||||
chunkSize := uint64(16 * 1024 * 1024)
|
||||
setWeightChunkSize(chunkSize)
|
||||
expectedNonDefault := humanize.IBytes(chunkSize)
|
||||
expectedNonDefault = strings.ReplaceAll(expectedNonDefault, " ", "")
|
||||
|
||||
assert.Equal(t, expectedNonDefault, mediaTypeChunkSizeMap[ModelWeightMediaType], "Weight media type should match the specified chunk size")
|
||||
assert.Equal(t, expectedNonDefault, mediaTypeChunkSizeMap[ModelDatasetMediaType], "Dataset media type should match the specified chunk size")
|
||||
}
|
||||
|
||||
func TestNewHandler(t *testing.T) {
|
||||
// handler := &Handler{}
|
||||
t.Run("Run extract manifest failed", func(t *testing.T) {
|
||||
_, err := NewHandler(Option{})
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run Normal", func(t *testing.T) {
|
||||
initHandlerPatches := gomonkey.ApplyFunc(initHandler, func(*Handler) error {
|
||||
return nil
|
||||
})
|
||||
defer initHandlerPatches.Reset()
|
||||
handler, err := NewHandler(Option{})
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, handler)
|
||||
})
|
||||
}
|
||||
|
||||
func TestInitHandler(t *testing.T) {
|
||||
t.Run("Run initHandler failed", func(t *testing.T) {
|
||||
handler := &Handler{}
|
||||
extractManifestPatches := gomonkey.ApplyPrivateMethod(handler, "extractManifest", func() (*ocispec.Manifest, error) {
|
||||
return &ocispec.Manifest{}, nil
|
||||
})
|
||||
defer extractManifestPatches.Reset()
|
||||
err := initHandler(handler)
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
|
@ -0,0 +1,249 @@
|
|||
// Copyright 2025 Nydus Developers. All rights reserved.
|
||||
//
|
||||
// SPDX-License-Identifier: Apache-2.0
|
||||
package modctl
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"os"
|
||||
"strconv"
|
||||
"sync"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
"golang.org/x/sync/errgroup"
|
||||
|
||||
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
|
||||
pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
type RemoteInterface interface {
|
||||
Resolve(ctx context.Context) (*ocispec.Descriptor, error)
|
||||
Pull(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadCloser, error)
|
||||
ReadSeekCloser(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadSeekCloser, error)
|
||||
WithHTTP()
|
||||
MaybeWithHTTP(err error)
|
||||
}
|
||||
|
||||
type RemoteHandler struct {
|
||||
ctx context.Context
|
||||
imageRef string
|
||||
remoter RemoteInterface
|
||||
manifest ocispec.Manifest
|
||||
// convert from the manifest.Layers, same order as manifest.Layers
|
||||
blobs []backend.Blob
|
||||
}
|
||||
|
||||
type FileCrcList struct {
|
||||
Files []FileCrcInfo `json:"files"`
|
||||
}
|
||||
|
||||
type FileCrcInfo struct {
|
||||
FilePath string `json:"file_path"`
|
||||
ChunkCrcs string `json:"chunk_crcs"`
|
||||
}
|
||||
|
||||
const (
|
||||
filePathKey = "org.cnai.model.filepath"
|
||||
crcsKey = "org.cnai.nydus.crcs"
|
||||
)
|
||||
|
||||
func NewRemoteHandler(ctx context.Context, imageRef string, plainHTTP bool) (*RemoteHandler, error) {
|
||||
remoter, err := pkgPvd.DefaultRemote(imageRef, true)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "new remote failed")
|
||||
}
|
||||
if plainHTTP {
|
||||
remoter.WithHTTP()
|
||||
}
|
||||
handler := &RemoteHandler{
|
||||
ctx: ctx,
|
||||
imageRef: imageRef,
|
||||
remoter: remoter,
|
||||
}
|
||||
if err := initRemoteHandler(handler); err != nil {
|
||||
return nil, errors.Wrap(err, "init remote handler failed")
|
||||
}
|
||||
return handler, nil
|
||||
}
|
||||
|
||||
func initRemoteHandler(handler *RemoteHandler) error {
|
||||
if err := handler.setManifest(); err != nil {
|
||||
return errors.Wrap(err, "set manifest failed")
|
||||
}
|
||||
handler.blobs = convertToBlobs(&handler.manifest)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (handler *RemoteHandler) Handle(ctx context.Context) (*backend.Backend, []backend.FileAttribute, error) {
|
||||
var (
|
||||
fileAttrs []backend.FileAttribute
|
||||
mu sync.Mutex
|
||||
eg *errgroup.Group
|
||||
)
|
||||
eg, ctx = errgroup.WithContext(ctx)
|
||||
eg.SetLimit(10)
|
||||
|
||||
for idx, layer := range handler.manifest.Layers {
|
||||
eg.Go(func() error {
|
||||
var fa []backend.FileAttribute
|
||||
err := utils.RetryWithAttempts(func() error {
|
||||
_fa, err := handler.handle(ctx, layer, int32(idx))
|
||||
fa = _fa
|
||||
return err
|
||||
}, 5)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
mu.Lock()
|
||||
fileAttrs = append(fileAttrs, fa...)
|
||||
mu.Unlock()
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
if err := eg.Wait(); err != nil {
|
||||
return nil, nil, errors.Wrap(err, "wait for handle failed")
|
||||
}
|
||||
|
||||
bkd, err := handler.backend()
|
||||
if err != nil {
|
||||
return nil, nil, errors.Wrap(err, "get backend failed")
|
||||
}
|
||||
|
||||
return bkd, fileAttrs, nil
|
||||
}
|
||||
|
||||
func (handler *RemoteHandler) GetModelConfig() (*modelspec.Model, error) {
|
||||
var modelCfg modelspec.Model
|
||||
rc, err := handler.remoter.Pull(handler.ctx, handler.manifest.Config, true)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "pull model config failed")
|
||||
}
|
||||
defer rc.Close()
|
||||
var buf bytes.Buffer
|
||||
if _, err = io.Copy(&buf, rc); err != nil {
|
||||
return nil, errors.Wrap(err, "copy model config failed")
|
||||
}
|
||||
if err = json.Unmarshal(buf.Bytes(), &modelCfg); err != nil {
|
||||
return nil, errors.Wrap(err, "unmarshal model config failed")
|
||||
}
|
||||
return &modelCfg, nil
|
||||
}
|
||||
|
||||
func (handler *RemoteHandler) GetLayers() []ocispec.Descriptor {
|
||||
return handler.manifest.Layers
|
||||
}
|
||||
|
||||
func (handler *RemoteHandler) setManifest() error {
|
||||
maniDesc, err := handler.remoter.Resolve(handler.ctx)
|
||||
if utils.RetryWithHTTP(err) {
|
||||
handler.remoter.MaybeWithHTTP(err)
|
||||
maniDesc, err = handler.remoter.Resolve(handler.ctx)
|
||||
}
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "resolve image manifest failed")
|
||||
}
|
||||
|
||||
rc, err := handler.remoter.Pull(handler.ctx, *maniDesc, true)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "pull manifest failed")
|
||||
}
|
||||
defer rc.Close()
|
||||
var buf bytes.Buffer
|
||||
io.Copy(&buf, rc)
|
||||
var manifest ocispec.Manifest
|
||||
if err = json.Unmarshal(buf.Bytes(), &manifest); err != nil {
|
||||
return errors.Wrap(err, "unmarshal manifest failed")
|
||||
}
|
||||
handler.manifest = manifest
|
||||
return nil
|
||||
}
|
||||
|
||||
func (handler *RemoteHandler) backend() (*backend.Backend, error) {
|
||||
bkd := backend.Backend{
|
||||
Version: "v1",
|
||||
}
|
||||
bkd.Backends = []backend.Config{
|
||||
{
|
||||
Type: "registry",
|
||||
},
|
||||
}
|
||||
bkd.Blobs = handler.blobs
|
||||
return &bkd, nil
|
||||
}
|
||||
|
||||
func (handler *RemoteHandler) handle(ctx context.Context, layer ocispec.Descriptor, index int32) ([]backend.FileAttribute, error) {
|
||||
logrus.Debugf("handle layer: %s", layer.Digest.String())
|
||||
chunkSize := getChunkSizeByMediaType(layer.MediaType)
|
||||
rsc, err := handler.remoter.ReadSeekCloser(ctx, layer, true)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "read seek closer failed")
|
||||
}
|
||||
defer rsc.Close()
|
||||
files, err := readTarBlob(rsc)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "read tar blob failed")
|
||||
}
|
||||
|
||||
var fileCrcList = FileCrcList{}
|
||||
var fileCrcMap = make(map[string]string)
|
||||
if layer.Annotations != nil {
|
||||
if c, ok := layer.Annotations[crcsKey]; ok {
|
||||
if err := json.Unmarshal([]byte(c), &fileCrcList); err != nil {
|
||||
return nil, errors.Wrap(err, "unmarshal crcs failed")
|
||||
}
|
||||
for _, f := range fileCrcList.Files {
|
||||
fileCrcMap[f.FilePath] = f.ChunkCrcs
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
blobInfo := handler.blobs[index].Config
|
||||
fileAttrs := make([]backend.FileAttribute, len(files))
|
||||
hackFile := os.Getenv("HACK_FILE")
|
||||
for idx, f := range files {
|
||||
if hackFile != "" && f.name == hackFile {
|
||||
hackFileWrapper(&f)
|
||||
}
|
||||
|
||||
fileAttrs[idx] = backend.FileAttribute{
|
||||
BlobID: blobInfo.Digest,
|
||||
BlobIndex: uint32(index),
|
||||
BlobSize: blobInfo.Size,
|
||||
FileSize: f.size,
|
||||
Chunk0CompressedOffset: f.offset,
|
||||
ChunkSize: chunkSize,
|
||||
RelativePath: f.name,
|
||||
Type: "external",
|
||||
Mode: f.mode,
|
||||
}
|
||||
if crcs, ok := fileCrcMap[f.name]; ok {
|
||||
fileAttrs[idx].Crcs = crcs
|
||||
}
|
||||
}
|
||||
return fileAttrs, nil
|
||||
}
|
||||
|
||||
func hackFileWrapper(f *fileInfo) {
|
||||
// HACK to chmod config.json to 0640
|
||||
hackMode := uint32(0640)
|
||||
// etc 640.
|
||||
hackModeStr := os.Getenv("HACK_MODE")
|
||||
if hackModeStr != "" {
|
||||
modeValue, err := strconv.ParseUint(hackModeStr, 8, 32)
|
||||
if err != nil {
|
||||
logrus.Errorf("Invalid HACK_MODE value: %s, using default 0640", hackModeStr)
|
||||
} else {
|
||||
hackMode = uint32(modeValue)
|
||||
}
|
||||
}
|
||||
f.mode = hackMode
|
||||
logrus.Infof("hack file: %s mode: %o", f.name, f.mode)
|
||||
}
|
|
@ -0,0 +1,256 @@
|
|||
package modctl
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
|
||||
"github.com/agiledragon/gomonkey/v2"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
type MockRemote struct {
|
||||
ResolveFunc func(ctx context.Context) (*ocispec.Descriptor, error)
|
||||
PullFunc func(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadCloser, error)
|
||||
ReadSeekCloserFunc func(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadSeekCloser, error)
|
||||
WithHTTPFunc func()
|
||||
MaybeWithHTTPFunc func(err error)
|
||||
}
|
||||
|
||||
func (m *MockRemote) Resolve(ctx context.Context) (*ocispec.Descriptor, error) {
|
||||
return m.ResolveFunc(ctx)
|
||||
}
|
||||
|
||||
func (m *MockRemote) Pull(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadCloser, error) {
|
||||
return m.PullFunc(ctx, desc, plainHTTP)
|
||||
}
|
||||
|
||||
func (m *MockRemote) ReadSeekCloser(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadSeekCloser, error) {
|
||||
return m.ReadSeekCloserFunc(ctx, desc, plainHTTP)
|
||||
}
|
||||
|
||||
func (m *MockRemote) WithHTTP() {
|
||||
m.WithHTTPFunc()
|
||||
}
|
||||
|
||||
func (m *MockRemote) MaybeWithHTTP(err error) {
|
||||
m.MaybeWithHTTPFunc(err)
|
||||
}
|
||||
|
||||
type readSeekCloser struct {
|
||||
*bytes.Reader
|
||||
}
|
||||
|
||||
func (r *readSeekCloser) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestRemoteHandler_Handle(t *testing.T) {
|
||||
mockRemote := &MockRemote{
|
||||
ResolveFunc: func(context.Context) (*ocispec.Descriptor, error) {
|
||||
return &ocispec.Descriptor{}, nil
|
||||
},
|
||||
PullFunc: func(context.Context, ocispec.Descriptor, bool) (io.ReadCloser, error) {
|
||||
return io.NopCloser(bytes.NewReader([]byte("{}"))), nil
|
||||
},
|
||||
ReadSeekCloserFunc: func(context.Context, ocispec.Descriptor, bool) (io.ReadSeekCloser, error) {
|
||||
// prepare tar
|
||||
|
||||
var buf bytes.Buffer
|
||||
tw := tar.NewWriter(&buf)
|
||||
files := []struct {
|
||||
name string
|
||||
size int64
|
||||
}{
|
||||
{"file1.txt", 10},
|
||||
{"file2.txt", 20},
|
||||
{"file3.txt", 30},
|
||||
}
|
||||
for _, file := range files {
|
||||
header := &tar.Header{
|
||||
Name: file.name,
|
||||
Size: file.size,
|
||||
}
|
||||
if err := tw.WriteHeader(header); err != nil {
|
||||
t.Fatalf("Failed to write tar header: %v", err)
|
||||
}
|
||||
if _, err := tw.Write(make([]byte, file.size)); err != nil {
|
||||
t.Fatalf("Failed to write tar content: %v", err)
|
||||
}
|
||||
}
|
||||
tw.Close()
|
||||
reader := bytes.NewReader(buf.Bytes())
|
||||
return &readSeekCloser{reader}, nil
|
||||
},
|
||||
WithHTTPFunc: func() {},
|
||||
MaybeWithHTTPFunc: func(error) {},
|
||||
}
|
||||
|
||||
fileCrcInfo := &FileCrcInfo{
|
||||
ChunkCrcs: "0x1234,0x5678",
|
||||
FilePath: "file1.txt",
|
||||
}
|
||||
fileCrcList := &FileCrcList{
|
||||
Files: []FileCrcInfo{
|
||||
*fileCrcInfo,
|
||||
},
|
||||
}
|
||||
crcs, err := json.Marshal(fileCrcList)
|
||||
require.NoError(t, err)
|
||||
annotations := map[string]string{
|
||||
filePathKey: "file1.txt",
|
||||
crcsKey: string(crcs),
|
||||
}
|
||||
handler := &RemoteHandler{
|
||||
ctx: context.Background(),
|
||||
imageRef: "test-image",
|
||||
remoter: mockRemote,
|
||||
manifest: ocispec.Manifest{
|
||||
Layers: []ocispec.Descriptor{
|
||||
{
|
||||
MediaType: "test-media-type",
|
||||
Digest: "test-digest",
|
||||
Annotations: annotations,
|
||||
},
|
||||
},
|
||||
},
|
||||
blobs: []backend.Blob{
|
||||
{
|
||||
Config: backend.BlobConfig{
|
||||
Digest: "test-digest",
|
||||
Size: "100",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
backend, fileAttrs, err := handler.Handle(context.Background())
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, backend)
|
||||
assert.NotEmpty(t, fileAttrs)
|
||||
assert.Equal(t, 3, len(fileAttrs))
|
||||
assert.Equal(t, fileCrcInfo.ChunkCrcs, fileAttrs[0].Crcs)
|
||||
assert.Equal(t, "", fileAttrs[1].Crcs)
|
||||
|
||||
handler.manifest.Layers[0].Annotations = map[string]string{
|
||||
filePathKey: "file1.txt",
|
||||
crcsKey: "0x1234,0x5678",
|
||||
}
|
||||
_, _, err = handler.Handle(context.Background())
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestGetModelConfig(t *testing.T) {
|
||||
mockRemote := &MockRemote{
|
||||
ResolveFunc: func(context.Context) (*ocispec.Descriptor, error) {
|
||||
return &ocispec.Descriptor{}, nil
|
||||
},
|
||||
PullFunc: func(_ context.Context, desc ocispec.Descriptor, _ bool) (io.ReadCloser, error) {
|
||||
desc = ocispec.Descriptor{
|
||||
MediaType: modelspec.MediaTypeModelConfig,
|
||||
Size: desc.Size,
|
||||
}
|
||||
data, err := json.Marshal(desc)
|
||||
assert.Nil(t, err)
|
||||
return io.NopCloser(bytes.NewReader(data)), nil
|
||||
},
|
||||
}
|
||||
|
||||
handler := &RemoteHandler{
|
||||
ctx: context.Background(),
|
||||
imageRef: "test-image",
|
||||
remoter: mockRemote,
|
||||
}
|
||||
|
||||
modelConfig, err := handler.GetModelConfig()
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, modelConfig)
|
||||
}
|
||||
|
||||
func TestSetManifest(t *testing.T) {
|
||||
mockRemote := &MockRemote{
|
||||
ResolveFunc: func(context.Context) (*ocispec.Descriptor, error) {
|
||||
return &ocispec.Descriptor{}, nil
|
||||
},
|
||||
PullFunc: func(context.Context, ocispec.Descriptor, bool) (io.ReadCloser, error) {
|
||||
mani := ocispec.Manifest{
|
||||
MediaType: ocispec.MediaTypeImageManifest,
|
||||
}
|
||||
data, err := json.Marshal(mani)
|
||||
assert.Nil(t, err)
|
||||
return io.NopCloser(bytes.NewReader(data)), nil
|
||||
},
|
||||
}
|
||||
handler := &RemoteHandler{
|
||||
ctx: context.Background(),
|
||||
imageRef: "test-image",
|
||||
remoter: mockRemote,
|
||||
}
|
||||
|
||||
err := handler.setManifest()
|
||||
assert.Nil(t, err)
|
||||
}
|
||||
|
||||
func TestBackend(t *testing.T) {
|
||||
handler := &RemoteHandler{
|
||||
manifest: ocispec.Manifest{},
|
||||
blobs: []backend.Blob{
|
||||
{
|
||||
Config: backend.BlobConfig{
|
||||
Digest: "test-digest",
|
||||
Size: "100",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
backend, err := handler.backend()
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, backend)
|
||||
assert.Equal(t, "v1", backend.Version)
|
||||
assert.Equal(t, "registry", backend.Backends[0].Type)
|
||||
}
|
||||
|
||||
func TestNewRemoteHandler(t *testing.T) {
|
||||
var remoter = remote.Remote{}
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(provider.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return &remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
initRemoteHandlerPatches := gomonkey.ApplyFunc(initRemoteHandler, func(*RemoteHandler) error {
|
||||
return nil
|
||||
})
|
||||
defer initRemoteHandlerPatches.Reset()
|
||||
|
||||
remoteHandler, err := NewRemoteHandler(context.Background(), "test", false)
|
||||
assert.Nil(t, err)
|
||||
assert.NotNil(t, remoteHandler)
|
||||
}
|
||||
|
||||
func TestInitRemoteHandlerError(t *testing.T) {
|
||||
handler := &RemoteHandler{}
|
||||
setManifestPatches := gomonkey.ApplyPrivateMethod(handler, "setManifest", func(*RemoteHandler) error {
|
||||
return nil
|
||||
})
|
||||
defer setManifestPatches.Reset()
|
||||
err := initRemoteHandler(handler)
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestHackFileWrapper(t *testing.T) {
|
||||
f := &fileInfo{}
|
||||
os.Setenv("HACK_MODE", "0640")
|
||||
hackFileWrapper(f)
|
||||
assert.Equal(t, uint32(0640), f.mode)
|
||||
}
|
|
@ -18,6 +18,7 @@ import (
|
|||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/containerd/containerd/mount"
|
||||
"github.com/opencontainers/go-digest"
|
||||
|
@ -113,7 +114,7 @@ func (sl *defaultSourceLayer) Mount(ctx context.Context) ([]mount.Mount, func()
|
|||
}
|
||||
|
||||
return nil
|
||||
}); err != nil {
|
||||
}, 3, 5*time.Second); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
|
|
|
@ -88,7 +88,7 @@ func (remote *Remote) ReaderAt(ctx context.Context, desc ocispec.Descriptor, byD
|
|||
}
|
||||
|
||||
// Create a new resolver instance for the request
|
||||
fetcher, err := remote.resolverFunc(remote.retryWithHTTP).Fetcher(ctx, ref)
|
||||
fetcher, err := remote.resolverFunc(remote.withHTTP).Fetcher(ctx, ref)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -97,3 +97,29 @@ func (remote *Remote) ReaderAt(ctx context.Context, desc ocispec.Descriptor, byD
|
|||
provider := FromFetcher(fetcher)
|
||||
return provider.ReaderAt(ctx, desc)
|
||||
}
|
||||
|
||||
func (remote *Remote) ReadSeekCloser(ctx context.Context, desc ocispec.Descriptor, byDigest bool) (io.ReadSeekCloser, error) {
|
||||
var ref string
|
||||
if byDigest {
|
||||
ref = remote.parsed.Name()
|
||||
} else {
|
||||
ref = reference.TagNameOnly(remote.parsed).String()
|
||||
}
|
||||
|
||||
// Create a new resolver instance for the request
|
||||
fetcher, err := remote.resolverFunc(remote.withHTTP).Fetcher(ctx, ref)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
rc, err := fetcher.Fetch(ctx, desc)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
rsc, ok := rc.(io.ReadSeekCloser)
|
||||
if !ok {
|
||||
return nil, errors.New("fetcher does not support ReadSeekCloser")
|
||||
}
|
||||
return rsc, nil
|
||||
}
|
||||
|
|
|
@ -0,0 +1,127 @@
|
|||
package remote
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/containerd/containerd/remotes"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
type MockNamed struct {
|
||||
mockName string
|
||||
mockString string
|
||||
}
|
||||
|
||||
func (m *MockNamed) Name() string {
|
||||
return m.mockName
|
||||
}
|
||||
func (m *MockNamed) String() string {
|
||||
return m.mockString
|
||||
}
|
||||
|
||||
// MockResolver implements the Resolver interface for testing purposes.
|
||||
type MockResolver struct {
|
||||
ResolveFunc func(ctx context.Context, ref string) (string, ocispec.Descriptor, error)
|
||||
FetcherFunc func(ctx context.Context, ref string) (remotes.Fetcher, error)
|
||||
PusherFunc func(ctx context.Context, ref string) (remotes.Pusher, error)
|
||||
PusherInChunkedFunc func(ctx context.Context, ref string) (remotes.PusherInChunked, error)
|
||||
}
|
||||
|
||||
// Resolve implements the Resolver.Resolve method.
|
||||
func (m *MockResolver) Resolve(ctx context.Context, ref string) (string, ocispec.Descriptor, error) {
|
||||
if m.ResolveFunc != nil {
|
||||
return m.ResolveFunc(ctx, ref)
|
||||
}
|
||||
return "", ocispec.Descriptor{}, errors.New("ResolveFunc not implemented")
|
||||
}
|
||||
|
||||
// Fetcher implements the Resolver.Fetcher method.
|
||||
func (m *MockResolver) Fetcher(ctx context.Context, ref string) (remotes.Fetcher, error) {
|
||||
if m.FetcherFunc != nil {
|
||||
return m.FetcherFunc(ctx, ref)
|
||||
}
|
||||
return nil, errors.New("FetcherFunc not implemented")
|
||||
}
|
||||
|
||||
// Pusher implements the Resolver.Pusher method.
|
||||
func (m *MockResolver) Pusher(ctx context.Context, ref string) (remotes.Pusher, error) {
|
||||
if m.PusherFunc != nil {
|
||||
return m.PusherFunc(ctx, ref)
|
||||
}
|
||||
return nil, errors.New("PusherFunc not implemented")
|
||||
}
|
||||
|
||||
// PusherInChunked implements the Resolver.PusherInChunked method.
|
||||
func (m *MockResolver) PusherInChunked(ctx context.Context, ref string) (remotes.PusherInChunked, error) {
|
||||
if m.PusherInChunkedFunc != nil {
|
||||
return m.PusherInChunkedFunc(ctx, ref)
|
||||
}
|
||||
return nil, errors.New("PusherInChunkedFunc not implemented")
|
||||
}
|
||||
|
||||
type mockReadSeekCloeser struct {
|
||||
buf bytes.Buffer
|
||||
}
|
||||
|
||||
func (m *mockReadSeekCloeser) Read(p []byte) (n int, err error) {
|
||||
return m.buf.Read(p)
|
||||
}
|
||||
|
||||
func (m *mockReadSeekCloeser) Seek(int64, int) (int64, error) {
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
func (m *mockReadSeekCloeser) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestReadSeekCloser(t *testing.T) {
|
||||
remote := &Remote{
|
||||
parsed: &MockNamed{
|
||||
mockName: "docker.io/library/busybox:latest",
|
||||
mockString: "docker.io/library/busybox:latest",
|
||||
},
|
||||
}
|
||||
t.Run("Run not ReadSeekCloser", func(t *testing.T) {
|
||||
remote.resolverFunc = func(bool) remotes.Resolver {
|
||||
return &MockResolver{
|
||||
FetcherFunc: func(context.Context, string) (remotes.Fetcher, error) {
|
||||
var buf bytes.Buffer
|
||||
return remotes.FetcherFunc(func(context.Context, ocispec.Descriptor) (io.ReadCloser, error) {
|
||||
// return io.ReadSeekCloser
|
||||
return &readerAt{
|
||||
Reader: &buf,
|
||||
Closer: io.NopCloser(&buf),
|
||||
}, nil
|
||||
}), nil
|
||||
},
|
||||
}
|
||||
}
|
||||
_, err := remote.ReadSeekCloser(context.Background(), ocispec.Descriptor{}, false)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run Normal", func(t *testing.T) {
|
||||
// mock io.ReadSeekCloser
|
||||
remote.resolverFunc = func(bool) remotes.Resolver {
|
||||
return &MockResolver{
|
||||
FetcherFunc: func(context.Context, string) (remotes.Fetcher, error) {
|
||||
var buf bytes.Buffer
|
||||
return remotes.FetcherFunc(func(context.Context, ocispec.Descriptor) (io.ReadCloser, error) {
|
||||
return &mockReadSeekCloeser{
|
||||
buf: buf,
|
||||
}, nil
|
||||
}), nil
|
||||
},
|
||||
}
|
||||
}
|
||||
rsc, err := remote.ReadSeekCloser(context.Background(), ocispec.Descriptor{}, false)
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, rsc)
|
||||
})
|
||||
}
|
|
@ -31,7 +31,7 @@ type Remote struct {
|
|||
resolverFunc func(insecure bool) remotes.Resolver
|
||||
pushed sync.Map
|
||||
|
||||
retryWithHTTP bool
|
||||
withHTTP bool
|
||||
}
|
||||
|
||||
// New creates remote instance from docker remote resolver
|
||||
|
@ -55,13 +55,16 @@ func (remote *Remote) MaybeWithHTTP(err error) {
|
|||
// If the error message includes the current registry host string, it
|
||||
// implies that we can retry the request with plain HTTP.
|
||||
if strings.Contains(err.Error(), fmt.Sprintf("/%s/", host)) {
|
||||
remote.retryWithHTTP = true
|
||||
remote.withHTTP = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (remote *Remote) WithHTTP() {
|
||||
remote.withHTTP = true
|
||||
}
|
||||
func (remote *Remote) IsWithHTTP() bool {
|
||||
return remote.retryWithHTTP
|
||||
return remote.withHTTP
|
||||
}
|
||||
|
||||
// Push pushes blob to registry
|
||||
|
@ -83,7 +86,7 @@ func (remote *Remote) Push(ctx context.Context, desc ocispec.Descriptor, byDiges
|
|||
}
|
||||
|
||||
// Create a new resolver instance for the request
|
||||
pusher, err := remote.resolverFunc(remote.retryWithHTTP).Pusher(ctx, ref)
|
||||
pusher, err := remote.resolverFunc(remote.withHTTP).Pusher(ctx, ref)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -110,7 +113,7 @@ func (remote *Remote) Pull(ctx context.Context, desc ocispec.Descriptor, byDiges
|
|||
}
|
||||
|
||||
// Create a new resolver instance for the request
|
||||
puller, err := remote.resolverFunc(remote.retryWithHTTP).Fetcher(ctx, ref)
|
||||
puller, err := remote.resolverFunc(remote.withHTTP).Fetcher(ctx, ref)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -128,7 +131,7 @@ func (remote *Remote) Resolve(ctx context.Context) (*ocispec.Descriptor, error)
|
|||
ref := reference.TagNameOnly(remote.parsed).String()
|
||||
|
||||
// Create a new resolver instance for the request
|
||||
_, desc, err := remote.resolverFunc(remote.retryWithHTTP).Resolve(ctx, ref)
|
||||
_, desc, err := remote.resolverFunc(remote.withHTTP).Resolve(ctx, ref)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
|
|
@ -0,0 +1,96 @@
|
|||
package backend
|
||||
|
||||
import (
|
||||
"context"
|
||||
)
|
||||
|
||||
type Backend struct {
|
||||
Version string `json:"version"`
|
||||
Backends []Config `json:"backends"`
|
||||
Blobs []Blob `json:"blobs"`
|
||||
}
|
||||
|
||||
type Config struct {
|
||||
Type string `json:"type"`
|
||||
Config map[string]interface{} `json:"config,omitempty"`
|
||||
}
|
||||
|
||||
type Blob struct {
|
||||
Backend int `json:"backend"`
|
||||
Config BlobConfig `json:"config"`
|
||||
}
|
||||
|
||||
type BlobConfig struct {
|
||||
MediaType string `json:"media_type"`
|
||||
Digest string `json:"digest"`
|
||||
Size string `json:"size"`
|
||||
ChunkSize string `json:"chunk_size"`
|
||||
}
|
||||
|
||||
type Result struct {
|
||||
Chunks []Chunk
|
||||
Files []FileAttribute
|
||||
Backend Backend
|
||||
}
|
||||
|
||||
type FileAttribute struct {
|
||||
RelativePath string
|
||||
BlobIndex uint32
|
||||
BlobID string
|
||||
BlobSize string
|
||||
FileSize uint64
|
||||
ChunkSize string
|
||||
Chunk0CompressedOffset uint64
|
||||
Type string
|
||||
Mode uint32
|
||||
Crcs string
|
||||
}
|
||||
|
||||
type File struct {
|
||||
RelativePath string
|
||||
Size int64
|
||||
}
|
||||
|
||||
// Handler is the interface for backend handler.
|
||||
type Handler interface {
|
||||
// Backend returns the backend information.
|
||||
Backend(ctx context.Context) (*Backend, error)
|
||||
// Handle handles the file and returns the object information.
|
||||
Handle(ctx context.Context, file File) ([]Chunk, error)
|
||||
}
|
||||
|
||||
type RemoteHanlder interface {
|
||||
// Handle handles the file and returns the object information.
|
||||
Handle(ctx context.Context) (*Backend, []FileAttribute, error)
|
||||
}
|
||||
|
||||
type Chunk interface {
|
||||
ObjectID() uint32
|
||||
ObjectContent() interface{}
|
||||
ObjectOffset() uint64
|
||||
FilePath() string
|
||||
LimitChunkSize() string
|
||||
BlobDigest() string
|
||||
BlobSize() string
|
||||
}
|
||||
|
||||
// SplitObjectOffsets splits the total size into object offsets
|
||||
// with the specified chunk size.
|
||||
func SplitObjectOffsets(totalSize, chunkSize int64) []uint64 {
|
||||
objectOffsets := []uint64{}
|
||||
if chunkSize <= 0 {
|
||||
return objectOffsets
|
||||
}
|
||||
|
||||
chunkN := totalSize / chunkSize
|
||||
|
||||
for i := int64(0); i < chunkN; i++ {
|
||||
objectOffsets = append(objectOffsets, uint64(i*chunkSize))
|
||||
}
|
||||
|
||||
if totalSize%chunkSize > 0 {
|
||||
objectOffsets = append(objectOffsets, uint64(chunkN*chunkSize))
|
||||
}
|
||||
|
||||
return objectOffsets
|
||||
}
|
|
@ -0,0 +1,59 @@
|
|||
package backend
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"testing"
|
||||
"unsafe"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestLayout(t *testing.T) {
|
||||
require.Equal(t, fmt.Sprintf("%d", 4096), fmt.Sprintf("%d", unsafe.Sizeof(Header{})))
|
||||
require.Equal(t, fmt.Sprintf("%d", 256), fmt.Sprintf("%d", unsafe.Sizeof(ChunkMeta{})))
|
||||
require.Equal(t, fmt.Sprintf("%d", 256), fmt.Sprintf("%d", unsafe.Sizeof(ObjectMeta{})))
|
||||
}
|
||||
|
||||
func TestSplitObjectOffsets(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
totalSize int64
|
||||
chunkSize int64
|
||||
expected []uint64
|
||||
}{
|
||||
{
|
||||
name: "Chunk size is less than or equal to zero",
|
||||
totalSize: 100,
|
||||
chunkSize: 0,
|
||||
expected: []uint64{},
|
||||
},
|
||||
{
|
||||
name: "Total size is zero",
|
||||
totalSize: 0,
|
||||
chunkSize: 10,
|
||||
expected: []uint64{},
|
||||
},
|
||||
{
|
||||
name: "Total size is divisible by chunk size",
|
||||
totalSize: 100,
|
||||
chunkSize: 10,
|
||||
expected: []uint64{0, 10, 20, 30, 40, 50, 60, 70, 80, 90},
|
||||
},
|
||||
{
|
||||
name: "Total size is not divisible by chunk size",
|
||||
totalSize: 105,
|
||||
chunkSize: 10,
|
||||
expected: []uint64{0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
result := SplitObjectOffsets(tt.totalSize, tt.chunkSize)
|
||||
if !reflect.DeepEqual(result, tt.expected) {
|
||||
t.Errorf("SplitObjectOffsets(%d, %d) = %v; want %v", tt.totalSize, tt.chunkSize, result, tt.expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
|
@ -0,0 +1,55 @@
|
|||
package backend
|
||||
|
||||
const MetaMagic uint32 = 0x0AF5_E1E2
|
||||
const MetaVersion uint32 = 0x0000_0001
|
||||
|
||||
// Layout
|
||||
//
|
||||
// header: magic | version | chunk_meta_offset | object_meta_offset
|
||||
// chunks: chunk_meta | chunk | chunk | ...
|
||||
// objects: object_meta | [object_offsets] | object | object | ...
|
||||
|
||||
// 4096 bytes
|
||||
type Header struct {
|
||||
Magic uint32
|
||||
Version uint32
|
||||
|
||||
ChunkMetaOffset uint32
|
||||
ObjectMetaOffset uint32
|
||||
|
||||
Reserved2 [4080]byte
|
||||
}
|
||||
|
||||
// 256 bytes
|
||||
type ChunkMeta struct {
|
||||
EntryCount uint32
|
||||
EntrySize uint32
|
||||
|
||||
Reserved [248]byte
|
||||
}
|
||||
|
||||
// 256 bytes
|
||||
type ObjectMeta struct {
|
||||
EntryCount uint32
|
||||
// = 0 means indeterminate entry size, and len(object_offsets) > 0.
|
||||
// > 0 means fixed entry size, and len(object_offsets) == 0.
|
||||
EntrySize uint32
|
||||
|
||||
Reserved [248]byte
|
||||
}
|
||||
|
||||
// 8 bytes
|
||||
type ChunkOndisk struct {
|
||||
ObjectIndex uint32
|
||||
Reserved [4]byte
|
||||
ObjectOffset uint64
|
||||
}
|
||||
|
||||
// 4 bytes
|
||||
type ObjectOffset uint32
|
||||
|
||||
// Size depends on different external backend implementations
|
||||
type ObjectOndisk struct {
|
||||
EntrySize uint32
|
||||
EncodedData []byte
|
||||
}
|
|
@ -0,0 +1,127 @@
|
|||
package backend
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io/fs"
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type Walker struct {
|
||||
}
|
||||
|
||||
func NewWalker() *Walker {
|
||||
return &Walker{}
|
||||
}
|
||||
|
||||
func bfsWalk(path string, fn func(string, fs.FileInfo) error) error {
|
||||
info, err := os.Lstat(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if info.IsDir() {
|
||||
files, err := os.ReadDir(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
dirs := []string{}
|
||||
for _, file := range files {
|
||||
filePath := filepath.Join(path, file.Name())
|
||||
if file.Type().IsRegular() {
|
||||
info, err := file.Info()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := fn(filePath, info); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if file.IsDir() {
|
||||
dirs = append(dirs, filePath)
|
||||
}
|
||||
}
|
||||
|
||||
for _, dir := range dirs {
|
||||
if err := bfsWalk(dir, fn); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (walker *Walker) Walk(ctx context.Context, root string, handler Handler) (*Result, error) {
|
||||
chunks := []Chunk{}
|
||||
files := []FileAttribute{}
|
||||
|
||||
addFile := func(size int64, relativeTarget string) error {
|
||||
_chunks, err := handler.Handle(ctx, File{
|
||||
RelativePath: relativeTarget,
|
||||
Size: size,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(_chunks) == 0 {
|
||||
return nil
|
||||
}
|
||||
chunks = append(chunks, _chunks...)
|
||||
lastFile := ""
|
||||
for _, c := range _chunks {
|
||||
cf := c.FilePath()
|
||||
if cf != lastFile {
|
||||
fa := FileAttribute{
|
||||
BlobID: c.BlobDigest(),
|
||||
BlobSize: c.BlobSize(),
|
||||
BlobIndex: c.ObjectID(),
|
||||
Chunk0CompressedOffset: c.ObjectOffset(),
|
||||
ChunkSize: c.LimitChunkSize(),
|
||||
RelativePath: cf,
|
||||
Type: "external",
|
||||
}
|
||||
files = append(files, fa)
|
||||
lastFile = cf
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
walkFiles := []func() error{}
|
||||
|
||||
if err := bfsWalk(root, func(path string, info fs.FileInfo) error {
|
||||
target, err := filepath.Rel(root, path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
walkFiles = append(walkFiles, func() error {
|
||||
return addFile(info.Size(), target)
|
||||
})
|
||||
|
||||
return nil
|
||||
}); err != nil {
|
||||
return nil, errors.Wrap(err, "walk directory")
|
||||
}
|
||||
|
||||
for i := 0; i < len(walkFiles); i++ {
|
||||
if err := walkFiles[i](); err != nil {
|
||||
return nil, errors.Wrap(err, "handle files")
|
||||
}
|
||||
}
|
||||
|
||||
// backend.json
|
||||
bkd, err := handler.Backend(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &Result{
|
||||
Chunks: chunks,
|
||||
Files: files,
|
||||
Backend: *bkd,
|
||||
}, nil
|
||||
}
|
|
@ -0,0 +1,170 @@
|
|||
package backend
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
// Helper function to create a temporary directory and files for testing
|
||||
func setupTestDir(t *testing.T) (string, func()) {
|
||||
// Create a temporary directory
|
||||
tmpDir, err := os.MkdirTemp("", "bfsWalkTest")
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create temp dir: %v", err)
|
||||
}
|
||||
|
||||
// Create test files and directories
|
||||
err = os.Mkdir(filepath.Join(tmpDir, "dir"), 0755)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create dir: %v", err)
|
||||
}
|
||||
err = os.WriteFile(filepath.Join(tmpDir, "dir", "file1"), []byte("test content"), 0644)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file1: %v", err)
|
||||
}
|
||||
err = os.Mkdir(filepath.Join(tmpDir, "dir", "subdir"), 0755)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create subdir: %v", err)
|
||||
}
|
||||
err = os.WriteFile(filepath.Join(tmpDir, "dir", "subdir", "file2"), []byte("test content"), 0644)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file: %v", err)
|
||||
}
|
||||
|
||||
// Cleanup function to remove the temporary directory
|
||||
cleanup := func() {
|
||||
err := os.RemoveAll(tmpDir)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to cleanup temp dir: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
return tmpDir, cleanup
|
||||
}
|
||||
|
||||
// TestBfsWalk tests the bfsWalk function with various cases.
|
||||
func TestBfsWalk(t *testing.T) {
|
||||
// Setup test directory
|
||||
tmpDir, cleanup := setupTestDir(t)
|
||||
defer cleanup()
|
||||
|
||||
t.Run("Invalid path", func(t *testing.T) {
|
||||
err := bfsWalk(filepath.Join(tmpDir, "invalid_path"), func(string, os.FileInfo) error { return nil })
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Single file", func(t *testing.T) {
|
||||
called := false
|
||||
err := bfsWalk(filepath.Join(tmpDir, "dir", "subdir"), func(path string, _ os.FileInfo) error {
|
||||
called = true
|
||||
assert.Equal(t, filepath.Join(tmpDir, "dir", "subdir", "file2"), path)
|
||||
return nil
|
||||
})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, called)
|
||||
})
|
||||
|
||||
t.Run("Empty directory", func(t *testing.T) {
|
||||
emptyDir := filepath.Join(tmpDir, "empty_dir")
|
||||
err := os.Mkdir(emptyDir, 0755)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create empty_dir: %v", err)
|
||||
}
|
||||
|
||||
called := false
|
||||
err = bfsWalk(emptyDir, func(string, os.FileInfo) error {
|
||||
called = true
|
||||
return nil
|
||||
})
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, called)
|
||||
})
|
||||
|
||||
t.Run("Directory with files and subdirectories", func(t *testing.T) {
|
||||
var paths []string
|
||||
err := bfsWalk(filepath.Join(tmpDir, "dir"), func(path string, _ os.FileInfo) error {
|
||||
paths = append(paths, path)
|
||||
return nil
|
||||
})
|
||||
assert.NoError(t, err)
|
||||
expectedPaths := []string{
|
||||
filepath.Join(tmpDir, "dir", "file1"),
|
||||
filepath.Join(tmpDir, "dir", "subdir", "file2"),
|
||||
}
|
||||
assert.Equal(t, expectedPaths, paths)
|
||||
})
|
||||
}
|
||||
|
||||
type MockChunk struct {
|
||||
ID uint32
|
||||
Content interface{}
|
||||
Offset uint64
|
||||
Path string
|
||||
ChunkSize string
|
||||
Digest string
|
||||
Size string
|
||||
}
|
||||
|
||||
func (m *MockChunk) ObjectID() uint32 {
|
||||
return m.ID
|
||||
}
|
||||
func (m *MockChunk) ObjectContent() interface{} {
|
||||
return m.Content
|
||||
}
|
||||
|
||||
func (m *MockChunk) ObjectOffset() uint64 {
|
||||
return m.Offset
|
||||
}
|
||||
func (m *MockChunk) FilePath() string {
|
||||
return m.Path
|
||||
}
|
||||
func (m *MockChunk) LimitChunkSize() string {
|
||||
return m.ChunkSize
|
||||
}
|
||||
func (m *MockChunk) BlobDigest() string {
|
||||
return m.Digest
|
||||
}
|
||||
func (m *MockChunk) BlobSize() string {
|
||||
return m.Size
|
||||
}
|
||||
|
||||
type MockHandler struct {
|
||||
BackendFunc func(context.Context) (*Backend, error)
|
||||
HandleFunc func(context.Context, File) ([]Chunk, error)
|
||||
}
|
||||
|
||||
func (m MockHandler) Backend(ctx context.Context) (*Backend, error) {
|
||||
if m.BackendFunc == nil {
|
||||
return &Backend{}, nil
|
||||
}
|
||||
return m.BackendFunc(ctx)
|
||||
}
|
||||
|
||||
func (m MockHandler) Handle(ctx context.Context, file File) ([]Chunk, error) {
|
||||
if m.HandleFunc == nil {
|
||||
return []Chunk{
|
||||
&MockChunk{
|
||||
Path: "test1",
|
||||
},
|
||||
&MockChunk{
|
||||
Path: "test2",
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
return m.HandleFunc(ctx, file)
|
||||
}
|
||||
|
||||
func TestWalk(t *testing.T) {
|
||||
walker := &Walker{}
|
||||
handler := MockHandler{}
|
||||
root := "/tmp/nydusify"
|
||||
os.MkdirAll(root, 0755)
|
||||
defer os.RemoveAll(root)
|
||||
os.CreateTemp(root, "test")
|
||||
_, err := walker.Walk(context.Background(), root, handler)
|
||||
assert.NoError(t, err)
|
||||
}
|
|
@ -0,0 +1,138 @@
|
|||
package external
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
type Options struct {
|
||||
Dir string
|
||||
ContextDir string
|
||||
Handler backend.Handler
|
||||
RemoteHandler backend.RemoteHanlder
|
||||
MetaOutput string
|
||||
BackendOutput string
|
||||
AttributesOutput string
|
||||
}
|
||||
|
||||
type Attribute struct {
|
||||
Pattern string
|
||||
}
|
||||
|
||||
// Handle handles the directory and generates the backend meta and attributes.
|
||||
func Handle(ctx context.Context, opts Options) error {
|
||||
walker := backend.NewWalker()
|
||||
|
||||
backendRet, err := walker.Walk(ctx, opts.Dir, opts.Handler)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
generators, err := NewGenerators(*backendRet)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
ret, err := generators.Generate()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
bkd := ret.Backend
|
||||
attributes := buildAttr(ret)
|
||||
|
||||
if err := os.WriteFile(opts.MetaOutput, ret.Meta, 0644); err != nil {
|
||||
return errors.Wrapf(err, "write meta to %s", opts.MetaOutput)
|
||||
}
|
||||
|
||||
backendBytes, err := json.MarshalIndent(bkd, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := os.WriteFile(opts.BackendOutput, backendBytes, 0644); err != nil {
|
||||
return errors.Wrapf(err, "write backend json to %s", opts.BackendOutput)
|
||||
}
|
||||
logrus.Debugf("backend json: %s", backendBytes)
|
||||
|
||||
attributeContent := []string{}
|
||||
for _, attribute := range attributes {
|
||||
attributeContent = append(attributeContent, attribute.Pattern)
|
||||
}
|
||||
if err := os.WriteFile(opts.AttributesOutput, []byte(strings.Join(attributeContent, "\n")), 0644); err != nil {
|
||||
return errors.Wrapf(err, "write attributes to %s", opts.AttributesOutput)
|
||||
}
|
||||
logrus.Debugf("attributes: %v", strings.Join(attributeContent, "\n"))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func buildAttr(ret *Result) []Attribute {
|
||||
attributes := []Attribute{}
|
||||
for _, file := range ret.Files {
|
||||
p := fmt.Sprintf("/%s type=%s blob_index=%d blob_id=%s chunk_size=%s chunk_0_compressed_offset=%d compressed_size=%s",
|
||||
file.RelativePath, file.Type, file.BlobIndex, file.BlobID, file.ChunkSize, file.Chunk0CompressedOffset, file.BlobSize)
|
||||
attributes = append(attributes, Attribute{
|
||||
Pattern: p,
|
||||
})
|
||||
}
|
||||
return attributes
|
||||
}
|
||||
|
||||
func RemoteHandle(ctx context.Context, opts Options) error {
|
||||
bkd, fileAttrs, err := opts.RemoteHandler.Handle(ctx)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "handle modctl")
|
||||
}
|
||||
attributes := []Attribute{}
|
||||
for _, file := range fileAttrs {
|
||||
p := fmt.Sprintf("/%s type=%s file_size=%d blob_index=%d blob_id=%s chunk_size=%s chunk_0_compressed_offset=%d compressed_size=%s crcs=%s",
|
||||
file.RelativePath, file.Type, file.FileSize, file.BlobIndex, file.BlobID, file.ChunkSize, file.Chunk0CompressedOffset, file.BlobSize, file.Crcs)
|
||||
attributes = append(attributes, Attribute{
|
||||
Pattern: p,
|
||||
})
|
||||
logrus.Debugf("file attr: %s, file_mode: %o", p, file.Mode)
|
||||
}
|
||||
|
||||
backendBytes, err := json.MarshalIndent(bkd, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := os.WriteFile(opts.BackendOutput, backendBytes, 0644); err != nil {
|
||||
return errors.Wrapf(err, "write backend json to %s", opts.BackendOutput)
|
||||
}
|
||||
logrus.Debugf("backend json: %s", backendBytes)
|
||||
|
||||
attributeContent := []string{}
|
||||
for _, attribute := range attributes {
|
||||
attributeContent = append(attributeContent, attribute.Pattern)
|
||||
}
|
||||
if err := os.WriteFile(opts.AttributesOutput, []byte(strings.Join(attributeContent, "\n")), 0644); err != nil {
|
||||
return errors.Wrapf(err, "write attributes to %s", opts.AttributesOutput)
|
||||
}
|
||||
logrus.Debugf("attributes: %v", strings.Join(attributeContent, "\n"))
|
||||
|
||||
// Build dummy files with empty content.
|
||||
if err := buildEmptyFiles(fileAttrs, opts.ContextDir); err != nil {
|
||||
return errors.Wrap(err, "build empty files")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func buildEmptyFiles(fileAttrs []backend.FileAttribute, contextDir string) error {
|
||||
for _, fileAttr := range fileAttrs {
|
||||
filePath := fmt.Sprintf("%s/%s", contextDir, fileAttr.RelativePath)
|
||||
if err := os.MkdirAll(filepath.Dir(filePath), 0755); err != nil {
|
||||
return errors.Wrapf(err, "create dir %s", filepath.Dir(filePath))
|
||||
}
|
||||
if err := os.WriteFile(filePath, []byte{}, os.FileMode(fileAttr.Mode)); err != nil {
|
||||
return errors.Wrapf(err, "write file %s", filePath)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
|
@ -0,0 +1,153 @@
|
|||
package external
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
// Mock implementation for backend.Handler
|
||||
type mockHandler struct {
|
||||
backendFunc func(ctx context.Context) (*backend.Backend, error)
|
||||
handleFunc func(ctx context.Context, file backend.File) ([]backend.Chunk, error)
|
||||
}
|
||||
|
||||
func (m *mockHandler) Backend(ctx context.Context) (*backend.Backend, error) {
|
||||
return m.backendFunc(ctx)
|
||||
}
|
||||
|
||||
func (m *mockHandler) Handle(ctx context.Context, file backend.File) ([]backend.Chunk, error) {
|
||||
return m.handleFunc(ctx, file)
|
||||
}
|
||||
|
||||
// Mock implementation for backend.RemoteHandler
|
||||
type mockRemoteHandler struct {
|
||||
handleFunc func(ctx context.Context) (*backend.Backend, []backend.FileAttribute, error)
|
||||
}
|
||||
|
||||
func (m *mockRemoteHandler) Handle(ctx context.Context) (*backend.Backend, []backend.FileAttribute, error) {
|
||||
return m.handleFunc(ctx)
|
||||
}
|
||||
|
||||
// TestHandle tests the Handle function.
|
||||
func TestHandle(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
metaOutput := filepath.Join(tmpDir, "meta.json")
|
||||
backendOutput := filepath.Join(tmpDir, "backend.json")
|
||||
attributesOutput := filepath.Join(tmpDir, "attributes.txt")
|
||||
|
||||
mockHandler := &mockHandler{
|
||||
backendFunc: func(context.Context) (*backend.Backend, error) {
|
||||
return &backend.Backend{Version: "mock"}, nil
|
||||
},
|
||||
handleFunc: func(context.Context, backend.File) ([]backend.Chunk, error) {
|
||||
return []backend.Chunk{}, nil
|
||||
},
|
||||
}
|
||||
|
||||
opts := Options{
|
||||
Dir: tmpDir,
|
||||
MetaOutput: metaOutput,
|
||||
BackendOutput: backendOutput,
|
||||
AttributesOutput: attributesOutput,
|
||||
Handler: mockHandler,
|
||||
}
|
||||
|
||||
err := Handle(context.Background(), opts)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify outputs
|
||||
assert.FileExists(t, metaOutput)
|
||||
assert.FileExists(t, backendOutput)
|
||||
assert.FileExists(t, attributesOutput)
|
||||
}
|
||||
|
||||
// TestRemoteHandle tests the RemoteHandle function.
|
||||
func TestRemoteHandle(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
contextDir := filepath.Join(tmpDir, "context")
|
||||
backendOutput := filepath.Join(tmpDir, "backend.json")
|
||||
attributesOutput := filepath.Join(tmpDir, "attributes.txt")
|
||||
|
||||
mockRemoteHandler := &mockRemoteHandler{
|
||||
handleFunc: func(context.Context) (*backend.Backend, []backend.FileAttribute, error) {
|
||||
return &backend.Backend{Version: "mock"},
|
||||
[]backend.FileAttribute{
|
||||
{
|
||||
RelativePath: "testfile",
|
||||
Type: "regular",
|
||||
FileSize: 1024,
|
||||
BlobIndex: 0,
|
||||
BlobID: "blob1",
|
||||
ChunkSize: "1MB",
|
||||
Chunk0CompressedOffset: 0,
|
||||
BlobSize: "10MB",
|
||||
Mode: 0644,
|
||||
},
|
||||
}, nil
|
||||
},
|
||||
}
|
||||
|
||||
opts := Options{
|
||||
ContextDir: contextDir,
|
||||
BackendOutput: backendOutput,
|
||||
AttributesOutput: attributesOutput,
|
||||
RemoteHandler: mockRemoteHandler,
|
||||
}
|
||||
|
||||
err := RemoteHandle(context.Background(), opts)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify outputs
|
||||
assert.FileExists(t, backendOutput)
|
||||
assert.FileExists(t, attributesOutput)
|
||||
assert.FileExists(t, filepath.Join(contextDir, "testfile"))
|
||||
}
|
||||
|
||||
// TestBuildEmptyFiles tests the buildEmptyFiles function.
|
||||
func TestBuildEmptyFiles(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
|
||||
fileAttrs := []backend.FileAttribute{
|
||||
{
|
||||
RelativePath: "dir1/file1",
|
||||
Mode: 0644,
|
||||
},
|
||||
{
|
||||
RelativePath: "dir2/file2",
|
||||
Mode: 0755,
|
||||
},
|
||||
}
|
||||
|
||||
err := buildEmptyFiles(fileAttrs, tmpDir)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify files are created
|
||||
assert.FileExists(t, filepath.Join(tmpDir, "dir1", "file1"))
|
||||
assert.FileExists(t, filepath.Join(tmpDir, "dir2", "file2"))
|
||||
|
||||
// Verify file modes
|
||||
info, err := os.Stat(filepath.Join(tmpDir, "dir1", "file1"))
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, os.FileMode(0644), info.Mode())
|
||||
|
||||
info, err = os.Stat(filepath.Join(tmpDir, "dir2", "file2"))
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, os.FileMode(0755), info.Mode())
|
||||
}
|
||||
|
||||
func TestBuildAttr(t *testing.T) {
|
||||
ret := Result{
|
||||
Files: []backend.FileAttribute{
|
||||
{
|
||||
RelativePath: "dir1/file1",
|
||||
},
|
||||
},
|
||||
}
|
||||
attrs := buildAttr(&ret)
|
||||
assert.Equal(t, len(attrs), 1)
|
||||
}
|
|
@ -0,0 +1,150 @@
|
|||
package external
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"unsafe"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/vmihailenco/msgpack/v5"
|
||||
)
|
||||
|
||||
type Result struct {
|
||||
Meta []byte
|
||||
Backend backend.Backend
|
||||
Files []backend.FileAttribute
|
||||
}
|
||||
|
||||
type MetaGenerator struct {
|
||||
backend.Header
|
||||
backend.ChunkMeta
|
||||
Chunks []backend.ChunkOndisk
|
||||
backend.ObjectMeta
|
||||
ObjectOffsets []backend.ObjectOffset
|
||||
Objects []backend.ObjectOndisk
|
||||
}
|
||||
|
||||
type Generator interface {
|
||||
Generate() error
|
||||
}
|
||||
|
||||
type Generators struct {
|
||||
MetaGenerator
|
||||
Backend backend.Backend
|
||||
Files []backend.FileAttribute
|
||||
}
|
||||
|
||||
func NewGenerators(ret backend.Result) (*Generators, error) {
|
||||
objects := []backend.ObjectOndisk{}
|
||||
chunks := []backend.ChunkOndisk{}
|
||||
objectMap := make(map[uint32]uint32) // object id -> object index
|
||||
|
||||
for _, chunk := range ret.Chunks {
|
||||
objectID := chunk.ObjectID()
|
||||
objectIndex, ok := objectMap[objectID]
|
||||
if !ok {
|
||||
objectIndex = uint32(len(objects))
|
||||
objectMap[objectID] = objectIndex
|
||||
encoded, err := msgpack.Marshal(chunk.ObjectContent())
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "encode to msgpack format")
|
||||
}
|
||||
objects = append(objects, backend.ObjectOndisk{
|
||||
EntrySize: uint32(len(encoded)),
|
||||
EncodedData: encoded[:],
|
||||
})
|
||||
}
|
||||
chunks = append(chunks, backend.ChunkOndisk{
|
||||
ObjectIndex: objectIndex,
|
||||
ObjectOffset: chunk.ObjectOffset(),
|
||||
})
|
||||
}
|
||||
|
||||
return &Generators{
|
||||
MetaGenerator: MetaGenerator{
|
||||
Chunks: chunks,
|
||||
Objects: objects,
|
||||
},
|
||||
Backend: ret.Backend,
|
||||
Files: ret.Files,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (generators *Generators) Generate() (*Result, error) {
|
||||
meta, err := generators.MetaGenerator.Generate()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "generate backend meta")
|
||||
}
|
||||
return &Result{
|
||||
Meta: meta,
|
||||
Backend: generators.Backend,
|
||||
Files: generators.Files,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (generator *MetaGenerator) Generate() ([]byte, error) {
|
||||
// prepare data
|
||||
chunkMetaOffset := uint32(unsafe.Sizeof(generator.Header))
|
||||
generator.ChunkMeta.EntryCount = uint32(len(generator.Chunks))
|
||||
generator.ChunkMeta.EntrySize = uint32(unsafe.Sizeof(backend.ChunkOndisk{}))
|
||||
objectMetaOffset := chunkMetaOffset + uint32(unsafe.Sizeof(generator.ChunkMeta)) + generator.ChunkMeta.EntryCount*generator.ChunkMeta.EntrySize
|
||||
generator.Header = backend.Header{
|
||||
Magic: backend.MetaMagic,
|
||||
Version: backend.MetaVersion,
|
||||
ChunkMetaOffset: chunkMetaOffset,
|
||||
ObjectMetaOffset: objectMetaOffset,
|
||||
}
|
||||
|
||||
generator.ObjectMeta.EntryCount = uint32(len(generator.Objects))
|
||||
objectOffsets := []backend.ObjectOffset{}
|
||||
objectOffset := backend.ObjectOffset(objectMetaOffset + uint32(unsafe.Sizeof(generator.ObjectMeta)) + 4*generator.ObjectMeta.EntryCount)
|
||||
var lastEntrySize uint32
|
||||
fixedEntrySize := true
|
||||
for _, object := range generator.Objects {
|
||||
if lastEntrySize > 0 && lastEntrySize != object.EntrySize {
|
||||
fixedEntrySize = false
|
||||
}
|
||||
lastEntrySize = object.EntrySize
|
||||
objectOffsets = append(objectOffsets, objectOffset)
|
||||
objectOffset += backend.ObjectOffset(uint32(unsafe.Sizeof(object.EntrySize)) + uint32(len(object.EncodedData)))
|
||||
}
|
||||
if fixedEntrySize && len(generator.Objects) > 0 {
|
||||
generator.ObjectMeta.EntrySize = generator.Objects[0].EntrySize
|
||||
}
|
||||
generator.ObjectOffsets = objectOffsets
|
||||
|
||||
// dump bytes
|
||||
var buf bytes.Buffer
|
||||
|
||||
if err := binary.Write(&buf, binary.LittleEndian, generator.Header); err != nil {
|
||||
return nil, errors.Wrap(err, "dump")
|
||||
}
|
||||
if err := binary.Write(&buf, binary.LittleEndian, generator.ChunkMeta); err != nil {
|
||||
return nil, errors.Wrap(err, "dump")
|
||||
}
|
||||
|
||||
for _, chunk := range generator.Chunks {
|
||||
if err := binary.Write(&buf, binary.LittleEndian, chunk); err != nil {
|
||||
return nil, errors.Wrap(err, "dump")
|
||||
}
|
||||
}
|
||||
if err := binary.Write(&buf, binary.LittleEndian, generator.ObjectMeta); err != nil {
|
||||
return nil, errors.Wrap(err, "dump")
|
||||
}
|
||||
for _, objectOffset := range generator.ObjectOffsets {
|
||||
if err := binary.Write(&buf, binary.LittleEndian, objectOffset); err != nil {
|
||||
return nil, errors.Wrap(err, "dump")
|
||||
}
|
||||
}
|
||||
for _, object := range generator.Objects {
|
||||
if err := binary.Write(&buf, binary.LittleEndian, object.EntrySize); err != nil {
|
||||
return nil, errors.Wrap(err, "dump")
|
||||
}
|
||||
if err := binary.Write(&buf, binary.LittleEndian, object.EncodedData); err != nil {
|
||||
return nil, errors.Wrap(err, "dump")
|
||||
}
|
||||
}
|
||||
|
||||
return buf.Bytes(), nil
|
||||
}
|
|
@ -0,0 +1,170 @@
|
|||
package external
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/mock"
|
||||
)
|
||||
|
||||
type MockChunk struct {
|
||||
mock.Mock
|
||||
}
|
||||
|
||||
func (m *MockChunk) ObjectID() uint32 {
|
||||
args := m.Called()
|
||||
return args.Get(0).(uint32)
|
||||
}
|
||||
|
||||
func (m *MockChunk) ObjectContent() interface{} {
|
||||
args := m.Called()
|
||||
return args.Get(0)
|
||||
}
|
||||
|
||||
func (m *MockChunk) ObjectOffset() uint64 {
|
||||
args := m.Called()
|
||||
return args.Get(0).(uint64)
|
||||
}
|
||||
|
||||
func (m *MockChunk) FilePath() string {
|
||||
args := m.Called()
|
||||
return args.String(0)
|
||||
}
|
||||
|
||||
func (m *MockChunk) LimitChunkSize() string {
|
||||
args := m.Called()
|
||||
return args.String(0)
|
||||
}
|
||||
|
||||
func (m *MockChunk) BlobDigest() string {
|
||||
args := m.Called()
|
||||
return args.String(0)
|
||||
}
|
||||
|
||||
func (m *MockChunk) BlobSize() string {
|
||||
args := m.Called()
|
||||
return args.String(0)
|
||||
}
|
||||
|
||||
type MockBackend struct {
|
||||
mock.Mock
|
||||
}
|
||||
|
||||
func (m *MockBackend) Backend(ctx context.Context) (*backend.Backend, error) {
|
||||
args := m.Called(ctx)
|
||||
return args.Get(0).(*backend.Backend), args.Error(1)
|
||||
}
|
||||
|
||||
func TestNewGenerators(t *testing.T) {
|
||||
t.Run("normal case", func(t *testing.T) {
|
||||
chunk := &MockChunk{}
|
||||
chunk.On("ObjectID").Return(uint32(1))
|
||||
chunk.On("ObjectContent").Return("content")
|
||||
chunk.On("ObjectOffset").Return(uint64(100))
|
||||
chunk.On("BlobDigest").Return("digest")
|
||||
chunk.On("BlobSize").Return("1024")
|
||||
|
||||
ret := backend.Result{
|
||||
Chunks: []backend.Chunk{chunk},
|
||||
Backend: backend.Backend{
|
||||
Version: "1.0",
|
||||
},
|
||||
Files: []backend.FileAttribute{
|
||||
{
|
||||
RelativePath: "file1",
|
||||
FileSize: 1024,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
generators, err := NewGenerators(ret)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 1, len(generators.MetaGenerator.Objects))
|
||||
assert.Equal(t, 1, len(generators.MetaGenerator.Chunks))
|
||||
assert.Equal(t, "1.0", generators.Backend.Version)
|
||||
})
|
||||
|
||||
t.Run("empty input", func(t *testing.T) {
|
||||
ret := backend.Result{
|
||||
Chunks: []backend.Chunk{},
|
||||
Backend: backend.Backend{},
|
||||
Files: []backend.FileAttribute{},
|
||||
}
|
||||
|
||||
generators, err := NewGenerators(ret)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 0, len(generators.MetaGenerator.Objects))
|
||||
assert.Equal(t, 0, len(generators.MetaGenerator.Chunks))
|
||||
})
|
||||
}
|
||||
|
||||
func TestGenerate(t *testing.T) {
|
||||
t.Run("normal case", func(t *testing.T) {
|
||||
generators := &Generators{
|
||||
MetaGenerator: MetaGenerator{
|
||||
Chunks: []backend.ChunkOndisk{
|
||||
{
|
||||
ObjectIndex: 0,
|
||||
ObjectOffset: 100,
|
||||
},
|
||||
},
|
||||
Objects: []backend.ObjectOndisk{
|
||||
{
|
||||
EntrySize: 10,
|
||||
EncodedData: []byte("encoded"),
|
||||
},
|
||||
},
|
||||
},
|
||||
Backend: backend.Backend{
|
||||
Version: "1.0",
|
||||
},
|
||||
Files: []backend.FileAttribute{
|
||||
{
|
||||
RelativePath: "file1",
|
||||
FileSize: 1024,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
result, err := generators.Generate()
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, result)
|
||||
assert.Equal(t, "1.0", result.Backend.Version)
|
||||
assert.Equal(t, 1, len(result.Files))
|
||||
})
|
||||
}
|
||||
|
||||
func TestMetaGeneratorGenerate(t *testing.T) {
|
||||
t.Run("normal case", func(t *testing.T) {
|
||||
generator := &MetaGenerator{
|
||||
Chunks: []backend.ChunkOndisk{
|
||||
{
|
||||
ObjectIndex: 0,
|
||||
ObjectOffset: 100,
|
||||
},
|
||||
},
|
||||
Objects: []backend.ObjectOndisk{
|
||||
{
|
||||
EntrySize: 10,
|
||||
EncodedData: []byte("encoded"),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
data, err := generator.Generate()
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, data)
|
||||
assert.Greater(t, len(data), 0)
|
||||
})
|
||||
|
||||
t.Run("empty input", func(t *testing.T) {
|
||||
generator := &MetaGenerator{}
|
||||
|
||||
data, err := generator.Generate()
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, data)
|
||||
assert.Greater(t, len(data), 0)
|
||||
})
|
||||
}
|
|
@ -2,11 +2,13 @@ package utils
|
|||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
|
||||
"github.com/distribution/reference"
|
||||
dockerconfig "github.com/docker/cli/cli/config"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
|
@ -20,9 +22,12 @@ type RegistryBackendConfig struct {
|
|||
}
|
||||
|
||||
type BackendProxyConfig struct {
|
||||
CacheDir string `json:"cache_dir"`
|
||||
URL string `json:"url"`
|
||||
Fallback bool `json:"fallback"`
|
||||
PingURL string `json:"ping_url"`
|
||||
Timeout int `json:"timeout"`
|
||||
ConnectTimeout int `json:"connect_timeout"`
|
||||
}
|
||||
|
||||
func NewRegistryBackendConfig(parsed reference.Named, insecure bool) (RegistryBackendConfig, error) {
|
||||
|
@ -55,3 +60,54 @@ func NewRegistryBackendConfig(parsed reference.Named, insecure bool) (RegistryBa
|
|||
|
||||
return backendConfig, nil
|
||||
}
|
||||
|
||||
// The external backend configuration extracted from the manifest is missing the runtime configuration.
|
||||
// Therefore, it is necessary to construct the runtime configuration using the available backend configuration.
|
||||
func BuildRuntimeExternalBackendConfig(backendConfig, externalBackendConfigPath string) error {
|
||||
extBkdCfg := backend.Backend{}
|
||||
extBkdCfgBytes, err := os.ReadFile(externalBackendConfigPath)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to read external backend config file")
|
||||
}
|
||||
|
||||
if err := json.Unmarshal(extBkdCfgBytes, &extBkdCfg); err != nil {
|
||||
return errors.Wrap(err, "failed to unmarshal external backend config file")
|
||||
}
|
||||
|
||||
bkdCfg := RegistryBackendConfig{}
|
||||
if err := json.Unmarshal([]byte(backendConfig), &bkdCfg); err != nil {
|
||||
return errors.Wrap(err, "failed to unmarshal registry backend config file")
|
||||
}
|
||||
|
||||
proxyURL := os.Getenv("NYDUS_EXTERNAL_PROXY_URL")
|
||||
if proxyURL == "" {
|
||||
proxyURL = bkdCfg.Proxy.URL
|
||||
}
|
||||
cacheDir := os.Getenv("NYDUS_EXTERNAL_PROXY_CACHE_DIR")
|
||||
if cacheDir == "" {
|
||||
cacheDir = bkdCfg.Proxy.CacheDir
|
||||
}
|
||||
|
||||
extBkdCfg.Backends[0].Config = map[string]interface{}{
|
||||
"scheme": bkdCfg.Scheme,
|
||||
"host": bkdCfg.Host,
|
||||
"repo": bkdCfg.Repo,
|
||||
"auth": bkdCfg.Auth,
|
||||
"timeout": 30,
|
||||
"connect_timeout": 5,
|
||||
"proxy": BackendProxyConfig{
|
||||
CacheDir: cacheDir,
|
||||
URL: proxyURL,
|
||||
Fallback: true,
|
||||
},
|
||||
}
|
||||
|
||||
extBkdCfgBytes, err = json.MarshalIndent(extBkdCfg, "", " ")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to marshal external backend config file")
|
||||
}
|
||||
if err = os.WriteFile(externalBackendConfigPath, extBkdCfgBytes, 0644); err != nil {
|
||||
return errors.Wrap(err, "failed to write external backend config file")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -0,0 +1,52 @@
|
|||
package utils
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestBuildExternalBackend(t *testing.T) {
|
||||
bkdCfg := RegistryBackendConfig{
|
||||
Host: "test.host",
|
||||
}
|
||||
bkdCfgBytes, err := json.Marshal(bkdCfg)
|
||||
require.NoError(t, err)
|
||||
|
||||
oldExtCfg := backend.Backend{
|
||||
Version: "test.ver",
|
||||
Backends: []backend.Config{
|
||||
{Type: "registry"},
|
||||
},
|
||||
}
|
||||
|
||||
t.Run("not exist", func(t *testing.T) {
|
||||
err = BuildRuntimeExternalBackendConfig(string(bkdCfgBytes), "not-exist")
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("normal", func(t *testing.T) {
|
||||
extFile, err := os.CreateTemp("/tmp", "external-backend-config")
|
||||
require.NoError(t, err)
|
||||
defer os.Remove(extFile.Name())
|
||||
|
||||
oldExtCfgBytes, err := json.Marshal(oldExtCfg)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = os.WriteFile(extFile.Name(), oldExtCfgBytes, 0644)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = BuildRuntimeExternalBackendConfig(string(bkdCfgBytes), extFile.Name())
|
||||
require.NoError(t, err)
|
||||
|
||||
newExtCfg := backend.Backend{}
|
||||
newExtCfgBytes, err := os.ReadFile(extFile.Name())
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, json.Unmarshal(newExtCfgBytes, &newExtCfg))
|
||||
assert.Equal(t, bkdCfg.Host, newExtCfg.Backends[0].Config["host"])
|
||||
})
|
||||
}
|
|
@ -8,6 +8,7 @@ const (
|
|||
ManifestOSFeatureNydus = "nydus.remoteimage.v1"
|
||||
MediaTypeNydusBlob = "application/vnd.oci.image.layer.nydus.blob.v1"
|
||||
BootstrapFileNameInLayer = "image/image.boot"
|
||||
BackendFileNameInLayer = "image/backend.json"
|
||||
|
||||
ManifestNydusCache = "containerd.io/snapshot/nydus-cache"
|
||||
|
||||
|
@ -17,6 +18,7 @@ const (
|
|||
LayerAnnotationNydusBootstrap = "containerd.io/snapshot/nydus-bootstrap"
|
||||
LayerAnnotationNydusFsVersion = "containerd.io/snapshot/nydus-fs-version"
|
||||
LayerAnnotationNydusSourceChainID = "containerd.io/snapshot/nydus-source-chainid"
|
||||
LayerAnnotationNydusArtifactType = "containerd.io/snapshot/nydus-artifact-type"
|
||||
|
||||
LayerAnnotationNydusReferenceBlobIDs = "containerd.io/snapshot/nydus-reference-blob-ids"
|
||||
|
||||
|
|
|
@ -6,6 +6,7 @@ package utils
|
|||
|
||||
import (
|
||||
"archive/tar"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
|
@ -18,6 +19,7 @@ import (
|
|||
"time"
|
||||
|
||||
"github.com/containerd/containerd/archive/compression"
|
||||
"github.com/goharbor/acceleration-service/pkg/errdefs"
|
||||
"github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/pkg/errors"
|
||||
|
@ -28,9 +30,6 @@ import (
|
|||
const SupportedOS = "linux"
|
||||
const SupportedArch = runtime.GOARCH
|
||||
|
||||
const defaultRetryAttempts = 3
|
||||
const defaultRetryInterval = time.Second * 2
|
||||
|
||||
const (
|
||||
PlatformArchAMD64 string = "amd64"
|
||||
PlatformArchARM64 string = "arm64"
|
||||
|
@ -59,27 +58,85 @@ func GetNydusFsVersionOrDefault(annotations map[string]string, defaultVersion Fs
|
|||
return defaultVersion
|
||||
}
|
||||
|
||||
func WithRetry(op func() error) error {
|
||||
var err error
|
||||
attempts := defaultRetryAttempts
|
||||
for attempts > 0 {
|
||||
// WithRetry retries the given function with the specified retry count and delay.
|
||||
// If retryCount is 0, it will use the default value of 3.
|
||||
// If retryDelay is 0, it will use the default value of 5 seconds.
|
||||
func WithRetry(f func() error, retryCount int, retryDelay time.Duration) error {
|
||||
const (
|
||||
defaultRetryCount = 3
|
||||
defaultRetryDelay = 5 * time.Second
|
||||
)
|
||||
|
||||
if retryCount <= 0 {
|
||||
retryCount = defaultRetryCount
|
||||
}
|
||||
if retryDelay <= 0 {
|
||||
retryDelay = defaultRetryDelay
|
||||
}
|
||||
|
||||
var lastErr error
|
||||
for i := 0; i < retryCount; i++ {
|
||||
if lastErr != nil {
|
||||
if !RetryWithHTTP(lastErr) {
|
||||
return lastErr
|
||||
}
|
||||
logrus.WithError(lastErr).
|
||||
WithField("attempt", i+1).
|
||||
WithField("total_attempts", retryCount).
|
||||
WithField("retry_delay", retryDelay.String()).
|
||||
Warn("Operation failed, will retry")
|
||||
time.Sleep(retryDelay)
|
||||
}
|
||||
if err := f(); err != nil {
|
||||
lastErr = err
|
||||
continue
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
if lastErr != nil {
|
||||
logrus.WithError(lastErr).
|
||||
WithField("total_attempts", retryCount).
|
||||
Error("Operation failed after all attempts")
|
||||
}
|
||||
|
||||
return lastErr
|
||||
}
|
||||
|
||||
func RetryWithAttempts(handle func() error, attempts int) error {
|
||||
for {
|
||||
attempts--
|
||||
if err != nil {
|
||||
if RetryWithHTTP(err) {
|
||||
err := handle()
|
||||
if err == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if attempts > 0 && !errors.Is(err, context.Canceled) {
|
||||
logrus.WithError(err).Warnf("retry (remain %d times)", attempts)
|
||||
continue
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
logrus.Warnf("Retry due to error: %s", err)
|
||||
time.Sleep(defaultRetryInterval)
|
||||
}
|
||||
if err = op(); err == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func RetryWithHTTP(err error) bool {
|
||||
return err != nil && (errors.Is(err, http.ErrSchemeMismatch) || errors.Is(err, syscall.ECONNREFUSED))
|
||||
if err == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
// Check for HTTP status code errors
|
||||
if strings.Contains(err.Error(), "503 Service Unavailable") ||
|
||||
strings.Contains(err.Error(), "502 Bad Gateway") ||
|
||||
strings.Contains(err.Error(), "504 Gateway Timeout") ||
|
||||
strings.Contains(err.Error(), "401 Unauthorized") {
|
||||
return true
|
||||
}
|
||||
|
||||
// Check for connection errors
|
||||
return errors.Is(err, http.ErrSchemeMismatch) ||
|
||||
errors.Is(err, syscall.ECONNREFUSED) ||
|
||||
errdefs.NeedsRetryWithHTTP(err)
|
||||
}
|
||||
|
||||
func MarshalToDesc(data interface{}, mediaType string) (*ocispec.Descriptor, []byte, error) {
|
||||
|
|
|
@ -8,12 +8,15 @@ package utils
|
|||
import (
|
||||
"archive/tar"
|
||||
"compress/gzip"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"strings"
|
||||
"syscall"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
|
@ -241,12 +244,14 @@ func TestWithRetry(t *testing.T) {
|
|||
err := WithRetry(func() error {
|
||||
_, err := http.Get("http://localhost:5000")
|
||||
return err
|
||||
})
|
||||
}, 3, 5*time.Second)
|
||||
require.ErrorIs(t, err, syscall.ECONNREFUSED)
|
||||
}
|
||||
|
||||
func TestRetryWithHTTP(t *testing.T) {
|
||||
require.True(t, RetryWithHTTP(errors.Wrap(http.ErrSchemeMismatch, "parse Nydus image")))
|
||||
require.True(t, RetryWithHTTP(fmt.Errorf("dial tcp 192.168.0.1:443: i/o timeout")))
|
||||
require.True(t, RetryWithHTTP(fmt.Errorf("dial tcp 192.168.0.1:443: connect: connection refused")))
|
||||
require.False(t, RetryWithHTTP(nil))
|
||||
}
|
||||
|
||||
|
@ -270,3 +275,30 @@ func TestGetNydusFsVersionOrDefault(t *testing.T) {
|
|||
fsVersion = GetNydusFsVersionOrDefault(testAnnotations, V5)
|
||||
require.Equal(t, fsVersion, V5)
|
||||
}
|
||||
|
||||
func TestRetryWithAttempts_SuccessOnFirstAttempt(t *testing.T) {
|
||||
err := RetryWithAttempts(func() error {
|
||||
return nil
|
||||
}, 3)
|
||||
require.NoError(t, err)
|
||||
|
||||
attempts := 0
|
||||
err = RetryWithAttempts(func() error {
|
||||
attempts++
|
||||
if attempts == 1 {
|
||||
return errors.New("first attempt failed")
|
||||
}
|
||||
return nil
|
||||
}, 3)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = RetryWithAttempts(func() error {
|
||||
return errors.New("always fails")
|
||||
}, 3)
|
||||
require.Error(t, err)
|
||||
|
||||
err = RetryWithAttempts(func() error {
|
||||
return context.Canceled
|
||||
}, 3)
|
||||
require.Equal(t, context.Canceled, err)
|
||||
}
|
||||
|
|
|
@ -6,6 +6,7 @@ import (
|
|||
"os"
|
||||
"os/signal"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"syscall"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
@ -69,6 +70,7 @@ func New(opt Opt) (*FsViewer, error) {
|
|||
BackendType: opt.BackendType,
|
||||
BackendConfig: opt.BackendConfig,
|
||||
BootstrapPath: filepath.Join(opt.WorkDir, "nydus_bootstrap"),
|
||||
ExternalBackendConfigPath: filepath.Join(opt.WorkDir, "nydus_external_backend"),
|
||||
ConfigPath: filepath.Join(opt.WorkDir, "fs/nydusd_config.json"),
|
||||
BlobCacheDir: filepath.Join(opt.WorkDir, "fs/nydus_blobs"),
|
||||
MountPath: opt.MountPath,
|
||||
|
@ -109,19 +111,34 @@ func (fsViewer *FsViewer) PullBootstrap(ctx context.Context, targetParsed *parse
|
|||
return errors.Wrap(err, "output Nydus config file")
|
||||
}
|
||||
|
||||
target := filepath.Join(fsViewer.WorkDir, "nydus_bootstrap")
|
||||
target := fsViewer.NydusdConfig.BootstrapPath
|
||||
logrus.Infof("Pulling Nydus bootstrap to %s", target)
|
||||
bootstrapReader, err := fsViewer.Parser.PullNydusBootstrap(ctx, targetParsed.NydusImage)
|
||||
if err := fsViewer.getBootstrapFile(ctx, targetParsed.NydusImage, utils.BootstrapFileNameInLayer, target); err != nil {
|
||||
return errors.Wrap(err, "failed to unpack Nydus bootstrap layer")
|
||||
}
|
||||
|
||||
logrus.Infof("Pulling Nydus external backend to %s", target)
|
||||
target = fsViewer.NydusdConfig.ExternalBackendConfigPath
|
||||
if err := fsViewer.getBootstrapFile(ctx, targetParsed.NydusImage, utils.BackendFileNameInLayer, target); err != nil {
|
||||
if !strings.Contains(err.Error(), "Not found") {
|
||||
return errors.Wrap(err, "failed to unpack Nydus external backend layer")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (fsViewer *FsViewer) getBootstrapFile(ctx context.Context, image *parser.Image, source, target string) error {
|
||||
bootstrapReader, err := fsViewer.Parser.PullNydusBootstrap(ctx, image)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to pull Nydus bootstrap layer")
|
||||
}
|
||||
defer bootstrapReader.Close()
|
||||
|
||||
if err := utils.UnpackFile(bootstrapReader, utils.BootstrapFileNameInLayer, target); err != nil {
|
||||
if err := utils.UnpackFile(bootstrapReader, source, target); err != nil {
|
||||
return errors.Wrap(err, "failed to unpack Nydus bootstrap layer")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
@ -175,6 +192,10 @@ func (fsViewer *FsViewer) view(ctx context.Context) error {
|
|||
return errors.Wrap(err, "failed to pull Nydus image bootstrap")
|
||||
}
|
||||
|
||||
if err = fsViewer.handleExternalBackendConfig(); err != nil {
|
||||
return errors.Wrap(err, "failed to handle external backend config")
|
||||
}
|
||||
|
||||
// Adjust nydusd parameters(DigestValidate) according to rafs format
|
||||
nydusManifest := parser.FindNydusBootstrapDesc(&targetParsed.NydusImage.Manifest)
|
||||
if nydusManifest != nil {
|
||||
|
@ -211,3 +232,11 @@ func (fsViewer *FsViewer) view(ctx context.Context) error {
|
|||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (fsViewer *FsViewer) handleExternalBackendConfig() error {
|
||||
extBkdCfgPath := fsViewer.NydusdConfig.ExternalBackendConfigPath
|
||||
if _, err := os.Stat(extBkdCfgPath); os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
return utils.BuildRuntimeExternalBackendConfig(fsViewer.BackendConfig, extBkdCfgPath)
|
||||
}
|
||||
|
|
|
@ -0,0 +1,151 @@
|
|||
package viewer
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/agiledragon/gomonkey/v2"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestNewFsViewer(t *testing.T) {
|
||||
var remoter = remote.Remote{}
|
||||
defaultRemotePatches := gomonkey.ApplyFunc(provider.DefaultRemote, func(string, bool) (*remote.Remote, error) {
|
||||
return &remoter, nil
|
||||
})
|
||||
defer defaultRemotePatches.Reset()
|
||||
|
||||
var targetParser = parser.Parser{}
|
||||
parserNewPatches := gomonkey.ApplyFunc(parser.New, func(*remote.Remote, string) (*parser.Parser, error) {
|
||||
return &targetParser, nil
|
||||
})
|
||||
defer parserNewPatches.Reset()
|
||||
opt := Opt{
|
||||
Target: "test",
|
||||
}
|
||||
fsViewer, err := New(opt)
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, fsViewer)
|
||||
}
|
||||
|
||||
func TestPullBootstrap(t *testing.T) {
|
||||
opt := Opt{
|
||||
WorkDir: "/tmp/nydusify/fsviwer",
|
||||
}
|
||||
fsViwer := FsViewer{
|
||||
Opt: opt,
|
||||
}
|
||||
os.MkdirAll(fsViwer.WorkDir, 0755)
|
||||
defer os.RemoveAll(fsViwer.WorkDir)
|
||||
targetParsed := &parser.Parsed{
|
||||
NydusImage: &parser.Image{},
|
||||
}
|
||||
err := fsViwer.PullBootstrap(context.Background(), targetParsed)
|
||||
assert.Error(t, err)
|
||||
callCount := 0
|
||||
getBootstrapPatches := gomonkey.ApplyPrivateMethod(&fsViwer, "getBootstrapFile", func(context.Context, *parser.Image, string, string) error {
|
||||
if callCount == 0 {
|
||||
callCount++
|
||||
return nil
|
||||
}
|
||||
return errors.New("failed to pull Nydus bootstrap layer mock error")
|
||||
})
|
||||
defer getBootstrapPatches.Reset()
|
||||
err = fsViwer.PullBootstrap(context.Background(), targetParsed)
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestGetBootstrapFile(t *testing.T) {
|
||||
opt := Opt{
|
||||
WorkDir: "/tmp/nydusify/fsviwer",
|
||||
}
|
||||
fsViwer := FsViewer{
|
||||
Opt: opt,
|
||||
Parser: &parser.Parser{},
|
||||
}
|
||||
t.Run("Run pull bootstrap failed", func(t *testing.T) {
|
||||
pullNydusBootstrapPatches := gomonkey.ApplyMethod(fsViwer.Parser, "PullNydusBootstrap", func(*parser.Parser, context.Context, *parser.Image) (io.ReadCloser, error) {
|
||||
return nil, errors.New("failed to pull Nydus bootstrap layer mock error")
|
||||
})
|
||||
defer pullNydusBootstrapPatches.Reset()
|
||||
image := &parser.Image{}
|
||||
err := fsViwer.getBootstrapFile(context.Background(), image, "", "")
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run unpack failed", func(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
pullNydusBootstrapPatches := gomonkey.ApplyMethod(fsViwer.Parser, "PullNydusBootstrap", func(*parser.Parser, context.Context, *parser.Image) (io.ReadCloser, error) {
|
||||
return io.NopCloser(&buf), nil
|
||||
})
|
||||
defer pullNydusBootstrapPatches.Reset()
|
||||
image := &parser.Image{}
|
||||
err := fsViwer.getBootstrapFile(context.Background(), image, "", "")
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run normal", func(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
pullNydusBootstrapPatches := gomonkey.ApplyMethod(fsViwer.Parser, "PullNydusBootstrap", func(*parser.Parser, context.Context, *parser.Image) (io.ReadCloser, error) {
|
||||
return io.NopCloser(&buf), nil
|
||||
})
|
||||
defer pullNydusBootstrapPatches.Reset()
|
||||
|
||||
unpackPatches := gomonkey.ApplyFunc(utils.UnpackFile, func(io.Reader, string, string) error {
|
||||
return nil
|
||||
})
|
||||
defer unpackPatches.Reset()
|
||||
image := &parser.Image{}
|
||||
err := fsViwer.getBootstrapFile(context.Background(), image, "", "")
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
func TestHandleExternalBackendConfig(t *testing.T) {
|
||||
backend := &backend.Backend{
|
||||
Backends: []backend.Config{
|
||||
{
|
||||
Type: "registry",
|
||||
},
|
||||
},
|
||||
}
|
||||
bkdConfig, err := json.Marshal(backend)
|
||||
require.NoError(t, err)
|
||||
opt := Opt{
|
||||
WorkDir: "/tmp/nydusify/fsviwer",
|
||||
BackendConfig: string(bkdConfig),
|
||||
}
|
||||
fsViwer := FsViewer{
|
||||
Opt: opt,
|
||||
Parser: &parser.Parser{},
|
||||
}
|
||||
t.Run("Run not exist", func(t *testing.T) {
|
||||
err := fsViwer.handleExternalBackendConfig()
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
|
||||
t.Run("Run normal", func(t *testing.T) {
|
||||
osStatPatches := gomonkey.ApplyFunc(os.Stat, func(string) (os.FileInfo, error) {
|
||||
return nil, nil
|
||||
})
|
||||
defer osStatPatches.Reset()
|
||||
|
||||
buildExternalConfigPatches := gomonkey.ApplyFunc(utils.BuildRuntimeExternalBackendConfig, func(string, string) error {
|
||||
return nil
|
||||
})
|
||||
defer buildExternalConfigPatches.Reset()
|
||||
err := fsViwer.handleExternalBackendConfig()
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
|
@ -40,7 +40,9 @@ db-urls = ["https://github.com/rustsec/advisory-db"]
|
|||
yanked = "warn"
|
||||
# A list of advisory IDs to ignore. Note that ignored advisories will still
|
||||
# output a note when they are encountered.
|
||||
ignore = []
|
||||
ignore = [
|
||||
{ id = "RUSTSEC-2024-0436", reason = "No safe upgrade is available!" },
|
||||
]
|
||||
# Threshold for security vulnerabilities, any vulnerability with a CVSS score
|
||||
# lower than the range specified will be ignored. Note that ignored advisories
|
||||
# will still output a note when they are encountered.
|
||||
|
|
|
@ -319,64 +319,8 @@ or
|
|||
}
|
||||
}
|
||||
```
|
||||
|
||||
The `HttpProxy` backend also supports the `Proxy` and `Mirrors` configurations for remote usage like the `Registry backend` described above.
|
||||
|
||||
##### Enable Mirrors for Storage Backend (Recommend)
|
||||
|
||||
Nydus is deeply integrated with [Dragonfly](https://d7y.io/) P2P mirror mode, please refer the [doc](https://d7y.io/docs/next/operations/integrations/container-runtime/nydus/) to learn how configuring Nydus to use Dragonfly.
|
||||
|
||||
Add `device.backend.config.mirrors` field to enable mirrors for storage backend. The mirror can be a P2P distribution server or registry. If the request to mirror server failed, it will fall back to the original registry.
|
||||
Currently, the mirror mode is only tested in the registry backend, and in theory, the OSS backend also supports it.
|
||||
|
||||
<font color='red'>!!</font> The `mirrors` field conflicts with `proxy` field.
|
||||
|
||||
```
|
||||
{
|
||||
"device": {
|
||||
"backend": {
|
||||
"type": "registry",
|
||||
"config": {
|
||||
"mirrors": [
|
||||
{
|
||||
// Mirror server URL (including scheme), e.g. Dragonfly dfdaemon server URL
|
||||
"host": "http://dragonfly1.io:65001",
|
||||
// Headers for mirror server
|
||||
"headers": {
|
||||
// For Dragonfly dfdaemon server URL, we need to specify "X-Dragonfly-Registry" (including scheme).
|
||||
// When Dragonfly does not cache data, nydusd will pull it from "X-Dragonfly-Registry".
|
||||
// If not set "X-Dragonfly-Registry", Dragonfly will pull data from proxy.registryMirror.url.
|
||||
"X-Dragonfly-Registry": "https://index.docker.io"
|
||||
},
|
||||
// This URL endpoint is used to check the health of mirror server, and if the mirror is unhealthy,
|
||||
// the request will fallback to the next mirror or the original registry server.
|
||||
// Use $host/v2 as default if left empty.
|
||||
"ping_url": "http://127.0.0.1:40901/server/ping",
|
||||
// Interval time (s) to check and recover unavailable mirror. Use 5 as default if left empty.
|
||||
"health_check_interval": 5,
|
||||
// Failure counts before disabling this mirror. Use 5 as default if left empty.
|
||||
"failure_limit": 5,
|
||||
// Elapsed time to pause mirror health check when the request is inactive, in seconds.
|
||||
// Use 300 as default if left empty.
|
||||
"health_check_pause_elapsed": 300,
|
||||
},
|
||||
{
|
||||
"host": "http://dragonfly2.io:65001",
|
||||
"headers": {
|
||||
"X-Dragonfly-Registry": "https://index.docker.io"
|
||||
},
|
||||
}
|
||||
],
|
||||
...
|
||||
}
|
||||
},
|
||||
...
|
||||
},
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
-
|
||||
- The `HttpProxy` backend also supports the `Proxy` configuration for remote usage like the `Registry backend` described above.
|
||||
##### Enable P2P Proxy for Storage Backend
|
||||
|
||||
Add `device.backend.config.proxy` field to enable HTTP proxy for storage backend. For example, use P2P distribution service to reduce network workload and latency in large scale container cluster using [Dragonfly](https://d7y.io/) (enable centralized dfdaemon mode).
|
||||
|
|
|
@ -0,0 +1,119 @@
|
|||
#!/bin/bash
|
||||
|
||||
GOOS=$(go env GOOS)
|
||||
if [ -z "$GOARCH" ]; then
|
||||
GOARCH=$(go env GOARCH)
|
||||
echo "GOARCH is not set, use GOARCH=$GOARCH"
|
||||
fi
|
||||
|
||||
cat <<EOF > .goreleaser.yml
|
||||
version: 2
|
||||
release:
|
||||
draft: true
|
||||
replace_existing_draft: true
|
||||
|
||||
before:
|
||||
hooks:
|
||||
- go mod download
|
||||
|
||||
builds:
|
||||
- main: contrib/goreleaser/main.go
|
||||
id: nydusify
|
||||
binary: nydusify
|
||||
goos:
|
||||
- $GOOS
|
||||
goarch:
|
||||
- $GOARCH
|
||||
env:
|
||||
- CGO_ENABLED=0
|
||||
hooks:
|
||||
post:
|
||||
- cp nydus-static/{{ .Name }} dist/{{ .Name }}_{{ .Target }}/{{ .Name }}
|
||||
- main: contrib/goreleaser/main.go
|
||||
id: nydus-overlayfs
|
||||
binary: nydus-overlayfs
|
||||
goos:
|
||||
- $GOOS
|
||||
goarch:
|
||||
- $GOARCH
|
||||
env:
|
||||
- CGO_ENABLED=0
|
||||
hooks:
|
||||
post:
|
||||
- cp nydus-static/{{ .Name }} dist/{{ .Name }}_{{ .Target }}/{{ .Name }}
|
||||
- main: contrib/goreleaser/main.go
|
||||
id: nydus-image
|
||||
binary: nydus-image
|
||||
goos:
|
||||
- $GOOS
|
||||
goarch:
|
||||
- $GOARCH
|
||||
env:
|
||||
- CGO_ENABLED=0
|
||||
hooks:
|
||||
post:
|
||||
- cp nydus-static/{{ .Name }} dist/{{ .Name }}_{{ .Target }}/{{ .Name }}
|
||||
- main: contrib/goreleaser/main.go
|
||||
id: nydusctl
|
||||
binary: nydusctl
|
||||
goos:
|
||||
- $GOOS
|
||||
goarch:
|
||||
- $GOARCH
|
||||
env:
|
||||
- CGO_ENABLED=0
|
||||
hooks:
|
||||
post:
|
||||
- cp nydus-static/{{ .Name }} dist/{{ .Name }}_{{ .Target }}/{{ .Name }}
|
||||
- main: contrib/goreleaser/main.go
|
||||
id: nydusd
|
||||
binary: nydusd
|
||||
goos:
|
||||
- $GOOS
|
||||
goarch:
|
||||
- $GOARCH
|
||||
env:
|
||||
- CGO_ENABLED=0
|
||||
hooks:
|
||||
post:
|
||||
- cp nydus-static/{{ .Name }} dist/{{ .Name }}_{{ .Target }}/{{ .Name }}
|
||||
|
||||
archives:
|
||||
- name_template: "nydus-static-{{ .Version }}-{{ .Os }}-{{ .Arch }}"
|
||||
formats: ["zip"]
|
||||
|
||||
checksum:
|
||||
name_template: "checksums-{{ .Version }}-$GOOS-$GOARCH.txt"
|
||||
|
||||
snapshot:
|
||||
version_template: "{{ .Tag }}-next"
|
||||
|
||||
changelog:
|
||||
sort: asc
|
||||
filters:
|
||||
exclude:
|
||||
- "^docs:"
|
||||
- "^test:"
|
||||
|
||||
nfpms:
|
||||
- id: nydus-static
|
||||
maintainer: Nydus Maintainers <dragonfly-maintainers@googlegroups.com>
|
||||
file_name_template: "nydus-static-{{ .Version }}-{{ .Os }}-{{ .Arch }}"
|
||||
package_name: nydus-static
|
||||
description: Static binaries of Nydus, designed for building and mounting Nydus images.
|
||||
license: "Apache 2.0"
|
||||
bindir: /usr/bin
|
||||
ids:
|
||||
- nydusify
|
||||
- nydus-overlayfs
|
||||
- nydus-image
|
||||
- nydusctl
|
||||
- nydusd
|
||||
|
||||
formats:
|
||||
- rpm
|
||||
- deb
|
||||
contents:
|
||||
- src: nydus-static/configs
|
||||
dst: /etc/nydus/configs
|
||||
EOF
|
|
@ -50,18 +50,6 @@ check_interval = 5
|
|||
# Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
|
||||
use_http = false
|
||||
|
||||
[[backend.oss.mirrors]]
|
||||
# Mirror server URL, for example http://127.0.0.1:65001.
|
||||
host = "http://127.0.0.1:65001"
|
||||
# Ping URL to check mirror server health.
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
# HTTP request headers to be passed to mirror server.
|
||||
# headers =
|
||||
# Interval for mirror health checking, in seconds.
|
||||
health_check_interval = 5
|
||||
# Maximum number of failures before marking a mirror as unusable.
|
||||
failure_limit = 5
|
||||
|
||||
[backend.registy]
|
||||
# Registry http scheme, either 'http' or 'https'
|
||||
scheme = "https"
|
||||
|
@ -99,18 +87,6 @@ check_interval = 5
|
|||
# Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
|
||||
use_http = false
|
||||
|
||||
[[backend.registry.mirrors]]
|
||||
# Mirror server URL, for example http://127.0.0.1:65001.
|
||||
host = "http://127.0.0.1:65001"
|
||||
# Ping URL to check mirror server health.
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
# HTTP request headers to be passed to mirror server.
|
||||
# headers =
|
||||
# Interval for mirror health checking, in seconds.
|
||||
health_check_interval = 5
|
||||
# Maximum number of failures before marking a mirror as unusable.
|
||||
failure_limit = 5
|
||||
|
||||
[cache]
|
||||
# Type of blob cache: "blobcache", "filecache", "fscache", "dummycache" or ""
|
||||
type = "filecache"
|
||||
|
|
|
@ -55,18 +55,6 @@ check_interval = 5
|
|||
# Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
|
||||
use_http = false
|
||||
|
||||
[[config_v2.backend.oss.mirrors]]
|
||||
# Mirror server URL, for example http://127.0.0.1:65001.
|
||||
host = "http://127.0.0.1:65001"
|
||||
# Ping URL to check mirror server health.
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
# HTTP request headers to be passed to mirror server.
|
||||
# headers =
|
||||
# Interval for mirror health checking, in seconds.
|
||||
health_check_interval = 5
|
||||
# Maximum number of failures before marking a mirror as unusable.
|
||||
failure_limit = 5
|
||||
|
||||
[config_v2.backend.registry]
|
||||
# Registry http scheme, either 'http' or 'https'
|
||||
scheme = "https"
|
||||
|
@ -104,18 +92,6 @@ check_interval = 5
|
|||
# Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
|
||||
use_http = false
|
||||
|
||||
[[config_v2.backend.registry.mirrors]]
|
||||
# Mirror server URL, for example http://127.0.0.1:65001.
|
||||
host = "http://127.0.0.1:65001"
|
||||
# Ping URL to check mirror server health.
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
# HTTP request headers to be passed to mirror server.
|
||||
# headers =
|
||||
# Interval for mirror health checking, in seconds.
|
||||
health_check_interval = 5
|
||||
# Maximum number of failures before marking a mirror as unusable.
|
||||
failure_limit = 5
|
||||
|
||||
[config_v2.cache]
|
||||
# Type of blob cache: "blobcache", "filecache", "fscache", "dummycache" or ""
|
||||
type = "filecache"
|
||||
|
|
|
@ -48,18 +48,6 @@ check_interval = 5
|
|||
# Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
|
||||
use_http = false
|
||||
|
||||
[[backend.oss.mirrors]]
|
||||
# Mirror server URL, for example http://127.0.0.1:65001.
|
||||
host = "http://127.0.0.1:65001"
|
||||
# Ping URL to check mirror server health.
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
# HTTP request headers to be passed to mirror server.
|
||||
# headers =
|
||||
# Interval for mirror health checking, in seconds.
|
||||
health_check_interval = 5
|
||||
# Maximum number of failures before marking a mirror as unusable.
|
||||
failure_limit = 5
|
||||
|
||||
[backend.registy]
|
||||
# Registry http scheme, either 'http' or 'https'
|
||||
scheme = "https"
|
||||
|
@ -97,17 +85,6 @@ check_interval = 5
|
|||
# Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
|
||||
use_http = false
|
||||
|
||||
[[backend.registry.mirrors]]
|
||||
# Mirror server URL, for example http://127.0.0.1:65001.
|
||||
host = "http://127.0.0.1:65001"
|
||||
# Ping URL to check mirror server health.
|
||||
ping_url = "http://127.0.0.1:65001/ping"
|
||||
# HTTP request headers to be passed to mirror server.
|
||||
# headers =
|
||||
# Interval for mirror health checking, in seconds.
|
||||
health_check_interval = 5
|
||||
# Maximum number of failures before marking a mirror as unusable.
|
||||
failure_limit = 5
|
||||
|
||||
[cache]
|
||||
# Type of blob cache: "blobcache", "filecache", "fscache", "dummycache" or ""
|
||||
|
|
|
@ -63,12 +63,6 @@ address = ":9110"
|
|||
[remote]
|
||||
convert_vpc_registry = false
|
||||
|
||||
[remote.mirrors_config]
|
||||
# Snapshotter will overwrite daemon's mirrors configuration
|
||||
# if the values loaded from this driectory are not null before starting a daemon.
|
||||
# Set to "" or an empty directory to disable it.
|
||||
#dir = "/etc/nydus/certs.d"
|
||||
|
||||
[remote.auth]
|
||||
# Fetch the private registry auth by listening to K8s API server
|
||||
enable_kubeconfig_keychain = false
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
[package]
|
||||
name = "nydus-rafs"
|
||||
version = "0.3.2"
|
||||
version = "0.4.0"
|
||||
description = "The RAFS filesystem format for Nydus Image Service"
|
||||
authors = ["The Nydus Developers"]
|
||||
license = "Apache-2.0 OR BSD-3-Clause"
|
||||
|
@ -22,11 +22,11 @@ vm-memory = "0.14.1"
|
|||
fuse-backend-rs = "^0.12.0"
|
||||
thiserror = "1"
|
||||
|
||||
nydus-api = { version = "0.3", path = "../api" }
|
||||
nydus-storage = { version = "0.6", path = "../storage", features = [
|
||||
nydus-api = { version = "0.4.0", path = "../api" }
|
||||
nydus-storage = { version = "0.7.0", path = "../storage", features = [
|
||||
"backend-localfs",
|
||||
] }
|
||||
nydus-utils = { version = "0.4", path = "../utils" }
|
||||
nydus-utils = { version = "0.5.0", path = "../utils" }
|
||||
|
||||
[dev-dependencies]
|
||||
vmm-sys-util = "0.12.1"
|
||||
|
|
|
@ -707,6 +707,7 @@ pub struct CachedChunkInfoV5 {
|
|||
compressed_size: u32,
|
||||
uncompressed_size: u32,
|
||||
flags: BlobChunkFlags,
|
||||
crc32: u32,
|
||||
}
|
||||
|
||||
impl CachedChunkInfoV5 {
|
||||
|
@ -761,6 +762,17 @@ impl BlobChunkInfo for CachedChunkInfoV5 {
|
|||
false
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
self.flags.contains(BlobChunkFlags::HAS_CRC32)
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
if self.has_crc32() {
|
||||
self.crc32
|
||||
} else {
|
||||
0
|
||||
}
|
||||
}
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
@ -814,6 +826,7 @@ mod cached_tests {
|
|||
use crate::metadata::layout::{RafsXAttrs, RAFS_V5_ROOT_INODE};
|
||||
use crate::metadata::{
|
||||
RafsInode, RafsInodeWalkAction, RafsStore, RafsSuperBlock, RafsSuperInodes, RafsSuperMeta,
|
||||
RAFS_MAX_NAME,
|
||||
};
|
||||
use crate::{BufWriter, RafsInodeExt, RafsIoRead, RafsIoReader};
|
||||
use vmm_sys_util::tempfile::TempFile;
|
||||
|
@ -1182,4 +1195,833 @@ mod cached_tests {
|
|||
assert!(info.is_compressed());
|
||||
assert!(!info.is_encrypted());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_validation_errors() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
|
||||
// Test invalid inode number (0)
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 0;
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test invalid nlink (0)
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 1;
|
||||
inode.i_nlink = 0;
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test invalid parent for non-root inode
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 2;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_parent = 0;
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test invalid name length
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 1;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_name = OsString::from("a".repeat(RAFS_MAX_NAME + 1));
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test empty name
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 1;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_name = OsString::new();
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test invalid parent inode (parent >= child for non-hardlink)
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 5;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_parent = 10;
|
||||
inode.i_name = OsString::from("test");
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_file_type_validation() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
|
||||
// Test regular file with invalid chunk count
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 1;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_name = OsString::from("test");
|
||||
inode.i_mode = libc::S_IFREG as u32;
|
||||
inode.i_size = 2048; // 2 chunks of 1024 bytes
|
||||
inode.i_data = vec![]; // But no chunks
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test regular file with invalid block count
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 1;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_name = OsString::from("test");
|
||||
inode.i_mode = libc::S_IFREG as u32;
|
||||
inode.i_size = 1024;
|
||||
inode.i_blocks = 100; // Invalid block count
|
||||
inode.i_data = vec![Arc::new(CachedChunkInfoV5::new())];
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test directory with invalid child index
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 5;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_name = OsString::from("test_dir");
|
||||
inode.i_mode = libc::S_IFDIR as u32;
|
||||
inode.i_child_cnt = 1;
|
||||
inode.i_child_idx = 3; // child_idx <= inode number is invalid
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
|
||||
// Test symlink with empty target
|
||||
let mut inode = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
inode.i_ino = 1;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_name = OsString::from("test_link");
|
||||
inode.i_mode = libc::S_IFLNK as u32;
|
||||
inode.i_target = OsString::new(); // Empty target
|
||||
assert!(inode.validate(100, 1024).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_file_type_checks() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Test block device
|
||||
inode.i_mode = libc::S_IFBLK as u32;
|
||||
assert!(inode.is_blkdev());
|
||||
assert!(!inode.is_chrdev());
|
||||
assert!(!inode.is_sock());
|
||||
assert!(!inode.is_fifo());
|
||||
assert!(!inode.is_dir());
|
||||
assert!(!inode.is_symlink());
|
||||
assert!(!inode.is_reg());
|
||||
|
||||
// Test character device
|
||||
inode.i_mode = libc::S_IFCHR as u32;
|
||||
assert!(!inode.is_blkdev());
|
||||
assert!(inode.is_chrdev());
|
||||
assert!(!inode.is_sock());
|
||||
assert!(!inode.is_fifo());
|
||||
|
||||
// Test socket
|
||||
inode.i_mode = libc::S_IFSOCK as u32;
|
||||
assert!(!inode.is_blkdev());
|
||||
assert!(!inode.is_chrdev());
|
||||
assert!(inode.is_sock());
|
||||
assert!(!inode.is_fifo());
|
||||
|
||||
// Test FIFO
|
||||
inode.i_mode = libc::S_IFIFO as u32;
|
||||
assert!(!inode.is_blkdev());
|
||||
assert!(!inode.is_chrdev());
|
||||
assert!(!inode.is_sock());
|
||||
assert!(inode.is_fifo());
|
||||
|
||||
// Test hardlink detection
|
||||
inode.i_mode = libc::S_IFREG as u32;
|
||||
inode.i_nlink = 2;
|
||||
assert!(inode.is_hardlink());
|
||||
|
||||
inode.i_mode = libc::S_IFDIR as u32;
|
||||
inode.i_nlink = 2;
|
||||
assert!(!inode.is_hardlink()); // Directories are not considered hardlinks
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_xattr_operations() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Test xattr flag
|
||||
inode.i_flags = RafsInodeFlags::XATTR;
|
||||
assert!(inode.has_xattr());
|
||||
|
||||
// Add some xattrs
|
||||
inode
|
||||
.i_xattr
|
||||
.insert(OsString::from("user.test1"), vec![1, 2, 3]);
|
||||
inode
|
||||
.i_xattr
|
||||
.insert(OsString::from("user.test2"), vec![4, 5, 6]);
|
||||
|
||||
// Test get_xattr
|
||||
let value = inode.get_xattr(OsStr::new("user.test1")).unwrap();
|
||||
assert_eq!(value, Some(vec![1, 2, 3]));
|
||||
|
||||
let value = inode.get_xattr(OsStr::new("user.nonexistent")).unwrap();
|
||||
assert_eq!(value, None);
|
||||
|
||||
// Test get_xattrs
|
||||
let xattrs = inode.get_xattrs().unwrap();
|
||||
assert_eq!(xattrs.len(), 2);
|
||||
assert!(xattrs.contains(&b"user.test1".to_vec()));
|
||||
assert!(xattrs.contains(&b"user.test2".to_vec()));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_symlink_operations() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Test non-symlink
|
||||
inode.i_mode = libc::S_IFREG as u32;
|
||||
assert!(inode.get_symlink().is_err());
|
||||
assert_eq!(inode.get_symlink_size(), 0);
|
||||
|
||||
// Test symlink
|
||||
inode.i_mode = libc::S_IFLNK as u32;
|
||||
inode.i_target = OsString::from("/path/to/target");
|
||||
|
||||
let target = inode.get_symlink().unwrap();
|
||||
assert_eq!(target, OsString::from("/path/to/target"));
|
||||
assert_eq!(inode.get_symlink_size(), "/path/to/target".len() as u16);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_child_operations() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut parent = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
parent.i_ino = 1;
|
||||
parent.i_mode = libc::S_IFDIR as u32;
|
||||
parent.i_child_cnt = 2;
|
||||
|
||||
// Create child inodes
|
||||
let mut child1 = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
child1.i_ino = 2;
|
||||
child1.i_name = OsString::from("child_b");
|
||||
child1.i_mode = libc::S_IFREG as u32;
|
||||
|
||||
let mut child2 = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
child2.i_ino = 3;
|
||||
child2.i_name = OsString::from("child_a");
|
||||
child2.i_mode = libc::S_IFREG as u32;
|
||||
|
||||
// Add children (they should be sorted by name)
|
||||
parent.add_child(Arc::new(child1));
|
||||
parent.add_child(Arc::new(child2));
|
||||
|
||||
// Test children are sorted
|
||||
assert_eq!(parent.i_child[0].i_name, OsString::from("child_a"));
|
||||
assert_eq!(parent.i_child[1].i_name, OsString::from("child_b"));
|
||||
|
||||
// Test get_child_by_name
|
||||
let child = parent.get_child_by_name(OsStr::new("child_a")).unwrap();
|
||||
assert_eq!(child.ino(), 3);
|
||||
|
||||
assert!(parent.get_child_by_name(OsStr::new("nonexistent")).is_err());
|
||||
|
||||
// Test get_child_by_index
|
||||
let child = parent.get_child_by_index(0).unwrap();
|
||||
assert_eq!(child.ino(), 3);
|
||||
|
||||
let child = parent.get_child_by_index(1).unwrap();
|
||||
assert_eq!(child.ino(), 2);
|
||||
|
||||
assert!(parent.get_child_by_index(2).is_err());
|
||||
|
||||
// Test get_child_count
|
||||
assert_eq!(parent.get_child_count(), 2);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_walk_children() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut parent = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
parent.i_ino = 1;
|
||||
parent.i_mode = libc::S_IFDIR as u32;
|
||||
parent.i_child_cnt = 1;
|
||||
|
||||
let mut child = CachedInodeV5::new(blob_table, meta);
|
||||
child.i_ino = 2;
|
||||
child.i_name = OsString::from("test_child");
|
||||
parent.add_child(Arc::new(child));
|
||||
|
||||
// Test walking from offset 0 (should see ".", "..", and "test_child")
|
||||
let mut entries = Vec::new();
|
||||
parent
|
||||
.walk_children_inodes(0, &mut |_node, name, ino, offset| {
|
||||
entries.push((name, ino, offset));
|
||||
Ok(RafsInodeWalkAction::Continue)
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(entries.len(), 3);
|
||||
assert_eq!(entries[0].0, OsString::from("."));
|
||||
assert_eq!(entries[0].1, 1); // parent inode
|
||||
assert_eq!(entries[1].0, OsString::from(".."));
|
||||
assert_eq!(entries[1].1, 1); // root case
|
||||
assert_eq!(entries[2].0, OsString::from("test_child"));
|
||||
assert_eq!(entries[2].1, 2);
|
||||
|
||||
// Test walking from offset 1 (should skip ".")
|
||||
let mut entries = Vec::new();
|
||||
parent
|
||||
.walk_children_inodes(1, &mut |_node, name, ino, _offset| {
|
||||
entries.push((name, ino));
|
||||
Ok(RafsInodeWalkAction::Continue)
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(entries.len(), 2);
|
||||
assert_eq!(entries[0].0, OsString::from(".."));
|
||||
assert_eq!(entries[1].0, OsString::from("test_child"));
|
||||
|
||||
// Test early break
|
||||
let mut count = 0;
|
||||
parent
|
||||
.walk_children_inodes(0, &mut |_node, _name, _ino, _offset| {
|
||||
count += 1;
|
||||
if count == 1 {
|
||||
Ok(RafsInodeWalkAction::Break)
|
||||
} else {
|
||||
Ok(RafsInodeWalkAction::Continue)
|
||||
}
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(count, 1);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_chunk_operations() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Add some chunks
|
||||
let mut chunk1 = CachedChunkInfoV5::new();
|
||||
chunk1.index = 0;
|
||||
chunk1.file_offset = 0;
|
||||
chunk1.uncompressed_size = 1024;
|
||||
|
||||
let mut chunk2 = CachedChunkInfoV5::new();
|
||||
chunk2.index = 1;
|
||||
chunk2.file_offset = 1024;
|
||||
chunk2.uncompressed_size = 1024;
|
||||
|
||||
inode.i_data.push(Arc::new(chunk1));
|
||||
inode.i_data.push(Arc::new(chunk2));
|
||||
|
||||
// Note: get_chunk_count() currently returns i_child_cnt, not i_data.len()
|
||||
// This appears to be a bug in the implementation, but we test current behavior
|
||||
assert_eq!(inode.get_chunk_count(), 0); // i_child_cnt is 0 by default
|
||||
|
||||
// Test get_chunk_info
|
||||
let chunk = inode.get_chunk_info(0).unwrap();
|
||||
assert_eq!(chunk.uncompressed_size(), 1024);
|
||||
|
||||
let chunk = inode.get_chunk_info(1).unwrap();
|
||||
assert_eq!(chunk.uncompressed_size(), 1024);
|
||||
|
||||
assert!(inode.get_chunk_info(2).is_err());
|
||||
|
||||
// Test actual data length
|
||||
assert_eq!(inode.i_data.len(), 2);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_collect_descendants() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
|
||||
// Create a directory structure
|
||||
let mut root = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
root.i_ino = 1;
|
||||
root.i_mode = libc::S_IFDIR as u32;
|
||||
root.i_size = 0;
|
||||
|
||||
let mut subdir = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
subdir.i_ino = 2;
|
||||
subdir.i_mode = libc::S_IFDIR as u32;
|
||||
subdir.i_size = 0;
|
||||
|
||||
let mut file1 = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
file1.i_ino = 3;
|
||||
file1.i_mode = libc::S_IFREG as u32;
|
||||
file1.i_size = 1024;
|
||||
|
||||
let mut file2 = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
file2.i_ino = 4;
|
||||
file2.i_mode = libc::S_IFREG as u32;
|
||||
file2.i_size = 0; // Empty file should be skipped
|
||||
|
||||
let mut file3 = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
file3.i_ino = 5;
|
||||
file3.i_mode = libc::S_IFREG as u32;
|
||||
file3.i_size = 2048;
|
||||
|
||||
// Build structure: root -> [subdir, file1, file2], subdir -> [file3]
|
||||
subdir.i_child.push(Arc::new(file3));
|
||||
root.i_child.push(Arc::new(subdir));
|
||||
root.i_child.push(Arc::new(file1));
|
||||
root.i_child.push(Arc::new(file2));
|
||||
|
||||
let mut descendants = Vec::new();
|
||||
root.collect_descendants_inodes(&mut descendants).unwrap();
|
||||
|
||||
// Should collect file1 (non-empty) and file3 (from subdirectory)
|
||||
// file2 should be skipped because it's empty
|
||||
assert_eq!(descendants.len(), 2);
|
||||
let inodes: Vec<u64> = descendants.iter().map(|d| d.ino()).collect();
|
||||
assert!(inodes.contains(&3)); // file1
|
||||
assert!(inodes.contains(&5)); // file3
|
||||
assert!(!inodes.contains(&4)); // file2 (empty)
|
||||
|
||||
// Test with non-directory
|
||||
let file = CachedInodeV5::new(blob_table, meta);
|
||||
let mut descendants = Vec::new();
|
||||
assert!(file.collect_descendants_inodes(&mut descendants).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_chunk_info_v5_detailed() {
|
||||
let mut info = CachedChunkInfoV5::new();
|
||||
info.block_id = Arc::new(RafsDigest::from_buf("test".as_bytes(), Algorithm::Blake3));
|
||||
info.blob_index = 42;
|
||||
info.index = 100;
|
||||
info.file_offset = 2048;
|
||||
info.compressed_offset = 1024;
|
||||
info.uncompressed_offset = 3072;
|
||||
info.compressed_size = 512;
|
||||
info.uncompressed_size = 1024;
|
||||
info.flags = BlobChunkFlags::COMPRESSED | BlobChunkFlags::HAS_CRC32;
|
||||
info.crc32 = 0x12345678;
|
||||
|
||||
// Test basic properties
|
||||
assert_eq!(info.id(), 100);
|
||||
assert!(!info.is_batch());
|
||||
assert!(info.is_compressed());
|
||||
assert!(!info.is_encrypted());
|
||||
assert!(info.has_crc32());
|
||||
assert_eq!(info.crc32(), 0x12345678);
|
||||
|
||||
// Test getters
|
||||
assert_eq!(info.blob_index(), 42);
|
||||
assert_eq!(info.compressed_offset(), 1024);
|
||||
assert_eq!(info.compressed_size(), 512);
|
||||
assert_eq!(info.uncompressed_offset(), 3072);
|
||||
assert_eq!(info.uncompressed_size(), 1024);
|
||||
|
||||
// Test V5-specific getters
|
||||
assert_eq!(info.index(), 100);
|
||||
assert_eq!(info.file_offset(), 2048);
|
||||
assert_eq!(
|
||||
info.flags(),
|
||||
BlobChunkFlags::COMPRESSED | BlobChunkFlags::HAS_CRC32
|
||||
);
|
||||
|
||||
// Test CRC32 without flag
|
||||
info.flags = BlobChunkFlags::COMPRESSED;
|
||||
assert!(!info.has_crc32());
|
||||
assert_eq!(info.crc32(), 0);
|
||||
|
||||
// Test as_base
|
||||
let base_info = info.as_base();
|
||||
assert_eq!(base_info.blob_index(), 42);
|
||||
assert!(base_info.is_compressed());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_superblock_v5_inode_management() {
|
||||
let md = RafsSuperMeta::default();
|
||||
let mut sb = CachedSuperBlockV5::new(md, false);
|
||||
|
||||
// Test empty superblock
|
||||
assert_eq!(sb.get_max_ino(), RAFS_V5_ROOT_INODE);
|
||||
assert!(sb.get_inode(1, false).is_err());
|
||||
assert!(sb.get_extended_inode(1, false).is_err());
|
||||
|
||||
// Test adding regular inode
|
||||
let mut inode1 = CachedInodeV5::new(sb.s_blob.clone(), sb.s_meta.clone());
|
||||
inode1.i_ino = 10;
|
||||
inode1.i_nlink = 1;
|
||||
inode1.i_mode = libc::S_IFREG as u32;
|
||||
let inode1_arc = Arc::new(inode1);
|
||||
sb.hash_inode(inode1_arc.clone()).unwrap();
|
||||
|
||||
assert_eq!(sb.get_max_ino(), 10);
|
||||
assert!(sb.get_inode(10, false).is_ok());
|
||||
assert!(sb.get_extended_inode(10, false).is_ok());
|
||||
|
||||
// Test adding hardlink with data (should not replace existing)
|
||||
let mut hardlink = CachedInodeV5::new(sb.s_blob.clone(), sb.s_meta.clone());
|
||||
hardlink.i_ino = 10; // Same inode number
|
||||
hardlink.i_nlink = 2; // Hardlink
|
||||
hardlink.i_mode = libc::S_IFREG as u32;
|
||||
hardlink.i_data = vec![Arc::new(CachedChunkInfoV5::new())]; // Has data
|
||||
|
||||
let hardlink_arc = Arc::new(hardlink);
|
||||
let _result = sb.hash_inode(hardlink_arc.clone()).unwrap();
|
||||
|
||||
// Since original inode has no data, the hardlink with data should replace it
|
||||
let stored_inode = sb.get_inode(10, false).unwrap();
|
||||
assert_eq!(
|
||||
stored_inode
|
||||
.as_any()
|
||||
.downcast_ref::<CachedInodeV5>()
|
||||
.unwrap()
|
||||
.i_data
|
||||
.len(),
|
||||
1
|
||||
);
|
||||
|
||||
// Test root inode
|
||||
assert_eq!(sb.root_ino(), RAFS_V5_ROOT_INODE);
|
||||
|
||||
// Test destroy
|
||||
sb.destroy();
|
||||
assert_eq!(sb.s_inodes.len(), 0);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_superblock_v5_blob_operations() {
|
||||
let md = RafsSuperMeta::default();
|
||||
let sb = CachedSuperBlockV5::new(md, false);
|
||||
|
||||
// Test get_blob_infos with empty blob table
|
||||
let blobs = sb.get_blob_infos();
|
||||
assert!(blobs.is_empty());
|
||||
|
||||
// Note: get_chunk_info() and set_blob_device() both panic with
|
||||
// "not implemented: used by RAFS v6 only" so we can't test them directly
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_superblock_v5_hardlink_handling() {
|
||||
let md = RafsSuperMeta::default();
|
||||
let mut sb = CachedSuperBlockV5::new(md, false);
|
||||
|
||||
// Add inode without data
|
||||
let mut inode1 = CachedInodeV5::new(sb.s_blob.clone(), sb.s_meta.clone());
|
||||
inode1.i_ino = 5;
|
||||
inode1.i_nlink = 1;
|
||||
inode1.i_mode = libc::S_IFREG as u32;
|
||||
sb.hash_inode(Arc::new(inode1)).unwrap();
|
||||
|
||||
// Add hardlink with same inode number but no data - should replace
|
||||
let mut hardlink = CachedInodeV5::new(sb.s_blob.clone(), sb.s_meta.clone());
|
||||
hardlink.i_ino = 5;
|
||||
hardlink.i_nlink = 2;
|
||||
hardlink.i_mode = libc::S_IFREG as u32;
|
||||
hardlink.i_data = vec![]; // No data
|
||||
|
||||
sb.hash_inode(Arc::new(hardlink)).unwrap();
|
||||
|
||||
// Should have replaced the original
|
||||
let stored = sb.get_inode(5, false).unwrap();
|
||||
assert_eq!(
|
||||
stored
|
||||
.as_any()
|
||||
.downcast_ref::<CachedInodeV5>()
|
||||
.unwrap()
|
||||
.i_nlink,
|
||||
2
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_from_rafs_v5_chunk_info() {
|
||||
let mut ondisk_chunk = RafsV5ChunkInfo::new();
|
||||
ondisk_chunk.block_id = RafsDigest::from_buf("test".as_bytes(), Algorithm::Blake3);
|
||||
ondisk_chunk.blob_index = 1;
|
||||
ondisk_chunk.index = 42;
|
||||
ondisk_chunk.file_offset = 1024;
|
||||
ondisk_chunk.compressed_offset = 512;
|
||||
ondisk_chunk.uncompressed_offset = 2048;
|
||||
ondisk_chunk.compressed_size = 256;
|
||||
ondisk_chunk.uncompressed_size = 512;
|
||||
ondisk_chunk.flags = BlobChunkFlags::COMPRESSED;
|
||||
|
||||
let cached_chunk = CachedChunkInfoV5::from(&ondisk_chunk);
|
||||
|
||||
assert_eq!(cached_chunk.blob_index(), 1);
|
||||
assert_eq!(cached_chunk.index(), 42);
|
||||
assert_eq!(cached_chunk.file_offset(), 1024);
|
||||
assert_eq!(cached_chunk.compressed_offset(), 512);
|
||||
assert_eq!(cached_chunk.uncompressed_offset(), 2048);
|
||||
assert_eq!(cached_chunk.compressed_size(), 256);
|
||||
assert_eq!(cached_chunk.uncompressed_size(), 512);
|
||||
assert_eq!(cached_chunk.flags(), BlobChunkFlags::COMPRESSED);
|
||||
assert!(cached_chunk.is_compressed());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_accessor_methods() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Set test values
|
||||
inode.i_ino = 42;
|
||||
inode.i_size = 8192;
|
||||
|
||||
inode.i_rdev = 0x0801; // Example device number
|
||||
inode.i_projid = 1000;
|
||||
inode.i_parent = 1;
|
||||
inode.i_name = OsString::from("test_file");
|
||||
inode.i_flags = RafsInodeFlags::XATTR;
|
||||
inode.i_digest = RafsDigest::from_buf("test".as_bytes(), Algorithm::Blake3);
|
||||
inode.i_child_idx = 10;
|
||||
|
||||
// Test basic getters
|
||||
assert_eq!(inode.ino(), 42);
|
||||
assert_eq!(inode.size(), 8192);
|
||||
assert_eq!(inode.rdev(), 0x0801);
|
||||
assert_eq!(inode.projid(), 1000);
|
||||
assert_eq!(inode.parent(), 1);
|
||||
assert_eq!(inode.name(), OsString::from("test_file"));
|
||||
assert_eq!(inode.get_name_size(), "test_file".len() as u16);
|
||||
assert_eq!(inode.flags(), RafsInodeFlags::XATTR.bits());
|
||||
assert_eq!(inode.get_digest(), inode.i_digest);
|
||||
assert_eq!(inode.get_child_index().unwrap(), 10);
|
||||
|
||||
// Test as_inode
|
||||
let as_inode = inode.as_inode();
|
||||
assert_eq!(as_inode.ino(), 42);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_edge_cases() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Test very large inode number
|
||||
inode.i_ino = u64::MAX;
|
||||
assert_eq!(inode.ino(), u64::MAX);
|
||||
|
||||
// Test edge case file modes
|
||||
inode.i_mode = 0o777 | libc::S_IFREG as u32;
|
||||
assert!(inode.is_reg());
|
||||
assert_eq!(inode.i_mode & 0o777, 0o777);
|
||||
|
||||
// Test empty symlink target (should be invalid but we test getter)
|
||||
inode.i_mode = libc::S_IFLNK as u32;
|
||||
inode.i_target = OsString::new();
|
||||
assert_eq!(inode.get_symlink_size(), 0);
|
||||
|
||||
// Test maximum name length
|
||||
let max_name = "a".repeat(RAFS_MAX_NAME);
|
||||
inode.i_name = OsString::from(max_name.clone());
|
||||
assert_eq!(inode.name(), OsString::from(max_name));
|
||||
assert_eq!(inode.get_name_size(), RAFS_MAX_NAME as u16);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_zero_values() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Test all zero/default values
|
||||
assert_eq!(inode.ino(), 0);
|
||||
assert_eq!(inode.size(), 0);
|
||||
assert_eq!(inode.rdev(), 0);
|
||||
assert_eq!(inode.projid(), 0);
|
||||
assert_eq!(inode.parent(), 0);
|
||||
assert_eq!(inode.flags(), 0);
|
||||
assert_eq!(inode.get_name_size(), 0);
|
||||
assert!(!inode.has_xattr());
|
||||
assert!(!inode.is_hardlink());
|
||||
|
||||
// Test get_child operations on empty inode
|
||||
assert_eq!(inode.get_child_count(), 0);
|
||||
assert!(inode.get_child_by_index(0).is_err());
|
||||
assert!(inode.get_child_by_name(OsStr::new("test")).is_err());
|
||||
|
||||
// Test chunk operations on empty inode
|
||||
assert_eq!(inode.i_data.len(), 0);
|
||||
assert!(inode.get_chunk_info(0).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_chunk_info_v5_boundary_values() {
|
||||
let mut info = CachedChunkInfoV5::new();
|
||||
|
||||
// Test maximum values
|
||||
info.blob_index = u32::MAX;
|
||||
info.index = u32::MAX;
|
||||
info.file_offset = u64::MAX;
|
||||
info.compressed_offset = u64::MAX;
|
||||
info.uncompressed_offset = u64::MAX;
|
||||
info.compressed_size = u32::MAX;
|
||||
info.uncompressed_size = u32::MAX;
|
||||
info.crc32 = u32::MAX;
|
||||
|
||||
assert_eq!(info.blob_index(), u32::MAX);
|
||||
assert_eq!(info.index(), u32::MAX);
|
||||
assert_eq!(info.file_offset(), u64::MAX);
|
||||
assert_eq!(info.compressed_offset(), u64::MAX);
|
||||
assert_eq!(info.uncompressed_offset(), u64::MAX);
|
||||
assert_eq!(info.compressed_size(), u32::MAX);
|
||||
assert_eq!(info.uncompressed_size(), u32::MAX);
|
||||
|
||||
// Test zero values
|
||||
info.blob_index = 0;
|
||||
info.index = 0;
|
||||
info.file_offset = 0;
|
||||
info.compressed_offset = 0;
|
||||
info.uncompressed_offset = 0;
|
||||
info.compressed_size = 0;
|
||||
info.uncompressed_size = 0;
|
||||
info.crc32 = 0;
|
||||
|
||||
assert_eq!(info.blob_index(), 0);
|
||||
assert_eq!(info.index(), 0);
|
||||
assert_eq!(info.file_offset(), 0);
|
||||
assert_eq!(info.compressed_offset(), 0);
|
||||
assert_eq!(info.uncompressed_offset(), 0);
|
||||
assert_eq!(info.compressed_size(), 0);
|
||||
assert_eq!(info.uncompressed_size(), 0);
|
||||
assert_eq!(info.crc32(), 0);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_special_names() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
let mut inode = CachedInodeV5::new(blob_table, meta);
|
||||
|
||||
// Test special characters in names
|
||||
let special_names = vec![
|
||||
".",
|
||||
"..",
|
||||
"file with spaces",
|
||||
"file\twith\ttabs",
|
||||
"file\nwith\nnewlines",
|
||||
"file-with-dashes",
|
||||
"file_with_underscores",
|
||||
"file.with.dots",
|
||||
"UPPERCASE_FILE",
|
||||
"MiXeD_cAsE_fIlE",
|
||||
"123456789",
|
||||
"中文文件名", // Chinese characters
|
||||
"файл", // Cyrillic
|
||||
"🦀🦀🦀", // Emojis
|
||||
];
|
||||
|
||||
for name in special_names {
|
||||
inode.i_name = OsString::from(name);
|
||||
assert_eq!(inode.name(), OsString::from(name));
|
||||
assert_eq!(inode.get_name_size(), name.len() as u16);
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_superblock_v5_edge_cases() {
|
||||
let md = RafsSuperMeta::default();
|
||||
let mut sb = CachedSuperBlockV5::new(md, false);
|
||||
|
||||
// Test with validation enabled
|
||||
let md_validated = RafsSuperMeta::default();
|
||||
let sb_validated = CachedSuperBlockV5::new(md_validated, true);
|
||||
assert!(sb_validated.validate_inode);
|
||||
|
||||
// Test maximum inode number
|
||||
let mut inode = CachedInodeV5::new(sb.s_blob.clone(), sb.s_meta.clone());
|
||||
inode.i_ino = u64::MAX;
|
||||
inode.i_nlink = 1;
|
||||
inode.i_mode = libc::S_IFREG as u32;
|
||||
inode.i_name = OsString::from("max_inode");
|
||||
|
||||
sb.hash_inode(Arc::new(inode)).unwrap();
|
||||
assert_eq!(sb.get_max_ino(), u64::MAX);
|
||||
|
||||
// Test getting non-existent inode
|
||||
assert!(sb.get_inode(u64::MAX - 1, false).is_err());
|
||||
assert!(sb.get_extended_inode(u64::MAX - 1, false).is_err());
|
||||
|
||||
// Test blob operations
|
||||
let blob_infos = sb.get_blob_infos();
|
||||
assert!(blob_infos.is_empty());
|
||||
|
||||
let blob_extra_infos = sb.get_blob_extra_infos().unwrap();
|
||||
assert!(blob_extra_infos.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_cached_inode_v5_complex_directory_structure() {
|
||||
let meta = Arc::new(RafsSuperMeta::default());
|
||||
let blob_table = Arc::new(RafsV5BlobTable::new());
|
||||
|
||||
// Create a complex directory with many children
|
||||
let mut root_dir = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
root_dir.i_ino = 1;
|
||||
root_dir.i_mode = libc::S_IFDIR as u32;
|
||||
root_dir.i_name = OsString::from("root");
|
||||
|
||||
// Add many children with different names to test sorting
|
||||
let child_names = [
|
||||
"zzz_last",
|
||||
"aaa_first",
|
||||
"mmm_middle",
|
||||
"000_numeric",
|
||||
"ZZZ_upper",
|
||||
"___underscore",
|
||||
"...dots",
|
||||
"111_mixed",
|
||||
"yyy_second_last",
|
||||
"bbb_second",
|
||||
];
|
||||
|
||||
// Set the correct child count for sorting to trigger
|
||||
root_dir.i_child_cnt = child_names.len() as u32;
|
||||
|
||||
for (i, name) in child_names.iter().enumerate() {
|
||||
let mut child = CachedInodeV5::new(blob_table.clone(), meta.clone());
|
||||
child.i_ino = i as u64 + 2;
|
||||
child.i_name = OsString::from(*name);
|
||||
child.i_mode = if i % 2 == 0 {
|
||||
libc::S_IFREG as u32
|
||||
} else {
|
||||
libc::S_IFDIR as u32
|
||||
};
|
||||
root_dir.add_child(Arc::new(child));
|
||||
}
|
||||
|
||||
// Verify children are sorted by name (after all children are added)
|
||||
assert_eq!(root_dir.i_child.len(), child_names.len());
|
||||
for i in 1..root_dir.i_child.len() {
|
||||
let prev_name = &root_dir.i_child[i - 1].i_name;
|
||||
let curr_name = &root_dir.i_child[i].i_name;
|
||||
assert!(
|
||||
prev_name <= curr_name,
|
||||
"Children not sorted: {:?} > {:?}",
|
||||
prev_name,
|
||||
curr_name
|
||||
);
|
||||
}
|
||||
|
||||
// Test walking all children
|
||||
let mut visited_count = 0;
|
||||
root_dir
|
||||
.walk_children_inodes(0, &mut |_node, _name, _ino, _offset| {
|
||||
visited_count += 1;
|
||||
Ok(RafsInodeWalkAction::Continue)
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
// Should visit ".", "..", and all children
|
||||
assert_eq!(visited_count, 2 + child_names.len());
|
||||
|
||||
// Test collecting descendants
|
||||
let mut descendants = Vec::new();
|
||||
root_dir
|
||||
.collect_descendants_inodes(&mut descendants)
|
||||
.unwrap();
|
||||
// Only regular files with size > 0 are collected, so should be empty
|
||||
assert!(descendants.is_empty());
|
||||
}
|
||||
}
|
||||
|
|
|
@ -46,8 +46,7 @@ impl Debug for ChunkWrapper {
|
|||
|
||||
impl Display for ChunkWrapper {
|
||||
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
|
||||
write!(
|
||||
f,
|
||||
let base_format = format!(
|
||||
"id {}, index {}, blob_index {}, file_offset {}, compressed {}/{}, uncompressed {}/{}",
|
||||
self.id(),
|
||||
self.index(),
|
||||
|
@ -57,7 +56,14 @@ impl Display for ChunkWrapper {
|
|||
self.compressed_size(),
|
||||
self.uncompressed_offset(),
|
||||
self.uncompressed_size(),
|
||||
)
|
||||
);
|
||||
|
||||
let full_format = if self.has_crc32() {
|
||||
format!("{}, crc32 {:#x}", base_format, self.crc32())
|
||||
} else {
|
||||
base_format
|
||||
};
|
||||
write!(f, "{}", full_format)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -251,11 +257,11 @@ impl ChunkWrapper {
|
|||
/// Check whether the chunk is encrypted or not.
|
||||
pub fn is_encrypted(&self) -> bool {
|
||||
match self {
|
||||
ChunkWrapper::V5(c) => c.flags.contains(BlobChunkFlags::ENCYPTED),
|
||||
ChunkWrapper::V6(c) => c.flags.contains(BlobChunkFlags::ENCYPTED),
|
||||
ChunkWrapper::V5(c) => c.flags.contains(BlobChunkFlags::ENCRYPTED),
|
||||
ChunkWrapper::V6(c) => c.flags.contains(BlobChunkFlags::ENCRYPTED),
|
||||
ChunkWrapper::Ref(c) => as_blob_v5_chunk_info(c.deref())
|
||||
.flags()
|
||||
.contains(BlobChunkFlags::ENCYPTED),
|
||||
.contains(BlobChunkFlags::ENCRYPTED),
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -263,8 +269,8 @@ impl ChunkWrapper {
|
|||
pub fn set_encrypted(&mut self, encrypted: bool) {
|
||||
self.ensure_owned();
|
||||
match self {
|
||||
ChunkWrapper::V5(c) => c.flags.set(BlobChunkFlags::ENCYPTED, encrypted),
|
||||
ChunkWrapper::V6(c) => c.flags.set(BlobChunkFlags::ENCYPTED, encrypted),
|
||||
ChunkWrapper::V5(c) => c.flags.set(BlobChunkFlags::ENCRYPTED, encrypted),
|
||||
ChunkWrapper::V6(c) => c.flags.set(BlobChunkFlags::ENCRYPTED, encrypted),
|
||||
ChunkWrapper::Ref(_c) => panic!("unexpected"),
|
||||
}
|
||||
}
|
||||
|
@ -290,6 +296,46 @@ impl ChunkWrapper {
|
|||
}
|
||||
}
|
||||
|
||||
/// Set crc32 of chunk data.
|
||||
pub fn set_crc32(&mut self, crc32: u32) {
|
||||
self.ensure_owned();
|
||||
match self {
|
||||
ChunkWrapper::V5(c) => c.crc32 = crc32,
|
||||
ChunkWrapper::V6(c) => c.crc32 = crc32,
|
||||
ChunkWrapper::Ref(_c) => panic!("unexpected"),
|
||||
}
|
||||
}
|
||||
|
||||
/// Get crc32 of chunk data.
|
||||
pub fn crc32(&self) -> u32 {
|
||||
match self {
|
||||
ChunkWrapper::V5(c) => c.crc32,
|
||||
ChunkWrapper::V6(c) => c.crc32,
|
||||
ChunkWrapper::Ref(c) => as_blob_v5_chunk_info(c.deref()).crc32(),
|
||||
}
|
||||
}
|
||||
|
||||
/// Check whether the chunk has CRC or not.
|
||||
pub fn has_crc32(&self) -> bool {
|
||||
match self {
|
||||
ChunkWrapper::V5(c) => c.flags.contains(BlobChunkFlags::HAS_CRC32),
|
||||
ChunkWrapper::V6(c) => c.flags.contains(BlobChunkFlags::HAS_CRC32),
|
||||
ChunkWrapper::Ref(c) => as_blob_v5_chunk_info(c.deref())
|
||||
.flags()
|
||||
.contains(BlobChunkFlags::HAS_CRC32),
|
||||
}
|
||||
}
|
||||
|
||||
/// Set flag for whether chunk has CRC.
|
||||
pub fn set_has_crc32(&mut self, has_crc: bool) {
|
||||
self.ensure_owned();
|
||||
match self {
|
||||
ChunkWrapper::V5(c) => c.flags.set(BlobChunkFlags::HAS_CRC32, has_crc),
|
||||
ChunkWrapper::V6(c) => c.flags.set(BlobChunkFlags::HAS_CRC32, has_crc),
|
||||
ChunkWrapper::Ref(_c) => panic!("unexpected"),
|
||||
}
|
||||
}
|
||||
|
||||
#[allow(clippy::too_many_arguments)]
|
||||
/// Set a group of chunk information fields.
|
||||
pub fn set_chunk_info(
|
||||
|
@ -303,6 +349,7 @@ impl ChunkWrapper {
|
|||
compressed_size: u32,
|
||||
is_compressed: bool,
|
||||
is_encrypted: bool,
|
||||
has_crc32: bool,
|
||||
) -> Result<()> {
|
||||
self.ensure_owned();
|
||||
match self {
|
||||
|
@ -317,6 +364,9 @@ impl ChunkWrapper {
|
|||
if is_compressed {
|
||||
c.flags |= BlobChunkFlags::COMPRESSED;
|
||||
}
|
||||
if has_crc32 {
|
||||
c.flags |= BlobChunkFlags::HAS_CRC32;
|
||||
}
|
||||
}
|
||||
ChunkWrapper::V6(c) => {
|
||||
c.index = chunk_index;
|
||||
|
@ -330,7 +380,10 @@ impl ChunkWrapper {
|
|||
c.flags |= BlobChunkFlags::COMPRESSED;
|
||||
}
|
||||
if is_encrypted {
|
||||
c.flags |= BlobChunkFlags::ENCYPTED;
|
||||
c.flags |= BlobChunkFlags::ENCRYPTED;
|
||||
}
|
||||
if has_crc32 {
|
||||
c.flags |= BlobChunkFlags::HAS_CRC32;
|
||||
}
|
||||
}
|
||||
ChunkWrapper::Ref(_c) => panic!("unexpected"),
|
||||
|
@ -418,7 +471,7 @@ fn to_rafs_v5_chunk_info(cki: &dyn BlobV5ChunkInfo) -> RafsV5ChunkInfo {
|
|||
uncompressed_offset: cki.uncompressed_offset(),
|
||||
file_offset: cki.file_offset(),
|
||||
index: cki.index(),
|
||||
reserved: 0u32,
|
||||
crc32: cki.crc32(),
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -451,7 +504,7 @@ mod tests {
|
|||
wrapper.set_batch(true);
|
||||
assert!(wrapper.is_batch());
|
||||
wrapper
|
||||
.set_chunk_info(2048, 2048, 2048, 2048, 2048, 2048, 2048, true, true)
|
||||
.set_chunk_info(2048, 2048, 2048, 2048, 2048, 2048, 2048, true, true, true)
|
||||
.unwrap();
|
||||
assert_eq!(wrapper.blob_index(), 2048);
|
||||
assert_eq!(wrapper.compressed_offset(), 2048);
|
||||
|
@ -567,7 +620,7 @@ mod tests {
|
|||
fn test_chunk_wrapper_ref_set_chunk_info() {
|
||||
let mut wrapper = ChunkWrapper::Ref(Arc::new(MockChunkInfo::default()));
|
||||
wrapper
|
||||
.set_chunk_info(2048, 2048, 2048, 2048, 2048, 2048, 2048, true, true)
|
||||
.set_chunk_info(2048, 2048, 2048, 2048, 2048, 2048, 2048, true, true, true)
|
||||
.unwrap();
|
||||
}
|
||||
|
||||
|
@ -635,4 +688,13 @@ mod tests {
|
|||
let wrapper_v5 = ChunkWrapper::Ref(Arc::new(CachedChunkInfoV5::default()));
|
||||
test_copy_from(wrapper_v5, wrapper_ref);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_fmt() {
|
||||
let wrapper_v5 = ChunkWrapper::Ref(Arc::new(CachedChunkInfoV5::default()));
|
||||
assert_eq!(
|
||||
format!("{:?}", wrapper_v5),
|
||||
"RafsV5ChunkInfo { block_id: RafsDigest { data: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] }, blob_index: 0, flags: (empty), compressed_size: 0, uncompressed_size: 0, compressed_offset: 0, uncompressed_offset: 0, file_offset: 0, index: 0, crc32: 0 }"
|
||||
);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -412,7 +412,7 @@ impl OndiskInodeWrapper {
|
|||
|
||||
impl RafsInode for OndiskInodeWrapper {
|
||||
// Somehow we got invalid `inode_count` from superblock.
|
||||
fn validate(&self, _inode_count: u64, chunk_size: u64) -> Result<()> {
|
||||
fn validate(&self, _inode_count: u64, _chunk_size: u64) -> Result<()> {
|
||||
let state = self.state();
|
||||
let inode = state.file_map.get_ref::<RafsV5Inode>(self.offset)?;
|
||||
let max_inode = state.inode_table.len() as u64;
|
||||
|
@ -452,13 +452,13 @@ impl RafsInode for OndiskInodeWrapper {
|
|||
// chunk-dict doesn't support chunk_count check
|
||||
return Err(std::io::Error::from_raw_os_error(libc::EOPNOTSUPP));
|
||||
}
|
||||
let chunks = inode.i_size.div_ceil(chunk_size);
|
||||
if !inode.has_hole() && chunks != inode.i_child_count as u64 {
|
||||
return Err(einval!(format!(
|
||||
"invalid chunk count, ino {}, expected {}, actual {}",
|
||||
inode.i_ino, chunks, inode.i_child_count,
|
||||
)));
|
||||
}
|
||||
// let chunks = inode.i_size.div_ceil(chunk_size);
|
||||
// if !inode.has_hole() && chunks != inode.i_child_count as u64 {
|
||||
// return Err(einval!(format!(
|
||||
// "invalid chunk count, ino {}, expected {}, actual {}",
|
||||
// inode.i_ino, chunks, inode.i_child_count,
|
||||
// )));
|
||||
// }
|
||||
let size = inode.size()
|
||||
+ xattr_size
|
||||
+ inode.i_child_count as usize * size_of::<RafsV5ChunkInfo>();
|
||||
|
@ -854,6 +854,20 @@ impl BlobChunkInfo for DirectChunkInfoV5 {
|
|||
false
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
self.chunk(self.state().deref())
|
||||
.flags
|
||||
.contains(BlobChunkFlags::HAS_CRC32)
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
if self.has_crc32() {
|
||||
self.chunk(self.state().deref()).crc32
|
||||
} else {
|
||||
0
|
||||
}
|
||||
}
|
||||
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
|
|
@ -1458,9 +1458,19 @@ impl BlobChunkInfo for DirectChunkInfoV6 {
|
|||
let state = self.state();
|
||||
self.v5_chunk(&state)
|
||||
.flags
|
||||
.contains(BlobChunkFlags::ENCYPTED)
|
||||
.contains(BlobChunkFlags::ENCRYPTED)
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
let state = self.state();
|
||||
self.v5_chunk(&state)
|
||||
.flags
|
||||
.contains(BlobChunkFlags::HAS_CRC32)
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
self.v5_chunk(&self.state()).crc32
|
||||
}
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
@ -1552,6 +1562,14 @@ impl BlobChunkInfo for TarfsChunkInfoV6 {
|
|||
false
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
false
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
0
|
||||
}
|
||||
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
|
|
@ -1114,8 +1114,8 @@ pub struct RafsV5ChunkInfo {
|
|||
pub file_offset: u64, // 72
|
||||
/// chunk index, it's allocated sequentially and starting from 0 for one blob.
|
||||
pub index: u32,
|
||||
/// reserved
|
||||
pub reserved: u32, //80
|
||||
/// crc32 of the chunk
|
||||
pub crc32: u32, //80
|
||||
}
|
||||
|
||||
impl RafsV5ChunkInfo {
|
||||
|
@ -1143,7 +1143,7 @@ impl Display for RafsV5ChunkInfo {
|
|||
fn fmt(&self, f: &mut Formatter<'_>) -> FmtResult {
|
||||
write!(
|
||||
f,
|
||||
"file_offset {}, compress_offset {}, compress_size {}, uncompress_offset {}, uncompress_size {}, blob_index {}, block_id {}, index {}, is_compressed {}",
|
||||
"file_offset {}, compress_offset {}, compress_size {}, uncompress_offset {}, uncompress_size {}, blob_index {}, block_id {}, index {}, is_compressed {}, crc32 {}",
|
||||
self.file_offset,
|
||||
self.compressed_offset,
|
||||
self.compressed_size,
|
||||
|
@ -1153,6 +1153,7 @@ impl Display for RafsV5ChunkInfo {
|
|||
self.block_id,
|
||||
self.index,
|
||||
self.flags.contains(BlobChunkFlags::COMPRESSED),
|
||||
self.crc32,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
@ -1335,6 +1336,7 @@ fn add_chunk_to_bio_desc(
|
|||
compressed_size: chunk.compressed_size(),
|
||||
uncompressed_size: chunk.uncompressed_size(),
|
||||
flags: chunk.flags(),
|
||||
crc32: chunk.crc32(),
|
||||
}) as Arc<dyn BlobChunkInfo>;
|
||||
let bio = BlobIoDesc::new(
|
||||
blob,
|
||||
|
@ -1610,8 +1612,7 @@ pub mod tests {
|
|||
pub uncompress_offset: u64,
|
||||
pub file_offset: u64,
|
||||
pub index: u32,
|
||||
#[allow(unused)]
|
||||
pub reserved: u32,
|
||||
pub crc32: u32,
|
||||
}
|
||||
|
||||
impl MockChunkInfo {
|
||||
|
@ -1641,6 +1642,18 @@ pub mod tests {
|
|||
false
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
self.flags.contains(BlobChunkFlags::HAS_CRC32)
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
if self.has_crc32() {
|
||||
self.crc32
|
||||
} else {
|
||||
0
|
||||
}
|
||||
}
|
||||
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
|
|
@ -226,6 +226,7 @@ pub struct V5IoChunk {
|
|||
pub compressed_size: u32,
|
||||
pub uncompressed_size: u32,
|
||||
pub flags: BlobChunkFlags,
|
||||
pub crc32: u32,
|
||||
}
|
||||
|
||||
impl BlobChunkInfo for V5IoChunk {
|
||||
|
@ -249,6 +250,18 @@ impl BlobChunkInfo for V5IoChunk {
|
|||
false
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
self.flags.contains(BlobChunkFlags::HAS_CRC32)
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
if self.has_crc32() {
|
||||
self.crc32
|
||||
} else {
|
||||
0
|
||||
}
|
||||
}
|
||||
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
@ -275,6 +288,7 @@ mod tests {
|
|||
compressed_size: 10,
|
||||
uncompressed_size: 20,
|
||||
flags: BlobChunkFlags::BATCH,
|
||||
crc32: 0,
|
||||
};
|
||||
|
||||
assert_eq!(info.chunk_id(), &RafsDigest::default());
|
||||
|
|
|
@ -28,6 +28,7 @@ pub struct MockChunkInfo {
|
|||
c_compr_size: u32,
|
||||
c_decompress_size: u32,
|
||||
c_flags: BlobChunkFlags,
|
||||
crc32: u32,
|
||||
}
|
||||
|
||||
impl MockChunkInfo {
|
||||
|
@ -70,6 +71,18 @@ impl BlobChunkInfo for MockChunkInfo {
|
|||
false
|
||||
}
|
||||
|
||||
fn has_crc32(&self) -> bool {
|
||||
self.c_flags.contains(BlobChunkFlags::HAS_CRC32)
|
||||
}
|
||||
|
||||
fn crc32(&self) -> u32 {
|
||||
if self.has_crc32() {
|
||||
self.crc32
|
||||
} else {
|
||||
0
|
||||
}
|
||||
}
|
||||
|
||||
fn as_any(&self) -> &dyn Any {
|
||||
self
|
||||
}
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
[package]
|
||||
name = "nydus-service"
|
||||
version = "0.3.0"
|
||||
version = "0.4.0"
|
||||
description = "Nydus Image Service Manager"
|
||||
authors = ["The Nydus Developers"]
|
||||
license = "Apache-2.0"
|
||||
|
@ -26,11 +26,11 @@ tokio = { version = "1.24", features = ["macros"] }
|
|||
versionize_derive = "0.1.6"
|
||||
versionize = "0.2.0"
|
||||
|
||||
nydus-api = { version = "0.3.0", path = "../api" }
|
||||
nydus-rafs = { version = "0.3.1", path = "../rafs" }
|
||||
nydus-storage = { version = "0.6.3", path = "../storage" }
|
||||
nydus-upgrade = { version = "0.1.0", path = "../upgrade" }
|
||||
nydus-utils = { version = "0.4.2", path = "../utils" }
|
||||
nydus-api = { version = "0.4.0", path = "../api" }
|
||||
nydus-rafs = { version = "0.4.0", path = "../rafs" }
|
||||
nydus-storage = { version = "0.7.0", path = "../storage" }
|
||||
nydus-upgrade = { version = "0.2.0", path = "../upgrade" }
|
||||
nydus-utils = { version = "0.5.0", path = "../utils" }
|
||||
|
||||
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
|
||||
vhost-user-backend = { version = "0.15.0", optional = true }
|
||||
|
@ -46,6 +46,9 @@ tokio-uring = "0.4"
|
|||
[dev-dependencies]
|
||||
vmm-sys-util = "0.12.1"
|
||||
|
||||
[target.'cfg(target_os = "linux")'.dev-dependencies]
|
||||
procfs = "0.17.0"
|
||||
|
||||
[features]
|
||||
default = ["fuse-backend-rs/fusedev"]
|
||||
virtiofs = [
|
||||
|
|
|
@ -370,6 +370,7 @@ mod tests {
|
|||
|
||||
use super::*;
|
||||
use mio::{Poll, Token};
|
||||
use procfs::sys::kernel::Version;
|
||||
use vmm_sys_util::tempdir::TempDir;
|
||||
|
||||
fn create_service_controller() -> ServiceController {
|
||||
|
@ -416,6 +417,24 @@ mod tests {
|
|||
.initialize_fscache_service(None, 1, p.to_str().unwrap(), None)
|
||||
.is_err());
|
||||
|
||||
// skip test if user is not root
|
||||
if !nix::unistd::Uid::effective().is_root() {
|
||||
println!("Skip test_initialize_fscache_service, not root");
|
||||
return;
|
||||
}
|
||||
|
||||
// skip test if kernel is older than 5.19
|
||||
if Version::current().unwrap() < Version::from_str("5.19.0").unwrap() {
|
||||
println!("Skip test_initialize_fscache_service, kernel version is older than 5.19");
|
||||
return;
|
||||
}
|
||||
|
||||
// skip test if /dev/cachefiles does not exist
|
||||
if !std::path::Path::new("/dev/cachefiles").exists() {
|
||||
println!("Skip test_initialize_fscache_service, /dev/cachefiles does not exist");
|
||||
return;
|
||||
}
|
||||
|
||||
let tmp_dir = TempDir::new().unwrap();
|
||||
let dir = tmp_dir.as_path().to_str().unwrap();
|
||||
assert!(service_controller
|
||||
|
|
|
@ -16,7 +16,7 @@ build:
|
|||
# SKIP_CASES=compressor=lz4_block,fs_version=5 \
|
||||
# make test
|
||||
test: build
|
||||
golangci-lint run
|
||||
golangci-lint run --timeout=5m
|
||||
sudo -E ./smoke.test -test.v -test.timeout 10m -test.parallel=16 -test.run=$(TESTS)
|
||||
|
||||
# PERFORMANCE_TEST_MODE=fs-version-5 \
|
||||
|
|
|
@ -1,11 +1,10 @@
|
|||
module github.com/dragonflyoss/nydus/smoke
|
||||
|
||||
go 1.21
|
||||
go 1.23.1
|
||||
|
||||
require (
|
||||
github.com/containerd/containerd v1.7.11
|
||||
github.com/containerd/log v0.1.0
|
||||
github.com/containerd/nydus-snapshotter v0.13.4
|
||||
github.com/google/uuid v1.5.0
|
||||
github.com/opencontainers/go-digest v1.0.0
|
||||
github.com/pkg/errors v0.9.1
|
||||
|
@ -35,6 +34,8 @@ require (
|
|||
github.com/pmezard/go-difflib v1.0.0 // indirect
|
||||
github.com/rogpeppe/go-internal v1.12.0 // indirect
|
||||
github.com/sirupsen/logrus v1.9.3 // indirect
|
||||
github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
|
||||
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
|
||||
go.opencensus.io v0.24.0 // indirect
|
||||
golang.org/x/mod v0.14.0 // indirect
|
||||
golang.org/x/sync v0.5.0 // indirect
|
||||
|
|
|
@ -17,8 +17,6 @@ github.com/containerd/fifo v1.1.0 h1:4I2mbh5stb1u6ycIABlBw9zgtlK8viPI9QkQNRQEEmY
|
|||
github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
|
||||
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
|
||||
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
|
||||
github.com/containerd/nydus-snapshotter v0.13.4 h1:veTQCgpfRGdPD031dVNGlU+vK/W9vBhZNlMWR9oupiQ=
|
||||
github.com/containerd/nydus-snapshotter v0.13.4/go.mod h1:y41TM10lXhskfHHvge7kf1VucM4CeWwsCmQ5Q51UJrc=
|
||||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
|
@ -58,6 +56,8 @@ github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN
|
|||
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.5.0 h1:1p67kYwdtXjb0gL0BPiP1Av9wiZPo5A8z2cWkTZ+eyU=
|
||||
github.com/google/uuid v1.5.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/imeoer/nydus-snapshotter v0.13.13 h1:u6hdoOt+Ja37NvcGcFQACNzsgM0FdgxadFrPp0WXUZ0=
|
||||
github.com/imeoer/nydus-snapshotter v0.13.13/go.mod h1:Hu/2wL52TBGWuKCJVBSF2url95C6YzmQ75V9+B4PKR4=
|
||||
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4=
|
||||
|
@ -101,6 +101,10 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
|
|||
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
|
||||
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
|
||||
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
|
||||
github.com/vmihailenco/msgpack/v5 v5.4.1 h1:cQriyiUvjTwOHg8QZaPihLWeRAAVoCpE00IUPn0Bjt8=
|
||||
github.com/vmihailenco/msgpack/v5 v5.4.1/go.mod h1:GaZTsDaehaPpQVyxrf5mtQlH+pc21PIudVV/E3rRQok=
|
||||
github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g=
|
||||
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds=
|
||||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
|
||||
|
|
|
@ -0,0 +1,99 @@
|
|||
// HTTP tunneling proxy server implementation
|
||||
// Usage: go run ./smoke/proxy
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net"
|
||||
"net/http"
|
||||
"time"
|
||||
)
|
||||
|
||||
func main() {
|
||||
server := &http.Server{
|
||||
Addr: ":4001",
|
||||
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
fmt.Printf("Handling: %s\n", r.URL.String())
|
||||
|
||||
if r.Method == http.MethodConnect {
|
||||
httpsProxy(w, r)
|
||||
} else {
|
||||
httpProxy(w, r)
|
||||
}
|
||||
}),
|
||||
}
|
||||
|
||||
log.Println("Starting proxy server on :4001")
|
||||
if err := server.ListenAndServe(); err != nil {
|
||||
log.Fatal("Server error:", err)
|
||||
}
|
||||
}
|
||||
|
||||
func httpsProxy(w http.ResponseWriter, r *http.Request) {
|
||||
destAddr := r.URL.Host
|
||||
|
||||
fmt.Println("Tunneling established", destAddr)
|
||||
|
||||
destConn, err := net.DialTimeout("tcp", destAddr, 10*time.Second)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusServiceUnavailable)
|
||||
return
|
||||
}
|
||||
defer destConn.Close()
|
||||
|
||||
hijacker, ok := w.(http.Hijacker)
|
||||
if !ok {
|
||||
http.Error(w, "Hijacking not supported", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
clientConn, _, err := hijacker.Hijack()
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusServiceUnavailable)
|
||||
return
|
||||
}
|
||||
defer clientConn.Close()
|
||||
|
||||
clientConn.Write([]byte("HTTP/1.1 200 Connection Established\r\n\r\n"))
|
||||
|
||||
go transfer(destConn, clientConn)
|
||||
transfer(clientConn, destConn)
|
||||
}
|
||||
|
||||
func copyHeader(dst, src http.Header) {
|
||||
for k, vv := range src {
|
||||
for _, v := range vv {
|
||||
dst.Add(k, v)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func httpProxy(w http.ResponseWriter, r *http.Request) {
|
||||
client := &http.Client{}
|
||||
|
||||
// http: Request.RequestURI can't be set in client requests.
|
||||
// http://golang.org/src/pkg/net/http/client.go
|
||||
r.RequestURI = ""
|
||||
|
||||
resp, err := client.Do(r)
|
||||
if err != nil {
|
||||
http.Error(w, "Server Error", http.StatusInternalServerError)
|
||||
log.Fatal("ServeHTTP:", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
log.Println(r.RemoteAddr, " ", resp.Status)
|
||||
|
||||
copyHeader(w.Header(), resp.Header)
|
||||
w.WriteHeader(resp.StatusCode)
|
||||
io.Copy(w, resp.Body)
|
||||
}
|
||||
|
||||
func transfer(destination io.WriteCloser, source io.ReadCloser) {
|
||||
defer destination.Close()
|
||||
defer source.Close()
|
||||
io.Copy(destination, source)
|
||||
}
|
|
@ -11,8 +11,8 @@ import (
|
|||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"github.com/BraveY/snapshotter-converter/converter"
|
||||
"github.com/containerd/log"
|
||||
"github.com/containerd/nydus-snapshotter/pkg/converter"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/dragonflyoss/nydus/smoke/tests/texture"
|
||||
|
|
|
@ -13,11 +13,11 @@ import (
|
|||
"time"
|
||||
|
||||
_ "github.com/mattn/go-sqlite3"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/dragonflyoss/nydus/smoke/tests/texture"
|
||||
"github.com/dragonflyoss/nydus/smoke/tests/tool"
|
||||
"github.com/dragonflyoss/nydus/smoke/tests/tool/test"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
type CasTestSuite struct{}
|
||||
|
@ -65,7 +65,7 @@ func (c *CasTestSuite) testCasTables(t *testing.T, enablePrefetch bool) {
|
|||
if expectedTable == "Blobs" {
|
||||
require.Equal(t, 1, count)
|
||||
} else {
|
||||
require.Equal(t, 8, count)
|
||||
require.Equal(t, 13, count)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -0,0 +1,423 @@
|
|||
package tests
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/fs"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/distribution/reference"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/containerd/containerd/content/local"
|
||||
"github.com/containerd/log"
|
||||
|
||||
"github.com/BraveY/snapshotter-converter/converter"
|
||||
checkerTool "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
|
||||
pkgConv "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/external/modctl"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
|
||||
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/viewer"
|
||||
"github.com/dragonflyoss/nydus/smoke/tests/tool"
|
||||
"github.com/opencontainers/go-digest"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
var modelctlWorkDir = os.Getenv("NYDUS_MODELCTL_WORK_DIR")
|
||||
var modelctlContextDir = os.Getenv("NYDUS_MODELCTL_CONTEXT_DIR")
|
||||
var modelRegistryAuth = os.Getenv("NYDUS_MODEL_REGISTRY_AUTH")
|
||||
var modelImageRef = os.Getenv("NYDUS_MODEL_IMAGE_REF")
|
||||
|
||||
type proxy struct {
|
||||
CacheDir string `json:"cache_dir"`
|
||||
URL string `json:"url"`
|
||||
Fallback bool `json:"fallback"`
|
||||
Timeout int `json:"timeout"`
|
||||
ConnectTimeout int `json:"connect_timeout"`
|
||||
}
|
||||
|
||||
func walk(t *testing.T, root string) map[string]*tool.File {
|
||||
tree := map[string]*tool.File{}
|
||||
|
||||
err := filepath.WalkDir(root, func(path string, _ fs.DirEntry, err error) error {
|
||||
require.Nil(t, err)
|
||||
|
||||
targetPath, err := filepath.Rel(root, path)
|
||||
require.NoError(t, err)
|
||||
if targetPath == "." {
|
||||
return nil
|
||||
}
|
||||
|
||||
stat, err := os.Lstat(path)
|
||||
require.NoError(t, err)
|
||||
if stat.Size() > (1024<<10)*128 {
|
||||
t.Logf("skip large file verification: %s", targetPath)
|
||||
return nil
|
||||
}
|
||||
|
||||
file := tool.NewFile(t, path, targetPath)
|
||||
tree[targetPath] = file
|
||||
|
||||
return nil
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
return tree
|
||||
}
|
||||
|
||||
func check(t *testing.T, source, target string) {
|
||||
sourceTree := walk(t, source)
|
||||
targetTree := walk(t, target)
|
||||
|
||||
for targetPath, targetFile := range targetTree {
|
||||
if sourceFile := sourceTree[targetPath]; sourceFile != nil {
|
||||
sourceFile.Compare(t, targetFile)
|
||||
} else {
|
||||
t.Fatalf("not found file %s in source", targetPath)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func verify(t *testing.T, ctx tool.Context, externalBackendConfigPath string) {
|
||||
config := tool.NydusdConfig{
|
||||
EnablePrefetch: ctx.Runtime.EnablePrefetch,
|
||||
NydusdPath: ctx.Binary.Nydusd,
|
||||
BootstrapPath: ctx.Env.BootstrapPath,
|
||||
ConfigPath: filepath.Join(ctx.Env.WorkDir, "nydusd-config.fusedev.json"),
|
||||
BackendType: "localfs",
|
||||
BackendConfig: fmt.Sprintf(`{"dir": "%s"}`, ctx.Env.BlobDir),
|
||||
ExternalBackendConfigPath: externalBackendConfigPath,
|
||||
ExternalBackendProxyCacheDir: ctx.Env.CacheDir,
|
||||
BlobCacheDir: ctx.Env.CacheDir,
|
||||
APISockPath: filepath.Join(ctx.Env.WorkDir, "nydusd-api.sock"),
|
||||
MountPath: ctx.Env.MountDir,
|
||||
CacheType: ctx.Runtime.CacheType,
|
||||
CacheCompressed: ctx.Runtime.CacheCompressed,
|
||||
RafsMode: ctx.Runtime.RafsMode,
|
||||
DigestValidate: false,
|
||||
AmplifyIO: ctx.Runtime.AmplifyIO,
|
||||
}
|
||||
|
||||
nydusd, err := tool.NewNydusd(config)
|
||||
require.NoError(t, err)
|
||||
err = nydusd.Mount()
|
||||
require.NoError(t, err)
|
||||
|
||||
if os.Getenv("NYDUS_ONLY_MOUNT") == "true" {
|
||||
fmt.Printf("nydusd mounted: %s\n", ctx.Env.MountDir)
|
||||
time.Sleep(time.Hour * 5)
|
||||
}
|
||||
|
||||
check(t, modelctlContextDir, ctx.Env.MountDir)
|
||||
|
||||
defer func() {
|
||||
if err := nydusd.Umount(); err != nil {
|
||||
log.L.WithError(err).Errorf("umount")
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
func packWithAttributes(t *testing.T, packOption converter.PackOption, blobDir, sourceDir string) (digest.Digest, digest.Digest) {
|
||||
blob, err := os.CreateTemp(blobDir, "blob-")
|
||||
require.NoError(t, err)
|
||||
defer blob.Close()
|
||||
|
||||
externalBlob, err := os.CreateTemp(blobDir, "external-blob-")
|
||||
require.NoError(t, err)
|
||||
defer externalBlob.Close()
|
||||
|
||||
blobDigester := digest.Canonical.Digester()
|
||||
blobWriter := io.MultiWriter(blob, blobDigester.Hash())
|
||||
externalBlobDigester := digest.Canonical.Digester()
|
||||
packOption.FromDir = sourceDir
|
||||
packOption.ExternalBlobWriter = io.MultiWriter(externalBlob, externalBlobDigester.Hash())
|
||||
_, err = converter.Pack(context.Background(), blobWriter, packOption)
|
||||
require.NoError(t, err)
|
||||
|
||||
blobDigest := blobDigester.Digest()
|
||||
err = os.Rename(blob.Name(), filepath.Join(blobDir, blobDigest.Hex()))
|
||||
require.NoError(t, err)
|
||||
|
||||
externalBlobDigest := externalBlobDigester.Digest()
|
||||
err = os.Rename(externalBlob.Name(), filepath.Join(blobDir, externalBlobDigest.Hex()))
|
||||
require.NoError(t, err)
|
||||
|
||||
return blobDigest, externalBlobDigest
|
||||
}
|
||||
|
||||
func parseReference(ref string) (string, string, string, error) {
|
||||
refs, err := reference.Parse(ref)
|
||||
if err != nil {
|
||||
return "", "", "", errors.Wrapf(err, "invalid image reference: %s", ref)
|
||||
}
|
||||
|
||||
if named, ok := refs.(reference.Named); ok {
|
||||
domain := reference.Domain(named)
|
||||
name := reference.Path(named)
|
||||
tag := ""
|
||||
if tagged, ok := named.(reference.Tagged); ok {
|
||||
tag = tagged.Tag()
|
||||
}
|
||||
return domain, name, tag, nil
|
||||
}
|
||||
|
||||
return "", "", "", fmt.Errorf("invalid image reference: %s", ref)
|
||||
}
|
||||
|
||||
func TestModctlExternal(t *testing.T) {
|
||||
if modelImageRef == "" {
|
||||
t.Skip("skipping external test because no model image is specified")
|
||||
}
|
||||
// Prepare work directory
|
||||
ctx := tool.DefaultContext(t)
|
||||
ctx.PrepareWorkDir(t)
|
||||
ctx.Build.Compressor = "lz4_block"
|
||||
ctx.Build.FSVersion = "5"
|
||||
defer ctx.Destroy(t)
|
||||
|
||||
host, name, tag, err := parseReference(modelImageRef)
|
||||
require.NoError(t, err)
|
||||
repo := strings.SplitN(name, "/", 2)
|
||||
require.Len(t, repo, 2)
|
||||
|
||||
bootstrapPath := os.Getenv("NYDUS_BOOTSTRAP")
|
||||
backendConfigPath := os.Getenv("NYDUS_EXTERNAL_BACKEND_CONFIG")
|
||||
|
||||
if bootstrapPath == "" {
|
||||
// Generate nydus attributes
|
||||
attributesPath := filepath.Join(ctx.Env.WorkDir, ".nydusattributes")
|
||||
backendMetaPath := filepath.Join(ctx.Env.WorkDir, "backend.meta")
|
||||
backendConfigPath = filepath.Join(ctx.Env.WorkDir, "build.backend.json")
|
||||
|
||||
opt := modctl.Option{
|
||||
Root: modelctlWorkDir,
|
||||
RegistryHost: host,
|
||||
Namespace: repo[0],
|
||||
ImageName: repo[1],
|
||||
Tag: tag,
|
||||
}
|
||||
handler, err := modctl.NewHandler(opt)
|
||||
require.NoError(t, err)
|
||||
err = external.Handle(context.Background(), external.Options{
|
||||
Dir: modelctlWorkDir,
|
||||
Handler: handler,
|
||||
MetaOutput: backendMetaPath,
|
||||
BackendOutput: backendConfigPath,
|
||||
AttributesOutput: attributesPath,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
// Build external bootstrap
|
||||
packOption := converter.PackOption{
|
||||
BuilderPath: ctx.Binary.Builder,
|
||||
Compressor: ctx.Build.Compressor,
|
||||
FsVersion: ctx.Build.FSVersion,
|
||||
ChunkSize: ctx.Build.ChunkSize,
|
||||
FromDir: modelctlContextDir,
|
||||
AttributesPath: attributesPath,
|
||||
}
|
||||
_, externalBlobDigest := packWithAttributes(t, packOption, ctx.Env.BlobDir, modelctlContextDir)
|
||||
|
||||
externalBlobRa, err := local.OpenReader(filepath.Join(ctx.Env.BlobDir, externalBlobDigest.Hex()))
|
||||
require.NoError(t, err)
|
||||
|
||||
bootstrapPath = filepath.Join(ctx.Env.WorkDir, "bootstrap")
|
||||
bootstrap, err := os.Create(filepath.Join(ctx.Env.WorkDir, "bootstrap"))
|
||||
require.NoError(t, err)
|
||||
defer bootstrap.Close()
|
||||
|
||||
_, err = converter.UnpackEntry(externalBlobRa, converter.EntryBootstrap, bootstrap)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Check external bootstrap
|
||||
err = tool.CheckBootstrap(tool.CheckOption{
|
||||
BuilderPath: ctx.Binary.Builder,
|
||||
}, bootstrapPath)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
// Prepare external backend config
|
||||
err = buildRuntimeExternalBackendConfig(ctx, host, name, backendConfigPath)
|
||||
assert.NoError(t, err)
|
||||
// Verify nydus filesystem with model context directory
|
||||
ctx.Env.BootstrapPath = bootstrapPath
|
||||
verify(t, *ctx, backendConfigPath)
|
||||
}
|
||||
|
||||
func TestModctlExternalBinary(t *testing.T) {
|
||||
if modelImageRef == "" {
|
||||
t.Skip("skipping external test because no model image is specified")
|
||||
}
|
||||
nydusifyPath := os.Getenv("NYDUS_NYDUSIFY")
|
||||
if nydusifyPath == "" {
|
||||
t.Skip("skipping external test because nydusify binary is not specified")
|
||||
}
|
||||
|
||||
// Prepare work directory
|
||||
ctx := tool.DefaultContext(t)
|
||||
ctx.PrepareWorkDir(t)
|
||||
ctx.Build.Compressor = "lz4_block"
|
||||
ctx.Build.FSVersion = "5"
|
||||
defer ctx.Destroy(t)
|
||||
source := modelImageRef
|
||||
target := modelImageRef + "_smoke_test_nydus_v2" + strconv.Itoa(int(time.Now().Unix()))
|
||||
|
||||
t.Run("Convert with modelfile type", func(t *testing.T) {
|
||||
sourceBackendType := "modelfile"
|
||||
srcBkdCfg := pkgConv.SourceBackendConfig{
|
||||
Context: modelctlContextDir,
|
||||
WorkDir: modelctlWorkDir,
|
||||
}
|
||||
srcBkdCfgBytes, err := json.Marshal(srcBkdCfg)
|
||||
require.NoError(t, err)
|
||||
args := []string{
|
||||
"convert",
|
||||
"--source-backend-type",
|
||||
sourceBackendType,
|
||||
"--source-backend-config",
|
||||
string(srcBkdCfgBytes),
|
||||
"--source",
|
||||
source,
|
||||
"--target",
|
||||
target,
|
||||
"--nydus-image",
|
||||
ctx.Binary.Builder,
|
||||
"--fs-version",
|
||||
ctx.Build.FSVersion,
|
||||
"--compressor",
|
||||
ctx.Build.Compressor,
|
||||
}
|
||||
convertAndCheck(t, ctx, target, args)
|
||||
})
|
||||
|
||||
t.Run("Convert with model-artifact type", func(t *testing.T) {
|
||||
sourceBackendType := "model-artifact"
|
||||
args := []string{
|
||||
"convert",
|
||||
"--log-level",
|
||||
"warn",
|
||||
"--source-backend-type",
|
||||
sourceBackendType,
|
||||
"--source",
|
||||
source,
|
||||
"--target",
|
||||
target,
|
||||
"--nydus-image",
|
||||
ctx.Binary.Builder,
|
||||
"--fs-version",
|
||||
ctx.Build.FSVersion,
|
||||
"--compressor",
|
||||
ctx.Build.Compressor,
|
||||
}
|
||||
convertAndCheck(t, ctx, target, args)
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
// nydusify convert image to nydus
|
||||
// nydus-image check bootstrap
|
||||
// nydusd mount and compare file meta.
|
||||
func convertAndCheck(t *testing.T, ctx *tool.Context, target string, args []string) {
|
||||
host, name, _, err := parseReference(modelImageRef)
|
||||
require.NoError(t, err)
|
||||
repo := strings.SplitN(name, "/", 2)
|
||||
require.Len(t, repo, 2)
|
||||
|
||||
logger := logrus.NewEntry(logrus.New())
|
||||
logger.Infof("Command: %s %s", ctx.Binary.Nydusify, strings.Join(args, " "))
|
||||
nydusifyCmd := exec.CommandContext(context.Background(), ctx.Binary.Nydusify, args...)
|
||||
nydusifyCmd.Stdout = logger.WithField("module", "nydusify").Writer()
|
||||
nydusifyCmd.Stderr = logger.WithField("module", "nydusify").Writer()
|
||||
|
||||
err = nydusifyCmd.Run()
|
||||
assert.NoError(t, err)
|
||||
|
||||
// check bootstrap
|
||||
targetRemote, err := provider.DefaultRemote(target, false)
|
||||
assert.NoError(t, err)
|
||||
arch := runtime.GOARCH
|
||||
targetParser, err := parser.New(targetRemote, arch)
|
||||
assert.NoError(t, err)
|
||||
targetParsed, err := targetParser.Parse(context.Background())
|
||||
assert.NoError(t, err)
|
||||
|
||||
bootstrapPath := filepath.Join(ctx.Env.WorkDir, "nydus_bootstrap")
|
||||
backendConfigPath := filepath.Join(ctx.Env.WorkDir, "nydus_backend.json")
|
||||
fsViewer := buildFsViewer(ctx, targetParser, bootstrapPath, backendConfigPath)
|
||||
err = fsViewer.PullBootstrap(context.Background(), targetParsed)
|
||||
assert.NoError(t, err)
|
||||
|
||||
err = tool.CheckBootstrap(tool.CheckOption{
|
||||
BuilderPath: ctx.Binary.Builder,
|
||||
}, bootstrapPath)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// mount and compare file metadata
|
||||
err = buildRuntimeExternalBackendConfig(ctx, host, name, backendConfigPath)
|
||||
assert.NoError(t, err)
|
||||
ctx.Env.BootstrapPath = bootstrapPath
|
||||
verify(t, *ctx, backendConfigPath)
|
||||
}
|
||||
|
||||
func buildFsViewer(ctx *tool.Context, targetParser *parser.Parser, bootstrapPath, backendConfigPath string) *viewer.FsViewer {
|
||||
return &viewer.FsViewer{
|
||||
Opt: viewer.Opt{
|
||||
WorkDir: ctx.Env.WorkDir,
|
||||
},
|
||||
NydusdConfig: checkerTool.NydusdConfig{
|
||||
BootstrapPath: bootstrapPath,
|
||||
ExternalBackendConfigPath: backendConfigPath,
|
||||
},
|
||||
Parser: targetParser,
|
||||
}
|
||||
}
|
||||
|
||||
func buildRuntimeExternalBackendConfig(ctx *tool.Context, host, name, backendConfigPath string) error {
|
||||
backendBytes, err := os.ReadFile(backendConfigPath)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to read backend config file")
|
||||
}
|
||||
backend := backend.Backend{}
|
||||
if err = json.Unmarshal(backendBytes, &backend); err != nil {
|
||||
return errors.Wrap(err, "failed to unmarshal backend config file")
|
||||
}
|
||||
|
||||
proxyURL := os.Getenv("NYDUS_EXTERNAL_PROXY_URL")
|
||||
cacheDir := os.Getenv("NYDUS_EXTERNAL_PROXY_CACHE_DIR")
|
||||
if cacheDir == "" {
|
||||
cacheDir = ctx.Env.CacheDir
|
||||
}
|
||||
backend.Backends[0].Config = map[string]interface{}{
|
||||
"scheme": "https",
|
||||
"host": host,
|
||||
"repo": name,
|
||||
"auth": modelRegistryAuth,
|
||||
"timeout": 30,
|
||||
"connect_timeout": 5,
|
||||
"proxy": proxy{
|
||||
CacheDir: cacheDir,
|
||||
URL: proxyURL,
|
||||
Fallback: true,
|
||||
},
|
||||
}
|
||||
|
||||
backendBytes, err = json.MarshalIndent(backend, "", " ")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to marshal backend config file")
|
||||
}
|
||||
return os.WriteFile(backendConfigPath, backendBytes, 0644)
|
||||
}
|
|
@ -10,7 +10,7 @@ import (
|
|||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/containerd/nydus-snapshotter/pkg/converter"
|
||||
"github.com/BraveY/snapshotter-converter/converter"
|
||||
"github.com/containerd/nydus-snapshotter/pkg/supervisor"
|
||||
"github.com/dragonflyoss/nydus/smoke/tests/texture"
|
||||
"github.com/dragonflyoss/nydus/smoke/tests/tool"
|
||||
|
|
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue