Vendor in latest containers/(storage, image) and runtime-tools

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Daniel J Walsh 2022-10-14 09:58:29 -04:00
parent e46d9ae23b
commit fc1a4a31ee
70 changed files with 1685 additions and 3127 deletions


@ -7,9 +7,9 @@ require (
github.com/containerd/containerd v1.6.8
github.com/containernetworking/cni v1.1.2
github.com/containernetworking/plugins v1.1.1
github.com/containers/image/v5 v5.23.0
github.com/containers/image/v5 v5.23.1-0.20221013202101-87afcefe9766
github.com/containers/ocicrypt v1.1.6
github.com/containers/storage v1.43.0
github.com/containers/storage v1.43.1-0.20221014072257-a144fee6f51c
github.com/coreos/go-systemd/v22 v22.4.0
github.com/cyphar/filepath-securejoin v0.2.3
github.com/davecgh/go-spew v1.1.1
@ -28,8 +28,8 @@ require (
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc2
github.com/opencontainers/runc v1.1.4
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417
github.com/opencontainers/runtime-tools v0.9.1-0.20220714195903-17b3287fafb7
github.com/opencontainers/runtime-spec v1.0.3-0.20220825212826-86290f6a00fb
github.com/opencontainers/runtime-tools v0.9.1-0.20221014010322-58c91d646d86
github.com/opencontainers/selinux v1.10.2
github.com/pkg/sftp v1.13.5
github.com/pmezard/go-difflib v1.0.0
@ -52,7 +52,7 @@ require (
github.com/Microsoft/hcsshim v0.9.4 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/chzyer/readline v1.5.1 // indirect
github.com/containerd/cgroups v1.0.4 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.12.0 // indirect
@ -71,7 +71,7 @@ require (
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/klauspost/compress v1.15.11 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/klauspost/pgzip v1.2.6-0.20220930104621-17e8dac29df8 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/letsencrypt/boulder v0.0.0-20220723181115-27de4befb95e // indirect
github.com/manifoldco/promptui v0.9.0 // indirect

File diff suppressed because it is too large


@ -1,21 +0,0 @@
language: go
matrix:
include:
- go: 1.4.3
- go: 1.5.4
- go: 1.6.3
- go: 1.7
- go: tip
allow_failures:
- go: tip
install:
- go get golang.org/x/tools/cmd/cover
- go get github.com/mattn/goveralls
script:
- echo "Test and track coverage" ; $HOME/gopath/bin/goveralls -package "." -service=travis-ci -repotoken $COVERALLS_TOKEN
- echo "Build examples" ; cd examples && go build
- echo "Check if gofmt'd" ; diff -u <(echo -n) <(gofmt -d -s .)
env:
global:
secure: HroGEAUQpVq9zX1b1VIkraLiywhGbzvNnTZq2TMxgK7JHP8xqNplAeF1izrR2i4QLL9nsY+9WtYss4QuPvEtZcVHUobw6XnL6radF7jS1LgfYZ9Y7oF+zogZ2I5QUMRLGA7rcxQ05s7mKq3XZQfeqaNts4bms/eZRefWuaFZbkw=


@ -1,194 +0,0 @@
semver for golang [![Build Status](https://travis-ci.org/blang/semver.svg?branch=master)](https://travis-ci.org/blang/semver) [![GoDoc](https://godoc.org/github.com/blang/semver?status.png)](https://godoc.org/github.com/blang/semver) [![Coverage Status](https://img.shields.io/coveralls/blang/semver.svg)](https://coveralls.io/r/blang/semver?branch=master)
======
semver is a [Semantic Versioning](http://semver.org/) library written in golang. It fully covers spec version `2.0.0`.
Usage
-----
```bash
$ go get github.com/blang/semver
```
Note: Always vendor your dependencies or fix on a specific version tag.
```go
import "github.com/blang/semver"
v1, err := semver.Make("1.0.0-beta")
v2, err := semver.Make("2.0.0-beta")
v1.Compare(v2)
```
Also check the [GoDocs](http://godoc.org/github.com/blang/semver).
Why should I use this lib?
-----
- Fully spec compatible
- No reflection
- No regex
- Fully tested (Coverage >99%)
- Readable parsing/validation errors
- Fast (See [Benchmarks](#benchmarks))
- Only Stdlib
- Uses values instead of pointers
- Many features, see below
Features
-----
- Parsing and validation at all levels
- Comparator-like comparisons
- Compare Helper Methods
- InPlace manipulation
- Ranges `>=1.0.0 <2.0.0 || >=3.0.0 !3.0.1-beta.1`
- Wildcards `>=1.x`, `<=2.5.x`
- Sortable (implements sort.Interface)
- database/sql compatible (sql.Scanner/Valuer)
- encoding/json compatible (json.Marshaler/Unmarshaler)
Ranges
------
A `Range` is a set of conditions which specify which versions satisfy the range.
A condition is composed of an operator and a version. The supported operators are:
- `<1.0.0` Less than `1.0.0`
- `<=1.0.0` Less than or equal to `1.0.0`
- `>1.0.0` Greater than `1.0.0`
- `>=1.0.0` Greater than or equal to `1.0.0`
- `1.0.0`, `=1.0.0`, `==1.0.0` Equal to `1.0.0`
- `!1.0.0`, `!=1.0.0` Not equal to `1.0.0`. Excludes version `1.0.0`.
Note that spaces between the operator and the version will be gracefully tolerated.
A `Range` can link multiple `Ranges` separated by space:
Ranges can be linked by logical AND:
- `>1.0.0 <2.0.0` would match between both ranges, so `1.1.1` and `1.8.7` but not `1.0.0` or `2.0.0`
- `>1.0.0 <3.0.0 !2.0.3-beta.2` would match every version between `1.0.0` and `3.0.0` except `2.0.3-beta.2`
Ranges can also be linked by logical OR:
- `<2.0.0 || >=3.0.0` would match `1.x.x` and `3.x.x` but not `2.x.x`
AND has a higher precedence than OR. It's not possible to use brackets.
Ranges can be combined by both AND and OR
- `>1.0.0 <2.0.0 || >3.0.0 !4.2.1` would match `1.2.3`, `1.9.9`, `3.1.1`, but not `4.2.1`, `2.1.1`
Range usage:
```go
v, err := semver.Parse("1.2.3")
expectedRange, err := semver.ParseRange(">1.0.0 <2.0.0 || >=3.0.0")
if expectedRange(v) {
	// valid
}
```
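The precedence rule above (AND binds tighter than OR, and brackets are not available) can be illustrated with plain comparisons. This stdlib-only sketch hardcodes the example range `>1.0.0 <2.0.0 || >3.0.0 !4.2.1`; it is an illustration of the semantics, not the library's implementation:

```go
package main

import "fmt"

type version [3]int

// less reports whether a precedes b, comparing major, minor, patch in order.
func less(a, b version) bool {
	for i := 0; i < 3; i++ {
		if a[i] != b[i] {
			return a[i] < b[i]
		}
	}
	return false
}

// matches hardcodes ">1.0.0 <2.0.0 || >3.0.0 !4.2.1":
// each space-separated group is ANDed, then the groups are ORed.
func matches(v version) bool {
	and1 := less(version{1, 0, 0}, v) && less(v, version{2, 0, 0})
	and2 := less(version{3, 0, 0}, v) && v != version{4, 2, 1}
	return and1 || and2
}

func main() {
	fmt.Println(matches(version{1, 2, 3})) // true
	fmt.Println(matches(version{3, 1, 1})) // true
	fmt.Println(matches(version{4, 2, 1})) // false
	fmt.Println(matches(version{2, 1, 1})) // false
}
```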
Example
-----
Have a look at full examples in [examples/main.go](examples/main.go)
```go
import "github.com/blang/semver"
v, err := semver.Make("0.0.1-alpha.preview+123.github")
fmt.Printf("Major: %d\n", v.Major)
fmt.Printf("Minor: %d\n", v.Minor)
fmt.Printf("Patch: %d\n", v.Patch)
fmt.Printf("Pre: %s\n", v.Pre)
fmt.Printf("Build: %s\n", v.Build)
// Prerelease versions array
if len(v.Pre) > 0 {
fmt.Println("Prerelease versions:")
for i, pre := range v.Pre {
fmt.Printf("%d: %q\n", i, pre)
}
}
// Build meta data array
if len(v.Build) > 0 {
fmt.Println("Build meta data:")
for i, build := range v.Build {
fmt.Printf("%d: %q\n", i, build)
}
}
v001, err := semver.Make("0.0.1")
// Compare using helpers: v.GT(v2), v.LT, v.GTE, v.LTE
v001.GT(v) == true
v.LT(v001) == true
v.GTE(v) == true
v.LTE(v) == true
// Or use v.Compare(v2) for comparisons (-1, 0, 1):
v001.Compare(v) == 1
v.Compare(v001) == -1
v.Compare(v) == 0
// Manipulate Version in place:
v.Pre[0], err = semver.NewPRVersion("beta")
if err != nil {
fmt.Printf("Error parsing pre release version: %q", err)
}
fmt.Println("\nValidate versions:")
v.Build[0] = "?"
err = v.Validate()
if err != nil {
fmt.Printf("Validation failed: %s\n", err)
}
```
Benchmarks
-----
BenchmarkParseSimple-4 5000000 390 ns/op 48 B/op 1 allocs/op
BenchmarkParseComplex-4 1000000 1813 ns/op 256 B/op 7 allocs/op
BenchmarkParseAverage-4 1000000 1171 ns/op 163 B/op 4 allocs/op
BenchmarkStringSimple-4 20000000 119 ns/op 16 B/op 1 allocs/op
BenchmarkStringLarger-4 10000000 206 ns/op 32 B/op 2 allocs/op
BenchmarkStringComplex-4 5000000 324 ns/op 80 B/op 3 allocs/op
BenchmarkStringAverage-4 5000000 273 ns/op 53 B/op 2 allocs/op
BenchmarkValidateSimple-4 200000000 9.33 ns/op 0 B/op 0 allocs/op
BenchmarkValidateComplex-4 3000000 469 ns/op 0 B/op 0 allocs/op
BenchmarkValidateAverage-4 5000000 256 ns/op 0 B/op 0 allocs/op
BenchmarkCompareSimple-4 100000000 11.8 ns/op 0 B/op 0 allocs/op
BenchmarkCompareComplex-4 50000000 30.8 ns/op 0 B/op 0 allocs/op
BenchmarkCompareAverage-4 30000000 41.5 ns/op 0 B/op 0 allocs/op
BenchmarkSort-4 3000000 419 ns/op 256 B/op 2 allocs/op
BenchmarkRangeParseSimple-4 2000000 850 ns/op 192 B/op 5 allocs/op
BenchmarkRangeParseAverage-4 1000000 1677 ns/op 400 B/op 10 allocs/op
BenchmarkRangeParseComplex-4 300000 5214 ns/op 1440 B/op 30 allocs/op
BenchmarkRangeMatchSimple-4 50000000 25.6 ns/op 0 B/op 0 allocs/op
BenchmarkRangeMatchAverage-4 30000000 56.4 ns/op 0 B/op 0 allocs/op
BenchmarkRangeMatchComplex-4 10000000 153 ns/op 0 B/op 0 allocs/op
See benchmark cases at [semver_test.go](semver_test.go)
Motivation
-----
I simply couldn't find any lib supporting the full spec. Others were just wrong or used reflection and regex, which I don't like.
Contribution
-----
Feel free to make a pull request. For bigger changes, create an issue first to discuss it.
License
-----
See [LICENSE](LICENSE) file.


@ -1,17 +0,0 @@
{
"author": "blang",
"bugs": {
"URL": "https://github.com/blang/semver/issues",
"url": "https://github.com/blang/semver/issues"
},
"gx": {
"dvcsimport": "github.com/blang/semver"
},
"gxVersion": "0.10.0",
"language": "go",
"license": "MIT",
"name": "semver",
"releaseCmd": "git commit -a -m \"gx publish $VERSION\"",
"version": "3.5.1"
}


@ -327,7 +327,7 @@ func expandWildcardVersion(parts [][]string) ([][]string, error) {
for _, p := range parts {
var newParts []string
for _, ap := range p {
if strings.Index(ap, "x") != -1 {
if strings.Contains(ap, "x") {
opStr, vStr, err := splitComparatorVersion(ap)
if err != nil {
return nil, err


@ -26,7 +26,7 @@ type Version struct {
Minor uint64
Patch uint64
Pre []PRVersion
Build []string //No Precendence
Build []string //No Precedence
}
// Version to string
@ -61,6 +61,18 @@ func (v Version) String() string {
return string(b)
}
// FinalizeVersion discards prerelease and build number and only returns
// major, minor and patch number.
func (v Version) FinalizeVersion() string {
b := make([]byte, 0, 5)
b = strconv.AppendUint(b, v.Major, 10)
b = append(b, '.')
b = strconv.AppendUint(b, v.Minor, 10)
b = append(b, '.')
b = strconv.AppendUint(b, v.Patch, 10)
return string(b)
}
// Equals checks if v is equal to o.
func (v Version) Equals(o Version) bool {
return (v.Compare(o) == 0)
@ -161,6 +173,27 @@ func (v Version) Compare(o Version) int {
}
// IncrementPatch increments the patch version
func (v *Version) IncrementPatch() error {
v.Patch++
return nil
}
// IncrementMinor increments the minor version
func (v *Version) IncrementMinor() error {
v.Minor++
v.Patch = 0
return nil
}
// IncrementMajor increments the major version
func (v *Version) IncrementMajor() error {
v.Major++
v.Minor = 0
v.Patch = 0
return nil
}
// Validate validates v and returns error in case
func (v Version) Validate() error {
// Major, Minor, Patch already validated using uint64
@ -189,10 +222,10 @@ func (v Version) Validate() error {
}
// New is an alias for Parse and returns a pointer, parses version string and returns a validated Version or error
func New(s string) (vp *Version, err error) {
func New(s string) (*Version, error) {
v, err := Parse(s)
vp = &v
return
vp := &v
return vp, err
}
// Make is an alias for Parse, parses version string and returns a validated Version or error
@ -202,14 +235,25 @@ func Make(s string) (Version, error) {
// ParseTolerant allows for certain version specifications that do not strictly adhere to semver
// specs to be parsed by this library. It does so by normalizing versions before passing them to
// Parse(). It currently trims spaces, removes a "v" prefix, and adds a 0 patch number to versions
// with only major and minor components specified
// Parse(). It currently trims spaces, removes a "v" prefix, adds a 0 patch number to versions
// with only major and minor components specified, and removes leading 0s.
func ParseTolerant(s string) (Version, error) {
s = strings.TrimSpace(s)
s = strings.TrimPrefix(s, "v")
// Split into major.minor.(patch+pr+meta)
parts := strings.SplitN(s, ".", 3)
// Remove leading zeros.
for i, p := range parts {
if len(p) > 1 {
p = strings.TrimLeft(p, "0")
if len(p) == 0 || !strings.ContainsAny(p[0:1], "0123456789") {
p = "0" + p
}
parts[i] = p
}
}
// Fill up shortened versions.
if len(parts) < 3 {
if strings.ContainsAny(parts[len(parts)-1], "+-") {
return Version{}, errors.New("Short version cannot contain PreRelease/Build meta data")
@ -217,8 +261,8 @@ func ParseTolerant(s string) (Version, error) {
for len(parts) < 3 {
parts = append(parts, "0")
}
s = strings.Join(parts, ".")
}
s = strings.Join(parts, ".")
return Parse(s)
}
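The normalization steps ParseTolerant gains in the hunk above (trim spaces, drop a `v` prefix, strip leading zeros, pad to three components) can be sketched standalone. This simplified version omits the PreRelease/Build error check and the final call into Parse:

```go
package main

import (
	"fmt"
	"strings"
)

// normalize mirrors the tolerant pre-processing shown in the diff:
// trim spaces, drop a "v" prefix, strip leading zeros, pad to x.y.z.
func normalize(s string) string {
	s = strings.TrimSpace(s)
	s = strings.TrimPrefix(s, "v")
	parts := strings.SplitN(s, ".", 3)
	for i, p := range parts {
		if len(p) > 1 {
			p = strings.TrimLeft(p, "0")
			// Re-add a single zero if the part was all zeros
			// or now starts with a non-digit (e.g. a prerelease tag).
			if len(p) == 0 || p[0] < '0' || p[0] > '9' {
				p = "0" + p
			}
			parts[i] = p
		}
	}
	for len(parts) < 3 {
		parts = append(parts, "0")
	}
	return strings.Join(parts, ".")
}

func main() {
	fmt.Println(normalize("v01.002.0003")) // 1.2.3
	fmt.Println(normalize("1.2"))          // 1.2.0
}
```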
@ -416,3 +460,17 @@ func NewBuildVersion(s string) (string, error) {
}
return s, nil
}
// FinalizeVersion returns the major, minor and patch number only and discards
// prerelease and build number.
func FinalizeVersion(s string) (string, error) {
v, err := Parse(s)
if err != nil {
return "", err
}
v.Pre = nil
v.Build = nil
finalVer := v.String()
return finalVer, nil
}


@ -14,7 +14,7 @@ func (v *Version) Scan(src interface{}) (err error) {
case []byte:
str = string(src)
default:
return fmt.Errorf("Version.Scan: cannot convert %T to string.", src)
return fmt.Errorf("version.Scan: cannot convert %T to string", src)
}
if t, err := Parse(str); err == nil {


@ -105,7 +105,7 @@ func newImageDestination(sys *types.SystemContext, ref dirReference) (private.Im
AcceptsForeignLayerURLs: false,
MustMatchRuntimeOS: false,
IgnoresEmbeddedDockerReference: false, // N/A, DockerReference() returns nil.
HasThreadSafePutBlob: false,
HasThreadSafePutBlob: true,
}),
NoPutBlobPartialInitialize: stubs.NoPutBlobPartial(ref),


@ -313,8 +313,14 @@ func CheckAuth(ctx context.Context, sys *types.SystemContext, username, password
return err
}
defer resp.Body.Close()
return httpResponseToError(resp, "")
if resp.StatusCode != http.StatusOK {
err := registryHTTPResponseToError(resp)
if resp.StatusCode == http.StatusUnauthorized {
err = ErrUnauthorizedForCredentials{Err: err}
}
return err
}
return nil
}
// SearchResult holds the information of each matching image
@ -411,7 +417,7 @@ func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, ima
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
err := httpResponseToError(resp, "")
err := registryHTTPResponseToError(resp)
logrus.Errorf("error getting search results from v2 endpoint %q: %v", registry, err)
return nil, fmt.Errorf("couldn't search registry %q: %w", registry, err)
}
@ -816,7 +822,7 @@ func (c *dockerClient) detectPropertiesHelper(ctx context.Context) error {
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", url.Redacted(), resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return httpResponseToError(resp, "")
return registryHTTPResponseToError(resp)
}
c.challenges = parseAuthHeader(resp.Header)
c.scheme = scheme
@ -956,9 +962,10 @@ func (c *dockerClient) getBlob(ctx context.Context, ref dockerReference, info ty
if err != nil {
return nil, 0, err
}
if err := httpResponseToError(res, "Error fetching blob"); err != nil {
if res.StatusCode != http.StatusOK {
err := registryHTTPResponseToError(res)
res.Body.Close()
return nil, 0, err
return nil, 0, fmt.Errorf("fetching blob: %w", err)
}
cache.RecordKnownLocation(ref.Transport(), bicTransportScope(ref), info.Digest, newBICLocationReference(ref))
return res.Body, getBlobSize(res), nil
@ -982,13 +989,8 @@ func (c *dockerClient) getOCIDescriptorContents(ctx context.Context, ref dockerR
// isManifestUnknownError returns true iff err from fetchManifest is a “manifest unknown” error.
func isManifestUnknownError(err error) bool {
var errs errcode.Errors
if !errors.As(err, &errs) || len(errs) == 0 {
return false
}
err = errs[0]
ec, ok := err.(errcode.ErrorCoder)
if !ok {
var ec errcode.ErrorCoder
if !errors.As(err, &ec) {
return false
}
return ec.ErrorCode() == v2.ErrorCodeManifestUnknown
@ -1037,9 +1039,8 @@ func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerRe
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, fmt.Errorf("downloading signatures for %s in %s: %w", manifestDigest, ref.ref.Name(), handleErrorResponse(res))
return nil, fmt.Errorf("downloading signatures for %s in %s: %w", manifestDigest, ref.ref.Name(), registryHTTPResponseToError(res))
}
body, err := iolimits.ReadAtMost(res.Body, iolimits.MaxSignatureListBodySize)


@ -77,8 +77,8 @@ func GetRepositoryTags(ctx context.Context, sys *types.SystemContext, ref types.
return nil, err
}
defer res.Body.Close()
if err := httpResponseToError(res, "fetching tags list"); err != nil {
return nil, err
if res.StatusCode != http.StatusOK {
return nil, fmt.Errorf("fetching tags list: %w", registryHTTPResponseToError(res))
}
var tagsHolder struct {


@ -244,7 +244,7 @@ func (d *dockerImageDestination) blobExists(ctx context.Context, repo reference.
logrus.Debugf("... not present")
return false, -1, nil
default:
return false, -1, fmt.Errorf("failed to read from destination repository %s: %d (%s)", reference.Path(d.ref.ref), res.StatusCode, http.StatusText(res.StatusCode))
return false, -1, fmt.Errorf("checking whether a blob %s exists in %s: %w", digest, repo.Name(), registryHTTPResponseToError(res))
}
}
@ -487,15 +487,10 @@ func successStatus(status int) bool {
return status >= 200 && status <= 399
}
// isManifestInvalidError returns true iff err from client.HandleErrorResponse is a “manifest invalid” error.
// isManifestInvalidError returns true iff err from registryHTTPResponseToError is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
errors, ok := err.(errcode.Errors)
if !ok || len(errors) == 0 {
return false
}
err = errors[0]
ec, ok := err.(errcode.ErrorCoder)
if !ok {
var ec errcode.ErrorCoder
if ok := errors.As(err, &ec); !ok {
return false
}


@ -28,6 +28,10 @@ import (
"github.com/sirupsen/logrus"
)
// maxLookasideSignatures is an arbitrary limit for the total number of signatures we would try to read from a lookaside server,
// even if it were broken or malicious and it continued serving an enormous number of items.
const maxLookasideSignatures = 128
type dockerImageSource struct {
impl.Compat
impl.PropertyMethodsInitialize
@ -372,12 +376,9 @@ func (s *dockerImageSource) GetBlobAt(ctx context.Context, info types.BlobInfo,
res.Body.Close()
return nil, nil, private.BadPartialRequestError{Status: res.Status}
default:
err := httpResponseToError(res, "Error fetching partial blob")
if err == nil {
err = fmt.Errorf("invalid status code returned when fetching blob %d (%s)", res.StatusCode, http.StatusText(res.StatusCode))
}
err := registryHTTPResponseToError(res)
res.Body.Close()
return nil, nil, err
return nil, nil, fmt.Errorf("fetching partial blob: %w", err)
}
}
@ -451,6 +452,10 @@ func (s *dockerImageSource) getSignaturesFromLookaside(ctx context.Context, inst
// NOTE: Keep this in sync with docs/signature-protocols.md!
signatures := []signature.Signature{}
for i := 0; ; i++ {
if i >= maxLookasideSignatures {
return nil, fmt.Errorf("server provided %d signatures, assuming that's unreasonable and a server error", maxLookasideSignatures)
}
url := lookasideStorageURL(s.c.signatureBase, manifestDigest, i)
signature, missing, err := s.getOneSignature(ctx, url)
if err != nil {
@ -496,10 +501,19 @@ func (s *dockerImageSource) getOneSignature(ctx context.Context, url *url.URL) (
}
defer res.Body.Close()
if res.StatusCode == http.StatusNotFound {
logrus.Debugf("... got status 404, as expected = end of signatures")
return nil, true, nil
} else if res.StatusCode != http.StatusOK {
return nil, false, fmt.Errorf("reading signature from %s: status %d (%s)", url.Redacted(), res.StatusCode, http.StatusText(res.StatusCode))
}
contentType := res.Header.Get("Content-Type")
if mimeType := simplifyContentType(contentType); mimeType == "text/html" {
logrus.Warnf("Signature %q has Content-Type %q, unexpected for a signature", url.Redacted(), contentType)
// Don't immediately fail; the lookaside spec does not place any requirements on Content-Type.
// If the content really is HTML, it's going to fail in signature.FromBlob.
}
sigBlob, err := iolimits.ReadAtMost(res.Body, iolimits.MaxSignatureBodySize)
if err != nil {
return nil, false, err
@ -605,16 +619,16 @@ func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerRefere
return err
}
defer get.Body.Close()
manifestBody, err := iolimits.ReadAtMost(get.Body, iolimits.MaxManifestBodySize)
if err != nil {
return err
}
switch get.StatusCode {
case http.StatusOK:
case http.StatusNotFound:
return fmt.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry", ref.ref)
default:
return fmt.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
return fmt.Errorf("deleting %v: %w", ref.ref, registryHTTPResponseToError(get))
}
manifestBody, err := iolimits.ReadAtMost(get.Body, iolimits.MaxManifestBodySize)
if err != nil {
return err
}
manifestDigest, err := manifest.Digest(manifestBody)
@ -630,13 +644,8 @@ func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerRefere
return err
}
defer delete.Body.Close()
body, err := iolimits.ReadAtMost(delete.Body, iolimits.MaxErrorBodySize)
if err != nil {
return err
}
if delete.StatusCode != http.StatusAccepted {
return fmt.Errorf("Failed to delete %v: %s (%v)", deletePath, string(body), delete.Status)
return fmt.Errorf("deleting %v: %w", ref.ref, registryHTTPResponseToError(delete))
}
for i := 0; ; i++ {


@ -4,6 +4,9 @@ import (
"errors"
"fmt"
"net/http"
"github.com/docker/distribution/registry/api/errcode"
"github.com/sirupsen/logrus"
)
var (
@ -33,7 +36,7 @@ func httpResponseToError(res *http.Response, context string) error {
case http.StatusTooManyRequests:
return ErrTooManyRequests
case http.StatusUnauthorized:
err := handleErrorResponse(res)
err := registryHTTPResponseToError(res)
return ErrUnauthorizedForCredentials{Err: err}
default:
if context != "" {
@ -47,12 +50,47 @@ func httpResponseToError(res *http.Response, context string) error {
// registry
func registryHTTPResponseToError(res *http.Response) error {
err := handleErrorResponse(res)
if e, ok := err.(*unexpectedHTTPResponseError); ok {
// len(errs) == 0 should never be returned by handleErrorResponse; if it does, we don't modify it and let the caller report it as is.
if errs, ok := err.(errcode.Errors); ok && len(errs) > 0 {
// The docker/distribution registry implementation almost never returns
// more than one error in the HTTP body; it seems there is only one
// possible instance, where the second error reports a cleanup failure
// we don't really care about.
//
// The only _common_ case where a multi-element error is returned is
// created by the handleErrorResponse parser when OAuth authorization fails:
// the first element contains errors from a WWW-Authenticate header, the second
// element contains errors from the response body.
//
// In that case the first one is currently _slightly_ more informative (ErrorCodeUnauthorized
// for invalid tokens, ErrorCodeDenied for permission denied with a valid token
// for the first error, vs. ErrorCodeUnauthorized for both cases for the second error.)
//
// Also, docker/docker similarly only logs the other errors and returns the
// first one.
if len(errs) > 1 {
logrus.Debugf("Discarding non-primary errors:")
for _, err := range errs[1:] {
logrus.Debugf(" %s", err.Error())
}
}
err = errs[0]
}
switch e := err.(type) {
case *unexpectedHTTPResponseError:
response := string(e.Response)
if len(response) > 50 {
response = response[:50] + "..."
}
err = fmt.Errorf("StatusCode: %d, %s", e.StatusCode, response)
// %.0w makes e visible to error.Unwrap() without including any text
err = fmt.Errorf("StatusCode: %d, %s%.0w", e.StatusCode, response, e)
case errcode.Error:
// e.Error() is fmt.Sprintf("%s: %s", e.Code.Error(), e.Message), which is usually
// rather redundant. So reword it without using e.Code.Error() if e.Message is the default.
if e.Message == e.Code.Message() {
// %.0w makes e visible to error.Unwrap() without including any text
err = fmt.Errorf("%s%.0w", e.Message, e)
}
}
return err
}


@ -17,6 +17,17 @@ import (
"github.com/sirupsen/logrus"
)
// ImageNotFoundError is used when the OCI structure, in principle, exists and seems valid enough,
// but nothing matches the “image” part of the provided reference.
type ImageNotFoundError struct {
ref ociArchiveReference
// We may make members public, or add methods, in the future.
}
func (e ImageNotFoundError) Error() string {
return fmt.Sprintf("no descriptor found for reference %q", e.ref.image)
}
type ociArchiveImageSource struct {
impl.Compat
@ -35,6 +46,10 @@ func newImageSource(ctx context.Context, sys *types.SystemContext, ref ociArchiv
unpackedSrc, err := tempDirRef.ociRefExtracted.NewImageSource(ctx, sys)
if err != nil {
var notFound ocilayout.ImageNotFoundError
if errors.As(err, &notFound) {
err = ImageNotFoundError{ref: ref}
}
if err := tempDirRef.deleteTempDir(); err != nil {
return nil, fmt.Errorf("deleting temp directory %q: %w", tempDirRef.tempDirectory, err)
}


@ -21,6 +21,17 @@ import (
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// ImageNotFoundError is used when the OCI structure, in principle, exists and seems valid enough,
// but nothing matches the “image” part of the provided reference.
type ImageNotFoundError struct {
ref ociReference
// We may make members public, or add methods, in the future.
}
func (e ImageNotFoundError) Error() string {
return fmt.Sprintf("no descriptor found for reference %q", e.ref.image)
}
type ociImageSource struct {
impl.Compat
impl.PropertyMethodsInitialize


@ -205,7 +205,7 @@ func (ref ociReference) getManifestDescriptor() (imgspecv1.Descriptor, error) {
}
}
if d == nil {
return imgspecv1.Descriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.image)
return imgspecv1.Descriptor{}, ImageNotFoundError{ref}
}
return *d, nil
}


@ -8,10 +8,10 @@ const (
// VersionMinor is for functionality in a backwards-compatible manner
VersionMinor = 23
// VersionPatch is for backwards-compatible bug fixes
VersionPatch = 0
VersionPatch = 1
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = ""
VersionDev = "-dev"
)
// Version is the specification version that the package types support.


@ -1 +1 @@
1.43.0
1.43.1-dev


@ -66,12 +66,12 @@ type Container struct {
Flags map[string]interface{} `json:"flags,omitempty"`
}
// ContainerStore provides bookkeeping for information about Containers.
type ContainerStore interface {
FileBasedStore
MetadataStore
ContainerBigDataStore
FlaggableStore
// rwContainerStore provides bookkeeping for information about Containers.
type rwContainerStore interface {
fileBasedStore
metadataStore
containerBigDataStore
flaggableStore
// Create creates a container that has a specified ID (or generates a
// random one if an empty value is supplied) and optional names,
@ -221,7 +221,7 @@ func (r *containerStore) Load() error {
}
}
r.containers = containers
r.idindex = truncindex.NewTruncIndex(idlist)
r.idindex = truncindex.NewTruncIndex(idlist) // Invalid values in idlist are ignored: they are not a reason to refuse processing the whole store.
r.byid = ids
r.bylayer = layers
r.byname = names
@ -243,11 +243,13 @@ func (r *containerStore) Save() error {
if err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(rpath, jdata, 0600)
if err := ioutils.AtomicWriteFile(rpath, jdata, 0600); err != nil {
return err
}
return r.Touch()
}
func newContainerStore(dir string) (ContainerStore, error) {
func newContainerStore(dir string) (rwContainerStore, error) {
if err := os.MkdirAll(dir, 0700); err != nil {
return nil, err
}
@ -255,8 +257,6 @@ func newContainerStore(dir string) (ContainerStore, error) {
if err != nil {
return nil, err
}
lockfile.Lock()
defer lockfile.Unlock()
cstore := containerStore{
lockfile: lockfile,
dir: dir,
@ -265,6 +265,8 @@ func newContainerStore(dir string) (ContainerStore, error) {
bylayer: make(map[string]*Container),
byname: make(map[string]*Container),
}
cstore.Lock()
defer cstore.Unlock()
if err := cstore.Load(); err != nil {
return nil, err
}
@ -354,7 +356,9 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
}
r.containers = append(r.containers, container)
r.byid[id] = container
r.idindex.Add(id)
// This can only fail on duplicate IDs, which shouldn't happen — and in that case the index is already in the desired state anyway.
// Implementing recovery from an unlikely and unimportant failure here would be too risky.
_ = r.idindex.Add(id)
r.bylayer[layer] = container
for _, name := range names {
r.byname[name] = container
@ -434,7 +438,9 @@ func (r *containerStore) Delete(id string) error {
}
}
delete(r.byid, id)
r.idindex.Delete(id)
// This can only fail if the ID is already missing, which shouldn't happen — and in that case the index is already in the desired state anyway.
// The store's Delete method is used on various paths to recover from failures, so this should be robust against partially missing data.
_ = r.idindex.Delete(id)
delete(r.bylayer, container.LayerID)
for _, name := range container.Names {
delete(r.byname, name)
@ -617,10 +623,6 @@ func (r *containerStore) Lock() {
r.lockfile.Lock()
}
func (r *containerStore) RecursiveLock() {
r.lockfile.RecursiveLock()
}
func (r *containerStore) RLock() {
r.lockfile.RLock()
}


@ -0,0 +1,216 @@
package storage
import (
"io"
"time"
drivers "github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/archive"
digest "github.com/opencontainers/go-digest"
)
// The type definitions in this file exist ONLY to maintain formal API compatibility.
// DO NOT ADD ANY NEW METHODS TO THESE INTERFACES.
// ROFileBasedStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ROFileBasedStore interface {
Locker
Load() error
ReloadIfChanged() error
}
// RWFileBasedStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type RWFileBasedStore interface {
Save() error
}
// FileBasedStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type FileBasedStore interface {
ROFileBasedStore
RWFileBasedStore
}
// ROMetadataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ROMetadataStore interface {
Metadata(id string) (string, error)
}
// RWMetadataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type RWMetadataStore interface {
SetMetadata(id, metadata string) error
}
// MetadataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type MetadataStore interface {
ROMetadataStore
RWMetadataStore
}
// ROBigDataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ROBigDataStore interface {
BigData(id, key string) ([]byte, error)
BigDataSize(id, key string) (int64, error)
BigDataDigest(id, key string) (digest.Digest, error)
BigDataNames(id string) ([]string, error)
}
// RWImageBigDataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type RWImageBigDataStore interface {
SetBigData(id, key string, data []byte, digestManifest func([]byte) (digest.Digest, error)) error
}
// ContainerBigDataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ContainerBigDataStore interface {
ROBigDataStore
SetBigData(id, key string, data []byte) error
}
// ROLayerBigDataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ROLayerBigDataStore interface {
BigData(id, key string) (io.ReadCloser, error)
BigDataNames(id string) ([]string, error)
}
// RWLayerBigDataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type RWLayerBigDataStore interface {
SetBigData(id, key string, data io.Reader) error
}
// LayerBigDataStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type LayerBigDataStore interface {
ROLayerBigDataStore
RWLayerBigDataStore
}
// FlaggableStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type FlaggableStore interface {
ClearFlag(id string, flag string) error
SetFlag(id string, flag string, value interface{}) error
}
// ContainerStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ContainerStore interface {
FileBasedStore
MetadataStore
ContainerBigDataStore
FlaggableStore
Create(id string, names []string, image, layer, metadata string, options *ContainerOptions) (*Container, error)
SetNames(id string, names []string) error
AddNames(id string, names []string) error
RemoveNames(id string, names []string) error
Get(id string) (*Container, error)
Exists(id string) bool
Delete(id string) error
Wipe() error
Lookup(name string) (string, error)
Containers() ([]Container, error)
}
// ROImageStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ROImageStore interface {
ROFileBasedStore
ROMetadataStore
ROBigDataStore
Exists(id string) bool
Get(id string) (*Image, error)
Lookup(name string) (string, error)
Images() ([]Image, error)
ByDigest(d digest.Digest) ([]*Image, error)
}
// ImageStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ImageStore interface {
ROImageStore
RWFileBasedStore
RWMetadataStore
RWImageBigDataStore
FlaggableStore
Create(id string, names []string, layer, metadata string, created time.Time, searchableDigest digest.Digest) (*Image, error)
SetNames(id string, names []string) error
AddNames(id string, names []string) error
RemoveNames(id string, names []string) error
Delete(id string) error
Wipe() error
}
// ROLayerStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type ROLayerStore interface {
ROFileBasedStore
ROMetadataStore
ROLayerBigDataStore
Exists(id string) bool
Get(id string) (*Layer, error)
Status() ([][2]string, error)
Changes(from, to string) ([]archive.Change, error)
Diff(from, to string, options *DiffOptions) (io.ReadCloser, error)
DiffSize(from, to string) (int64, error)
Size(name string) (int64, error)
Lookup(name string) (string, error)
LayersByCompressedDigest(d digest.Digest) ([]Layer, error)
LayersByUncompressedDigest(d digest.Digest) ([]Layer, error)
Layers() ([]Layer, error)
}
// LayerStore is a deprecated interface with no documented way to use it from callers outside of c/storage.
//
// Deprecated: There is no way to use this from any external user of c/storage to invoke c/storage functionality.
type LayerStore interface {
ROLayerStore
RWFileBasedStore
RWMetadataStore
FlaggableStore
RWLayerBigDataStore
Create(id string, parent *Layer, names []string, mountLabel string, options map[string]string, moreOptions *LayerOptions, writeable bool) (*Layer, error)
CreateWithFlags(id string, parent *Layer, names []string, mountLabel string, options map[string]string, moreOptions *LayerOptions, writeable bool, flags map[string]interface{}) (layer *Layer, err error)
Put(id string, parent *Layer, names []string, mountLabel string, options map[string]string, moreOptions *LayerOptions, writeable bool, flags map[string]interface{}, diff io.Reader) (*Layer, int64, error)
SetNames(id string, names []string) error
AddNames(id string, names []string) error
RemoveNames(id string, names []string) error
Delete(id string) error
Wipe() error
Mount(id string, options drivers.MountOpts) (string, error)
Unmount(id string, force bool) (bool, error)
Mounted(id string) (int, error)
ParentOwners(id string) (uids, gids []int, err error)
ApplyDiff(to string, diff io.Reader) (int64, error)
ApplyDiffWithDiffer(to string, options *drivers.ApplyDiffOpts, differ drivers.Differ) (*drivers.DriverWithDifferOutput, error)
CleanupStagingDirectory(stagingDirectory string) error
ApplyDiffFromStagingDirectory(id, stagingDirectory string, diffOutput *drivers.DriverWithDifferOutput, options *drivers.ApplyDiffOpts) error
DifferTarget(id string) (string, error)
LoadLocked() error
PutAdditionalLayer(id string, parentLayer *Layer, names []string, aLayer drivers.AdditionalLayer) (layer *Layer, err error)
}

View File

@ -67,7 +67,7 @@ var (
const defaultPerms = os.FileMode(0555)
func init() {
graphdriver.Register("aufs", Init)
graphdriver.MustRegister("aufs", Init)
}
// Driver contains information about the filesystem mounted.

View File

@ -42,7 +42,7 @@ import (
const defaultPerms = os.FileMode(0555)
func init() {
graphdriver.Register("btrfs", Init)
graphdriver.MustRegister("btrfs", Init)
}
type btrfsOptions struct {

View File

@ -115,7 +115,7 @@ func NewNaiveLayerIDMapUpdater(driver ProtoDriver) LayerIDMapUpdater {
// on-disk owner UIDs and GIDs which are "host" values in the first map with
// UIDs and GIDs for "host" values from the second map which correspond to the
// same "container" IDs.
func (n *naiveLayerIDMapUpdater) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMappings, mountLabel string) error {
func (n *naiveLayerIDMapUpdater) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMappings, mountLabel string) (retErr error) {
driver := n.ProtoDriver
options := MountOpts{
MountLabel: mountLabel,
@ -124,9 +124,7 @@ func (n *naiveLayerIDMapUpdater) UpdateLayerIDMap(id string, toContainer, toHost
if err != nil {
return err
}
defer func() {
driver.Put(id)
}()
defer driverPut(driver, id, &retErr)
return ChownPathByMaps(layerFs, toContainer, toHost)
}

View File

@ -83,7 +83,7 @@ func (c *platformChowner) LChown(path string, info os.FileInfo, toHost, toContai
uid, gid = mappedPair.UID, mappedPair.GID
}
if uid != int(st.Uid) || gid != int(st.Gid) {
cap, err := system.Lgetxattr(path, "security.capability")
capability, err := system.Lgetxattr(path, "security.capability")
if err != nil && !errors.Is(err, system.EOPNOTSUPP) && err != system.ErrNotSupportedPlatform {
return fmt.Errorf("%s: %w", os.Args[0], err)
}
@ -98,8 +98,8 @@ func (c *platformChowner) LChown(path string, info os.FileInfo, toHost, toContai
return fmt.Errorf("%s: %w", os.Args[0], err)
}
}
if cap != nil {
if err := system.Lsetxattr(path, "security.capability", cap, 0); err != nil {
if capability != nil {
if err := system.Lsetxattr(path, "security.capability", capability, 0); err != nil {
return fmt.Errorf("%s: %w", os.Args[0], err)
}
}

View File

@ -1,6 +1,7 @@
//go:build !linux || !cgo
// +build !linux !cgo
package copy
package copy //nolint: predeclared
import (
"io"
@ -24,7 +25,7 @@ func DirCopy(srcDir, dstDir string, _ Mode, _ bool) error {
}
// CopyRegularToFile copies the content of a file to another
func CopyRegularToFile(srcPath string, dstFile *os.File, fileinfo os.FileInfo, copyWithFileRange, copyWithFileClone *bool) error {
func CopyRegularToFile(srcPath string, dstFile *os.File, fileinfo os.FileInfo, copyWithFileRange, copyWithFileClone *bool) error { //nolint: revive // "func name will be used as copy.CopyRegularToFile by other packages, and that stutters"
f, err := os.Open(srcPath)
if err != nil {
return err
@ -35,6 +36,6 @@ func CopyRegularToFile(srcPath string, dstFile *os.File, fileinfo os.FileInfo, c
}
// CopyRegular copies the content of a file to another
func CopyRegular(srcPath, dstPath string, fileinfo os.FileInfo, copyWithFileRange, copyWithFileClone *bool) error {
func CopyRegular(srcPath, dstPath string, fileinfo os.FileInfo, copyWithFileRange, copyWithFileClone *bool) error { //nolint:revive // "func name will be used as copy.CopyRegular by other packages, and that stutters"
return chrootarchive.NewArchiver(nil).CopyWithTar(srcPath, dstPath)
}

View File

@ -23,7 +23,7 @@ import (
const defaultPerms = os.FileMode(0555)
func init() {
graphdriver.Register("devicemapper", Init)
graphdriver.MustRegister("devicemapper", Init)
}
// Driver contains the device set mounted and the home directory

View File

@ -53,7 +53,7 @@ type MountOpts struct {
// Mount label is the MAC Labels to assign to mount point (SELINUX)
MountLabel string
// UidMaps & GidMaps are the User Namespace mappings to be assigned to content in the mount point
UidMaps []idtools.IDMap // nolint: golint
UidMaps []idtools.IDMap //nolint: golint,revive
GidMaps []idtools.IDMap //nolint: golint
Options []string
@ -279,6 +279,14 @@ func init() {
drivers = make(map[string]InitFunc)
}
// MustRegister registers an InitFunc for the driver, or panics.
// It is suitable for packages' init() sections.
func MustRegister(name string, initFunc InitFunc) {
if err := Register(name, initFunc); err != nil {
panic(fmt.Sprintf("failed to register containers/storage graph driver %q: %v", name, err))
}
}
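The `Must`-prefixed wrapper added here follows a common Go convention: a fallible `Register` paired with a panicking variant intended for `init()` use. A minimal, self-contained sketch of that pattern, with a hypothetical mini registry (the names and signatures below are illustrative, not the real graphdriver API):

```go
package main

import "fmt"

// drivers is a hypothetical mini registry standing in for the real
// graphdriver registry; only the shape of the pattern matters here.
var drivers = map[string]func() error{}

// Register fails on duplicate names instead of panicking.
func Register(name string, initFunc func() error) error {
	if _, exists := drivers[name]; exists {
		return fmt.Errorf("name already registered: %s", name)
	}
	drivers[name] = initFunc
	return nil
}

// MustRegister panics on error. That is appropriate inside package init()
// sections, where a duplicate registration is a programming bug rather than
// a recoverable runtime condition.
func MustRegister(name string, initFunc func() error) {
	if err := Register(name, initFunc); err != nil {
		panic(fmt.Sprintf("failed to register graph driver %q: %v", name, err))
	}
}

func main() {
	MustRegister("vfs", func() error { return nil })
	fmt.Println(len(drivers)) // 1
}
```

Switching the drivers' `init()` functions from `Register` to `MustRegister` surfaces a duplicate registration immediately at startup instead of silently ignoring the returned error.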
// Register registers an InitFunc for the driver.
func Register(name string, initFunc InitFunc) error {
if _, exists := drivers[name]; exists {
@ -405,3 +413,21 @@ func scanPriorDrivers(root string) map[string]bool {
}
return driversMap
}
// driverPut is driver.Put, but errors are handled either by updating mainErr or just logging.
// Typical usage:
//
// func …(…) (err error) {
// …
// defer driverPut(driver, id, &err)
// }
func driverPut(driver ProtoDriver, id string, mainErr *error) {
if err := driver.Put(id); err != nil {
err = fmt.Errorf("unmounting layer %s: %w", id, err)
if *mainErr == nil {
*mainErr = err
} else {
logrus.Error(err)
}
}
}

View File

@ -65,7 +65,7 @@ func (gdw *NaiveDiffDriver) Diff(id string, idMappings *idtools.IDMappings, pare
defer func() {
if err != nil {
driver.Put(id)
driverPut(driver, id, &err)
}
}()
@ -80,7 +80,7 @@ func (gdw *NaiveDiffDriver) Diff(id string, idMappings *idtools.IDMappings, pare
}
return ioutils.NewReadCloserWrapper(archive, func() error {
err := archive.Close()
driver.Put(id)
driverPut(driver, id, &err)
return err
}), nil
}
@ -90,7 +90,7 @@ func (gdw *NaiveDiffDriver) Diff(id string, idMappings *idtools.IDMappings, pare
if err != nil {
return nil, err
}
defer driver.Put(parent)
defer driverPut(driver, parent, &err)
changes, err := archive.ChangesDirs(layerFs, idMappings, parentFs, parentMappings)
if err != nil {
@ -104,7 +104,7 @@ func (gdw *NaiveDiffDriver) Diff(id string, idMappings *idtools.IDMappings, pare
return ioutils.NewReadCloserWrapper(archive, func() error {
err := archive.Close()
driver.Put(id)
driverPut(driver, id, &err)
// NaiveDiffDriver compares file metadata with parent layers. Parent layers
// are extracted from tar's with full second precision on modified time.
@ -117,7 +117,7 @@ func (gdw *NaiveDiffDriver) Diff(id string, idMappings *idtools.IDMappings, pare
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
func (gdw *NaiveDiffDriver) Changes(id string, idMappings *idtools.IDMappings, parent string, parentMappings *idtools.IDMappings, mountLabel string) ([]archive.Change, error) {
func (gdw *NaiveDiffDriver) Changes(id string, idMappings *idtools.IDMappings, parent string, parentMappings *idtools.IDMappings, mountLabel string) (_ []archive.Change, retErr error) {
driver := gdw.ProtoDriver
if idMappings == nil {
@ -134,7 +134,7 @@ func (gdw *NaiveDiffDriver) Changes(id string, idMappings *idtools.IDMappings, p
if err != nil {
return nil, err
}
defer driver.Put(id)
defer driverPut(driver, id, &retErr)
parentFs := ""
@ -147,7 +147,7 @@ func (gdw *NaiveDiffDriver) Changes(id string, idMappings *idtools.IDMappings, p
if err != nil {
return nil, err
}
defer driver.Put(parent)
defer driverPut(driver, parent, &retErr)
}
return archive.ChangesDirs(layerFs, idMappings, parentFs, parentMappings)
@ -171,7 +171,7 @@ func (gdw *NaiveDiffDriver) ApplyDiff(id, parent string, options ApplyDiffOpts)
if err != nil {
return
}
defer driver.Put(id)
defer driverPut(driver, id, &err)
defaultForceMask := os.FileMode(0700)
var forceMask *os.FileMode = nil
@ -224,7 +224,7 @@ func (gdw *NaiveDiffDriver) DiffSize(id string, idMappings *idtools.IDMappings,
if err != nil {
return
}
defer driver.Put(id)
defer driverPut(driver, id, &err)
return archive.ChangesSize(layerFs, changes), nil
}

View File

@ -140,8 +140,8 @@ var (
)
func init() {
graphdriver.Register("overlay", Init)
graphdriver.Register("overlay2", Init)
graphdriver.MustRegister("overlay", Init)
graphdriver.MustRegister("overlay2", Init)
}
func hasMetacopyOption(opts []string) bool {
@ -309,9 +309,11 @@ func Init(home string, options graphdriver.Options) (graphdriver.Driver, error)
if err != nil {
return nil, err
}
if fsName, ok := graphdriver.FsNames[fsMagic]; ok {
backingFs = fsName
fsName, ok := graphdriver.FsNames[fsMagic]
if !ok {
return nil, fmt.Errorf("filesystem type %#x reported for %s is not supported with 'overlay': %w", fsMagic, filepath.Dir(home), graphdriver.ErrIncompatibleFS)
}
backingFs = fsName
runhome := filepath.Join(options.RunRoot, filepath.Base(home))
rootUID, rootGID, err := idtools.GetRootUIDGID(options.UIDMaps, options.GIDMaps)

View File

@ -28,7 +28,7 @@ var (
const defaultPerms = os.FileMode(0555)
func init() {
graphdriver.Register("vfs", Init)
graphdriver.MustRegister("vfs", Init)
}
// Init returns a new VFS driver.
@ -98,7 +98,7 @@ func (d *Driver) Status() [][2]string {
// Metadata is used for implementing the graphdriver.ProtoDriver interface. VFS does not currently have any meta data.
func (d *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
return nil, nil //nolint: nilnil
}
// Cleanup is used to implement graphdriver.ProtoDriver. There is no cleanup required for this driver.

View File

@ -53,7 +53,7 @@ var (
// init registers the windows graph drivers to the register.
func init() {
graphdriver.Register("windowsfilter", InitFilter)
graphdriver.MustRegister("windowsfilter", InitFilter)
// DOCKER_WINDOWSFILTER_NOREEXEC allows for inline processing which makes
// debugging issues in the re-exec codepath significantly easier.
if os.Getenv("DOCKER_WINDOWSFILTER_NOREEXEC") != "" {

View File

@ -33,7 +33,7 @@ type zfsOptions struct {
const defaultPerms = os.FileMode(0555)
func init() {
graphdriver.Register("zfs", Init)
graphdriver.MustRegister("zfs", Init)
}
// Logger returns a zfs logger implementation.

View File

@ -94,11 +94,11 @@ type Image struct {
Flags map[string]interface{} `json:"flags,omitempty"`
}
// ROImageStore provides bookkeeping for information about Images.
type ROImageStore interface {
ROFileBasedStore
ROMetadataStore
ROBigDataStore
// roImageStore provides bookkeeping for information about Images.
type roImageStore interface {
roFileBasedStore
roMetadataStore
roBigDataStore
// Exists checks if there is an image with the given ID or name.
Exists(id string) bool
@ -106,10 +106,6 @@ type ROImageStore interface {
// Get retrieves information about an image given an ID or name.
Get(id string) (*Image, error)
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
// Images returns a slice enumerating the known images.
Images() ([]Image, error)
@ -120,13 +116,13 @@ type ROImageStore interface {
ByDigest(d digest.Digest) ([]*Image, error)
}
// ImageStore provides bookkeeping for information about Images.
type ImageStore interface {
ROImageStore
RWFileBasedStore
RWMetadataStore
RWImageBigDataStore
FlaggableStore
// rwImageStore provides bookkeeping for information about Images.
type rwImageStore interface {
roImageStore
rwFileBasedStore
rwMetadataStore
rwImageBigDataStore
flaggableStore
// Create creates an image that has a specified ID (or a random one) and
// optional names, using the specified layer as its topmost (hopefully
@ -299,7 +295,7 @@ func (r *imageStore) Load() error {
return ErrDuplicateImageNames
}
r.images = images
r.idindex = truncindex.NewTruncIndex(idlist)
r.idindex = truncindex.NewTruncIndex(idlist) // Invalid values in idlist are ignored: they are not a reason to refuse processing the whole store.
r.byid = ids
r.byname = names
r.bydigest = digests
@ -324,11 +320,13 @@ func (r *imageStore) Save() error {
if err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(rpath, jdata, 0600)
if err := ioutils.AtomicWriteFile(rpath, jdata, 0600); err != nil {
return err
}
return r.Touch()
}
func newImageStore(dir string) (ImageStore, error) {
func newImageStore(dir string) (rwImageStore, error) {
if err := os.MkdirAll(dir, 0700); err != nil {
return nil, err
}
@ -336,8 +334,6 @@ func newImageStore(dir string) (ImageStore, error) {
if err != nil {
return nil, err
}
lockfile.Lock()
defer lockfile.Unlock()
istore := imageStore{
lockfile: lockfile,
dir: dir,
@ -346,19 +342,19 @@ func newImageStore(dir string) (ImageStore, error) {
byname: make(map[string]*Image),
bydigest: make(map[digest.Digest][]*Image),
}
istore.Lock()
defer istore.Unlock()
if err := istore.Load(); err != nil {
return nil, err
}
return &istore, nil
}
func newROImageStore(dir string) (ROImageStore, error) {
func newROImageStore(dir string) (roImageStore, error) {
lockfile, err := GetROLockfile(filepath.Join(dir, "images.lock"))
if err != nil {
return nil, err
}
lockfile.RLock()
defer lockfile.Unlock()
istore := imageStore{
lockfile: lockfile,
dir: dir,
@ -367,6 +363,8 @@ func newROImageStore(dir string) (ROImageStore, error) {
byname: make(map[string]*Image),
bydigest: make(map[digest.Digest][]*Image),
}
istore.RLock()
defer istore.Unlock()
if err := istore.Load(); err != nil {
return nil, err
}
@ -455,7 +453,9 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string, c
return nil, fmt.Errorf("validating digests for new image: %w", err)
}
r.images = append(r.images, image)
r.idindex.Add(id)
// This can only fail on duplicate IDs, which shouldn't happen — and in that case the index is already in the desired state anyway.
// Implementing recovery from an unlikely and unimportant failure here would be too risky.
_ = r.idindex.Add(id)
r.byid[id] = image
for _, name := range names {
r.byname[name] = image
@ -572,7 +572,9 @@ func (r *imageStore) Delete(id string) error {
}
}
delete(r.byid, id)
r.idindex.Delete(id)
// This can only fail if the ID is already missing, which shouldn't happen — and in that case the index is already in the desired state anyway.
// The store's Delete method is used on various paths to recover from failures, so this should be robust against partially missing data.
_ = r.idindex.Delete(id)
for _, name := range image.Names {
delete(r.byname, name)
}
@ -608,13 +610,6 @@ func (r *imageStore) Get(id string) (*Image, error) {
return nil, fmt.Errorf("locating image with ID %q: %w", id, ErrImageUnknown)
}
func (r *imageStore) Lookup(name string) (id string, err error) {
if image, ok := r.lookup(name); ok {
return image.ID, nil
}
return "", fmt.Errorf("locating image with ID %q: %w", id, ErrImageUnknown)
}
func (r *imageStore) Exists(id string) bool {
_, ok := r.lookup(id)
return ok
@ -798,10 +793,6 @@ func (r *imageStore) Lock() {
r.lockfile.Lock()
}
func (r *imageStore) RecursiveLock() {
r.lockfile.RecursiveLock()
}
func (r *imageStore) RLock() {
r.lockfile.RLock()
}

View File

@ -26,7 +26,7 @@ import (
multierror "github.com/hashicorp/go-multierror"
"github.com/klauspost/pgzip"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/opencontainers/selinux/go-selinux"
"github.com/sirupsen/logrus"
"github.com/vbatts/tar-split/archive/tar"
"github.com/vbatts/tar-split/tar/asm"
@ -137,13 +137,13 @@ type DiffOptions struct {
Compression *archive.Compression
}
// ROLayerStore wraps a graph driver, adding the ability to refer to layers by
// roLayerStore wraps a graph driver, adding the ability to refer to layers by
// name, and keeping track of parent-child relationships, along with a list of
// all known layers.
type ROLayerStore interface {
ROFileBasedStore
ROMetadataStore
ROLayerBigDataStore
type roLayerStore interface {
roFileBasedStore
roMetadataStore
roLayerBigDataStore
// Exists checks if a layer with the specified name or ID is known.
Exists(id string) bool
@ -177,10 +177,6 @@ type ROLayerStore interface {
// found, it returns an error.
Size(name string) (int64, error)
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
// LayersByCompressedDigest returns a slice of the layers with the
// specified compressed digest value recorded for them.
LayersByCompressedDigest(d digest.Digest) ([]Layer, error)
@ -193,15 +189,15 @@ type ROLayerStore interface {
Layers() ([]Layer, error)
}
// LayerStore wraps a graph driver, adding the ability to refer to layers by
// rwLayerStore wraps a graph driver, adding the ability to refer to layers by
// name, and keeping track of parent-child relationships, along with a list of
// all known layers.
type LayerStore interface {
ROLayerStore
RWFileBasedStore
RWMetadataStore
FlaggableStore
RWLayerBigDataStore
type rwLayerStore interface {
roLayerStore
rwFileBasedStore
rwMetadataStore
flaggableStore
rwLayerBigDataStore
// Create creates a new layer, optionally giving it a specified ID rather than
// a randomly-generated one, either inheriting data from another specified
@ -270,10 +266,6 @@ type LayerStore interface {
// DifferTarget gets the location where files are stored for the layer.
DifferTarget(id string) (string, error)
// LoadLocked wraps Load in a locked state. This means it loads the store
// and cleans-up invalid layers if needed.
LoadLocked() error
// PutAdditionalLayer creates a layer using the diff contained in the additional layer
// store.
// This API is experimental and can be changed without bumping the major version number.
@ -293,8 +285,6 @@ type layerStore struct {
bymount map[string]*Layer
bycompressedsum map[digest.Digest][]string
byuncompressedsum map[digest.Digest][]string
uidMap []idtools.IDMap
gidMap []idtools.IDMap
loadMut sync.Mutex
layerspathModified time.Time
}
@ -362,7 +352,7 @@ func (r *layerStore) Load() error {
compressedsums := make(map[digest.Digest][]string)
uncompressedsums := make(map[digest.Digest][]string)
if r.IsReadWrite() {
label.ClearLabels()
selinux.ClearLabels()
}
if err = json.Unmarshal(data, &layers); len(data) == 0 || err == nil {
idlist = make([]string, 0, len(layers))
@ -383,7 +373,7 @@ func (r *layerStore) Load() error {
uncompressedsums[layer.UncompressedDigest] = append(uncompressedsums[layer.UncompressedDigest], layer.ID)
}
if layer.MountLabel != "" {
label.ReserveLabel(layer.MountLabel)
selinux.ReserveLabel(layer.MountLabel)
}
layer.ReadOnly = !r.IsReadWrite()
}
@ -393,7 +383,7 @@ func (r *layerStore) Load() error {
return ErrDuplicateLayerNames
}
r.layers = layers
r.idindex = truncindex.NewTruncIndex(idlist)
r.idindex = truncindex.NewTruncIndex(idlist) // Invalid values in idlist are ignored: they are not a reason to refuse processing the whole store.
r.byid = ids
r.byname = names
r.bycompressedsum = compressedsums
@ -433,12 +423,6 @@ func (r *layerStore) Load() error {
return err
}
func (r *layerStore) LoadLocked() error {
r.lockfile.Lock()
defer r.lockfile.Unlock()
return r.Load()
}
func (r *layerStore) loadMounts() error {
mounts := make(map[string]*Layer)
mpath := r.mountspath()
@ -479,7 +463,6 @@ func (r *layerStore) loadMounts() error {
func (r *layerStore) Save() error {
r.mountsLockfile.Lock()
defer r.mountsLockfile.Unlock()
defer r.mountsLockfile.Touch()
if err := r.saveLayers(); err != nil {
return err
}
@ -501,8 +484,10 @@ func (r *layerStore) saveLayers() error {
if err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(rpath, jldata, 0600)
if err := ioutils.AtomicWriteFile(rpath, jldata, 0600); err != nil {
return err
}
return r.Touch()
}
func (r *layerStore) saveMounts() error {
@ -533,10 +518,13 @@ func (r *layerStore) saveMounts() error {
if err = ioutils.AtomicWriteFile(mpath, jmdata, 0600); err != nil {
return err
}
if err := r.mountsLockfile.Touch(); err != nil {
return err
}
return r.loadMounts()
}
func (s *store) newLayerStore(rundir string, layerdir string, driver drivers.Driver) (LayerStore, error) {
func (s *store) newLayerStore(rundir string, layerdir string, driver drivers.Driver) (rwLayerStore, error) {
if err := os.MkdirAll(rundir, 0700); err != nil {
return nil, err
}
@ -560,8 +548,6 @@ func (s *store) newLayerStore(rundir string, layerdir string, driver drivers.Dri
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
uidMap: copyIDMap(s.uidMap),
gidMap: copyIDMap(s.gidMap),
}
rlstore.Lock()
defer rlstore.Unlock()
@ -571,7 +557,7 @@ func (s *store) newLayerStore(rundir string, layerdir string, driver drivers.Dri
return &rlstore, nil
}
func newROLayerStore(rundir string, layerdir string, driver drivers.Driver) (ROLayerStore, error) {
func newROLayerStore(rundir string, layerdir string, driver drivers.Driver) (roLayerStore, error) {
lockfile, err := GetROLockfile(filepath.Join(layerdir, "layers.lock"))
if err != nil {
return nil, err
@ -685,7 +671,9 @@ func (r *layerStore) PutAdditionalLayer(id string, parentLayer *Layer, names []s
// TODO: check if necessary fields are filled
r.layers = append(r.layers, layer)
r.idindex.Add(id)
// This can only fail on duplicate IDs, which shouldn't happen — and in that case the index is already in the desired state anyway.
// Implementing recovery from an unlikely and unimportant failure here would be too risky.
_ = r.idindex.Add(id)
r.byid[id] = layer
for _, name := range names { // names got from the additional layer store won't be used
r.byname[name] = layer
@ -697,7 +685,9 @@ func (r *layerStore) PutAdditionalLayer(id string, parentLayer *Layer, names []s
r.byuncompressedsum[layer.UncompressedDigest] = append(r.byuncompressedsum[layer.UncompressedDigest], layer.ID)
}
if err := r.Save(); err != nil {
r.driver.Remove(id)
if err2 := r.driver.Remove(id); err2 != nil {
logrus.Errorf("While recovering from a failure to save layers, error deleting layer %#v: %v", id, err2)
}
return nil, err
}
return copyLayer(layer), nil
@ -770,7 +760,7 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
parentMappings = &idtools.IDMappings{}
}
if mountLabel != "" {
label.ReserveLabel(mountLabel)
selinux.ReserveLabel(mountLabel)
}
// Before actually creating the layer, make a persistent record of it with incompleteFlag,
@ -795,7 +785,9 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
BigDataNames: []string{},
}
r.layers = append(r.layers, layer)
r.idindex.Add(id)
// This can only fail on duplicate IDs, which shouldn't happen — and in that case the index is already in the desired state anyway.
// This is on various paths to recover from failures, so this should be robust against partially missing data.
_ = r.idindex.Add(id)
r.byid[id] = layer
for _, name := range names {
r.byname[name] = layer
@ -947,7 +939,6 @@ func (r *layerStore) Mount(id string, options drivers.MountOpts) (string, error)
return "", err
}
}
defer r.mountsLockfile.Touch()
layer, ok := r.lookup(id)
if !ok {
return "", ErrLayerUnknown
@ -998,7 +989,6 @@ func (r *layerStore) Unmount(id string, force bool) (bool, error) {
return false, err
}
}
defer r.mountsLockfile.Touch()
layer, ok := r.lookup(id)
if !ok {
layerByMount, ok := r.bymount[filepath.Clean(id)]
@ -1279,7 +1269,9 @@ func (r *layerStore) deleteInternal(id string) error {
for _, name := range layer.Names {
delete(r.byname, name)
}
r.idindex.Delete(id)
// This can only fail if the ID is already missing, which shouldn't happen — and in that case the index is already in the desired state anyway.
// The store's Delete method is used on various paths to recover from failures, so this should be robust against partially missing data.
_ = r.idindex.Delete(id)
mountLabel := layer.MountLabel
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
@ -1309,7 +1301,7 @@ func (r *layerStore) deleteInternal(id string) error {
}
}
if !found {
label.ReleaseLabel(mountLabel)
selinux.ReleaseLabel(mountLabel)
}
}
@ -1365,13 +1357,6 @@ func (r *layerStore) Delete(id string) error {
return r.Save()
}
func (r *layerStore) Lookup(name string) (id string, err error) {
if layer, ok := r.lookup(name); ok {
return layer.ID, nil
}
return "", ErrLayerUnknown
}
func (r *layerStore) Exists(id string) bool {
_, ok := r.lookup(id)
return ok
@ -1472,6 +1457,24 @@ func (r *layerStore) newFileGetter(id string) (drivers.FileGetCloser, error) {
}, nil
}
// writeCompressedData copies data from source to compressor, which is on top of pwriter.
func writeCompressedData(compressor io.WriteCloser, source io.ReadCloser) error {
defer compressor.Close()
defer source.Close()
_, err := io.Copy(compressor, source)
return err
}
// writeCompressedDataGoroutine copies data from source to compressor, which is on top of pwriter.
// All errors must be reported by updating pwriter.
func writeCompressedDataGoroutine(pwriter *io.PipeWriter, compressor io.WriteCloser, source io.ReadCloser) {
err := errors.New("internal error: unexpected panic in writeCompressedDataGoroutine")
defer func() { // Note that this is not the same as {defer pwriter.CloseWithError(err)}; we need err to be evaluated lazily.
_ = pwriter.CloseWithError(err) // CloseWithError(nil) is equivalent to Close(), always returns nil
}()
err = writeCompressedData(compressor, source)
}
func (r *layerStore) Diff(from, to string, options *DiffOptions) (io.ReadCloser, error) {
var metadata storage.Unpacker
@ -1503,12 +1506,7 @@ func (r *layerStore) Diff(from, to string, options *DiffOptions) (io.ReadCloser,
preader.Close()
return nil, err
}
go func() {
defer pwriter.Close()
defer compressor.Close()
defer rc.Close()
io.Copy(compressor, rc)
}()
go writeCompressedDataGoroutine(pwriter, compressor, rc)
return preader, nil
}
@ -1825,7 +1823,9 @@ func (r *layerStore) ApplyDiffFromStagingDirectory(id, stagingDirectory string,
}
for k, v := range diffOutput.BigData {
if err := r.SetBigData(id, k, bytes.NewReader(v)); err != nil {
r.Delete(id)
if err2 := r.Delete(id); err2 != nil {
logrus.Errorf("While recovering from a failure to set big data, error deleting layer %#v: %v", id, err2)
}
return err
}
}
@ -1895,10 +1895,6 @@ func (r *layerStore) Lock() {
r.lockfile.Lock()
}
func (r *layerStore) RecursiveLock() {
r.lockfile.RecursiveLock()
}
func (r *layerStore) RLock() {
r.lockfile.RLock()
}

View File

@ -874,7 +874,7 @@ func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error)
if err != nil || (!options.IncludeSourceDir && relFilePath == "." && d.IsDir()) {
// Error getting relative path OR we are looking
// at the source directory path. Skip in both situations.
return nil
return nil //nolint: nilerr
}
if options.IncludeSourceDir && include == "." && relFilePath != "." {

View File

@ -0,0 +1,19 @@
//go:build freebsd || darwin
// +build freebsd darwin
package archive
import (
"archive/tar"
"os"
"golang.org/x/sys/unix"
)
func handleLChmod(hdr *tar.Header, path string, hdrInfo os.FileInfo, forceMask *os.FileMode) error {
permissionsMask := hdrInfo.Mode()
if forceMask != nil {
permissionsMask = *forceMask
}
return unix.Fchmodat(unix.AT_FDCWD, path, uint32(permissionsMask), unix.AT_SYMLINK_NOFOLLOW)
}

View File

@ -1,129 +0,0 @@
//go:build freebsd
// +build freebsd
package archive
import (
"archive/tar"
"errors"
"os"
"path/filepath"
"syscall"
"unsafe"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/system"
"github.com/containers/storage/pkg/unshare"
"golang.org/x/sys/unix"
)
// fixVolumePathPrefix does platform specific processing to ensure that if
// the path being passed in is not in a volume path format, convert it to one.
func fixVolumePathPrefix(srcPath string) string {
return srcPath
}
// getWalkRoot calculates the root path when performing a TarWithOptions.
// We use a separate function as this is platform specific. On Linux, we
// can't use filepath.Join(srcPath,include) because this will clean away
// a trailing "." or "/" which may be important.
func getWalkRoot(srcPath string, include string) string {
return srcPath + string(filepath.Separator) + include
}
// CanonicalTarNameForPath returns platform-specific filepath
// to canonical posix-style path for tar archival. p is relative
// path.
func CanonicalTarNameForPath(p string) (string, error) {
return p, nil // already unix-style
}
// chmodTarEntry is used to adjust the file permissions used in tar header based
// on the platform the archival is done.
func chmodTarEntry(perm os.FileMode) os.FileMode {
return perm // noop for unix as golang APIs provide perm bits correctly
}
func setHeaderForSpecialDevice(hdr *tar.Header, name string, stat interface{}) (err error) {
s, ok := stat.(*syscall.Stat_t)
if ok {
// Currently go does not fill in the major/minors
if s.Mode&unix.S_IFBLK != 0 ||
s.Mode&unix.S_IFCHR != 0 {
hdr.Devmajor = int64(major(uint64(s.Rdev))) // nolint: unconvert
hdr.Devminor = int64(minor(uint64(s.Rdev))) // nolint: unconvert
}
}
return
}
func getInodeFromStat(stat interface{}) (inode uint64, err error) {
s, ok := stat.(*syscall.Stat_t)
if ok {
inode = s.Ino
}
return
}
func getFileUIDGID(stat interface{}) (idtools.IDPair, error) {
s, ok := stat.(*syscall.Stat_t)
if !ok {
return idtools.IDPair{}, errors.New("cannot convert stat value to syscall.Stat_t")
}
return idtools.IDPair{UID: int(s.Uid), GID: int(s.Gid)}, nil
}
func major(device uint64) uint64 {
return (device >> 8) & 0xfff
}
func minor(device uint64) uint64 {
return (device & 0xff) | ((device >> 12) & 0xfff00)
}
// handleTarTypeBlockCharFifo is an OS-specific helper function used by
// createTarFile to handle the following types of header: Block; Char; Fifo
func handleTarTypeBlockCharFifo(hdr *tar.Header, path string) error {
if unshare.IsRootless() {
// cannot create a device if running in user namespace
return nil
}
mode := uint32(hdr.Mode & 07777)
switch hdr.Typeflag {
case tar.TypeBlock:
mode |= unix.S_IFBLK
case tar.TypeChar:
mode |= unix.S_IFCHR
case tar.TypeFifo:
mode |= unix.S_IFIFO
}
return system.Mknod(path, mode, uint64(system.Mkdev(hdr.Devmajor, hdr.Devminor)))
}
func handleLChmod(hdr *tar.Header, path string, hdrInfo os.FileInfo, forceMask *os.FileMode) error {
permissionsMask := hdrInfo.Mode()
if forceMask != nil {
permissionsMask = *forceMask
}
p, err := unix.BytePtrFromString(path)
if err != nil {
return err
}
_, _, e1 := unix.Syscall(unix.SYS_LCHMOD, uintptr(unsafe.Pointer(p)), uintptr(permissionsMask), 0)
if e1 != 0 {
return e1
}
return nil
}
// Hardlink without following symlinks
func handleLLink(targetPath string, path string) error {
return unix.Linkat(unix.AT_FDCWD, targetPath, unix.AT_FDCWD, path, 0)
}

View File

@ -189,3 +189,22 @@ func GetFileOwner(path string) (uint32, uint32, uint32, error) {
}
return 0, 0, uint32(f.Mode()), nil
}
func handleLChmod(hdr *tar.Header, path string, hdrInfo os.FileInfo, forceMask *os.FileMode) error {
permissionsMask := hdrInfo.Mode()
if forceMask != nil {
permissionsMask = *forceMask
}
if hdr.Typeflag == tar.TypeLink {
if fi, err := os.Lstat(hdr.Linkname); err == nil && (fi.Mode()&os.ModeSymlink == 0) {
if err := os.Chmod(path, permissionsMask); err != nil {
return err
}
}
} else if hdr.Typeflag != tar.TypeSymlink {
if err := os.Chmod(path, permissionsMask); err != nil {
return err
}
}
return nil
}

View File

@ -1,5 +1,5 @@
//go:build !windows && !freebsd
// +build !windows,!freebsd
//go:build !windows
// +build !windows
package archive
@ -101,25 +101,6 @@ func handleTarTypeBlockCharFifo(hdr *tar.Header, path string) error {
return system.Mknod(path, mode, system.Mkdev(hdr.Devmajor, hdr.Devminor))
}
func handleLChmod(hdr *tar.Header, path string, hdrInfo os.FileInfo, forceMask *os.FileMode) error {
permissionsMask := hdrInfo.Mode()
if forceMask != nil {
permissionsMask = *forceMask
}
if hdr.Typeflag == tar.TypeLink {
if fi, err := os.Lstat(hdr.Linkname); err == nil && (fi.Mode()&os.ModeSymlink == 0) {
if err := os.Chmod(path, permissionsMask); err != nil {
return err
}
}
} else if hdr.Typeflag != tar.TypeSymlink {
if err := os.Chmod(path, permissionsMask); err != nil {
return err
}
}
return nil
}
// Hardlink without symlinks
func handleLLink(targetPath, path string) error {
// Note: on Linux, the link syscall will not follow symlinks.

View File

@ -56,7 +56,7 @@ func (change *Change) String() string {
return fmt.Sprintf("%s %s", change.Kind, change.Path)
}
// for sort.Sort
// changesByPath implements sort.Interface.
type changesByPath []Change
func (c changesByPath) Less(i, j int) bool { return c[i].Path < c[j].Path }

View File

@ -245,7 +245,9 @@ func applyLayerHandler(dest string, layer io.Reader, options *TarOptions, decomp
if err != nil {
return 0, err
}
defer system.Umask(oldmask) // ignore err, ErrNotSupportedPlatform
defer func() {
_, _ = system.Umask(oldmask) // Ignore err. This can only fail with ErrNotSupportedPlatform, in which case we would have failed above.
}()
if decompress {
layer, err = DecompressStream(layer)

View File

@ -78,7 +78,7 @@ func (f *holesFinder) ReadByte() (int64, byte, error) {
f.state = holesFinderStateFound
}
} else {
if f.reader.UnreadByte(); err != nil {
if err := f.reader.UnreadByte(); err != nil {
return 0, 0, err
}
f.state = holesFinderStateRead
@ -95,7 +95,7 @@ func (f *holesFinder) ReadByte() (int64, byte, error) {
return holeLen, 0, nil
}
if b != 0 {
if f.reader.UnreadByte(); err != nil {
if err := f.reader.UnreadByte(); err != nil {
return 0, 0, err
}
f.state = holesFinderStateRead
@ -429,7 +429,7 @@ func zstdChunkedWriterWithLevel(out io.Writer, metadata map[string]string, level
go func() {
ch <- writeZstdChunkedStream(out, metadata, r, level)
io.Copy(io.Discard, r)
_, _ = io.Copy(io.Discard, r) // Ordinarily writeZstdChunkedStream consumes all of r. If it fails, ensure the write end never blocks and eventually terminates.
r.Close()
close(ch)
}()

View File

@ -17,7 +17,7 @@ type ImageSourceSeekable interface {
}
// ErrBadRequest is returned when the request is not valid
type ErrBadRequest struct {
type ErrBadRequest struct { //nolint: errname
}
func (e ErrBadRequest) Error() string {

View File

@ -63,7 +63,7 @@ func StickRuntimeDirContents(files []string) ([]string, error) {
runtimeDir, err := GetRuntimeDir()
if err != nil {
// ignore error if runtimeDir is empty
return nil, nil
return nil, nil //nolint: nilerr
}
runtimeDir, err = filepath.Abs(runtimeDir)
if err != nil {

View File

@ -27,6 +27,13 @@ func SetDefaultOptions(opts AtomicFileWriterOptions) {
// temporary file and closing it atomically changes the temporary file to
// destination path. Writing and closing concurrently is not allowed.
func NewAtomicFileWriterWithOpts(filename string, perm os.FileMode, opts *AtomicFileWriterOptions) (io.WriteCloser, error) {
return newAtomicFileWriter(filename, perm, opts)
}
// newAtomicFileWriter returns WriteCloser so that writing to it writes to a
// temporary file and closing it atomically changes the temporary file to
// destination path. Writing and closing concurrently is not allowed.
func newAtomicFileWriter(filename string, perm os.FileMode, opts *AtomicFileWriterOptions) (*atomicFileWriter, error) {
f, err := os.CreateTemp(filepath.Dir(filename), ".tmp-"+filepath.Base(filename))
if err != nil {
return nil, err
@ -55,14 +62,14 @@ func NewAtomicFileWriter(filename string, perm os.FileMode) (io.WriteCloser, err
// AtomicWriteFile atomically writes data to a file named by filename.
func AtomicWriteFile(filename string, data []byte, perm os.FileMode) error {
f, err := NewAtomicFileWriter(filename, perm)
f, err := newAtomicFileWriter(filename, perm, nil)
if err != nil {
return err
}
n, err := f.Write(data)
if err == nil && n < len(data) {
err = io.ErrShortWrite
f.(*atomicFileWriter).writeErr = err
f.writeErr = err
}
if err1 := f.Close(); err == nil {
err = err1

View File

@ -17,10 +17,6 @@ type Locker interface {
// - tried to lock a read-only lock-file
Lock()
// Acquire a writer lock recursively, allowing for recursive acquisitions
// within the same process space.
RecursiveLock()
// Unlock the lock.
// The default unix implementation panics if:
// - unlocking an unlocked lock

View File

@ -30,7 +30,6 @@ type lockfile struct {
locktype int16
locked bool
ro bool
recursive bool
}
const lastWriterIDSize = 64 // This must be the same as len(stringid.GenerateRandomID)
@ -131,7 +130,7 @@ func createLockerForPath(path string, ro bool) (Locker, error) {
// lock locks the lockfile via FCTNL(2) based on the specified type and
// command.
func (l *lockfile) lock(lType int16, recursive bool) {
func (l *lockfile) lock(lType int16) {
lk := unix.Flock_t{
Type: lType,
Whence: int16(os.SEEK_SET),
@ -142,13 +141,7 @@ func (l *lockfile) lock(lType int16, recursive bool) {
case unix.F_RDLCK:
l.rwMutex.RLock()
case unix.F_WRLCK:
if recursive {
// NOTE: that's okay as recursive is only set in RecursiveLock(), so
// there's no need to protect against hypothetical RDLCK cases.
l.rwMutex.RLock()
} else {
l.rwMutex.Lock()
}
default:
panic(fmt.Sprintf("attempted to acquire a file lock of unrecognized type %d", lType))
}
@ -171,7 +164,6 @@ func (l *lockfile) lock(lType int16, recursive bool) {
}
l.locktype = lType
l.locked = true
l.recursive = recursive
l.counter++
}
@ -180,24 +172,13 @@ func (l *lockfile) Lock() {
if l.ro {
panic("can't take write lock on read-only lock file")
} else {
l.lock(unix.F_WRLCK, false)
}
}
// RecursiveLock locks the lockfile as a writer but allows for recursive
// acquisitions within the same process space. Note that RLock() will be called
// if it's a lockTypReader lock.
func (l *lockfile) RecursiveLock() {
if l.ro {
l.RLock()
} else {
l.lock(unix.F_WRLCK, true)
l.lock(unix.F_WRLCK)
}
}
// LockRead locks the lockfile as a reader.
func (l *lockfile) RLock() {
l.lock(unix.F_RDLCK, false)
l.lock(unix.F_RDLCK)
}
// Unlock unlocks the lockfile.
@ -224,7 +205,7 @@ func (l *lockfile) Unlock() {
// file lock.
unix.Close(int(l.fd))
}
if l.locktype == unix.F_RDLCK || l.recursive {
if l.locktype == unix.F_RDLCK {
l.rwMutex.RUnlock()
} else {
l.rwMutex.Unlock()

View File

@ -1,3 +1,4 @@
//go:build windows
// +build windows
package lockfile
@ -36,12 +37,6 @@ func (l *lockfile) Lock() {
l.locked = true
}
func (l *lockfile) RecursiveLock() {
// We don't support Windows but a recursive writer-lock in one process-space
// is really a writer lock, so just panic.
panic("not supported")
}
func (l *lockfile) RLock() {
l.mu.Lock()
l.locked = true

View File

@ -1,16 +1,30 @@
//go:build !windows
// +build !windows
package mount
import "golang.org/x/sys/unix"
import (
"time"
"golang.org/x/sys/unix"
)
func unmount(target string, flags int) error {
err := unix.Unmount(target, flags)
if err == nil || err == unix.EINVAL {
var err error
for i := 0; i < 50; i++ {
err = unix.Unmount(target, flags)
switch err {
case unix.EBUSY:
time.Sleep(50 * time.Millisecond)
continue
case unix.EINVAL, nil:
// Ignore "not mounted" error here. Note the same error
// can be returned if flags are invalid, so this code
// assumes that the flags value is always correct.
return nil
}
break
}
return &mountError{

View File

@ -43,7 +43,7 @@ func getRelease() (string, error) {
prettyNames, err := shellwords.Parse(content[1])
if err != nil {
return "", fmt.Errorf("kernel version is invalid: %s", err.Error())
return "", fmt.Errorf("kernel version is invalid: %w", err)
}
if len(prettyNames) != 2 {

View File

@ -6,7 +6,7 @@ import (
"unsafe"
)
// Used by chtimes
// maxTime is used by chtimes.
var maxTime time.Time
func init() {

View File

@ -3,7 +3,6 @@ package system
import (
"fmt"
"os"
"syscall"
"time"
"github.com/containers/storage/pkg/mount"
@ -65,7 +64,7 @@ func EnsureRemoveAll(dir string) error {
continue
}
if pe.Err != syscall.EBUSY {
if !IsEBUSY(pe.Err) {
return err
}

View File

@ -25,7 +25,7 @@ var (
// ErrAmbiguousPrefix is returned if the prefix was ambiguous
// (multiple ids for the prefix).
type ErrAmbiguousPrefix struct {
type ErrAmbiguousPrefix struct { //nolint: errname
prefix string
}
@ -42,6 +42,7 @@ type TruncIndex struct {
}
// NewTruncIndex creates a new TruncIndex and initializes with a list of IDs.
// Invalid IDs are _silently_ ignored.
func NewTruncIndex(ids []string) (idx *TruncIndex) {
idx = &TruncIndex{
ids: make(map[string]struct{}),
@ -51,7 +52,7 @@ func NewTruncIndex(ids []string) (idx *TruncIndex) {
trie: patricia.NewTrie(patricia.MaxPrefixPerNode(64)),
}
for _, id := range ids {
idx.addID(id)
_ = idx.addID(id) // Ignore invalid IDs. Duplicate IDs are not a problem.
}
return
}
@ -132,7 +133,8 @@ func (idx *TruncIndex) Get(s string) (string, error) {
func (idx *TruncIndex) Iterate(handler func(id string)) {
idx.Lock()
defer idx.Unlock()
idx.trie.Visit(func(prefix patricia.Prefix, item patricia.Item) error {
// Ignore the error from Visit: it can only fail if the provided visitor fails, and ours never does.
_ = idx.trie.Visit(func(prefix patricia.Prefix, item patricia.Item) error {
handler(string(prefix))
return nil
})

File diff suppressed because it is too large

View File

@ -336,7 +336,7 @@ func ReloadConfigurationFile(configFile string, storeOptions *StoreOptions) erro
}
} else {
if !os.IsNotExist(err) {
fmt.Printf("Failed to read %s %v\n", configFile, err.Error())
logrus.Warningf("Failed to read %s %v\n", configFile, err.Error())
return err
}
}
@ -399,7 +399,7 @@ func ReloadConfigurationFile(configFile string, storeOptions *StoreOptions) erro
if config.Storage.Options.RemapUser != "" && config.Storage.Options.RemapGroup != "" {
mappings, err := idtools.NewIDMappings(config.Storage.Options.RemapUser, config.Storage.Options.RemapGroup)
if err != nil {
fmt.Printf("Error initializing ID mappings for %s:%s %v\n", config.Storage.Options.RemapUser, config.Storage.Options.RemapGroup, err)
logrus.Warningf("Error initializing ID mappings for %s:%s %v\n", config.Storage.Options.RemapUser, config.Storage.Options.RemapGroup, err)
return err
}
storeOptions.UIDMap = mappings.UIDs()

View File

@ -193,7 +193,7 @@ func reloadConfigurationFileIfNeeded(configFile string, storeOptions *StoreOptio
fi, err := os.Stat(configFile)
if err != nil {
if !os.IsNotExist(err) {
fmt.Printf("Failed to read %s %v\n", configFile, err.Error())
logrus.Warningf("Failed to read %s %v\n", configFile, err.Error())
}
return
}

View File

@ -124,12 +124,8 @@ func parseMountedFiles(containerMount, passwdFile, groupFile string) uint32 {
// getMaxSizeFromImage returns the maximum ID used by the specified image.
// The layer stores must be already locked.
func (s *store) getMaxSizeFromImage(image *Image, passwdFile, groupFile string) (uint32, error) {
lstore, err := s.LayerStore()
if err != nil {
return 0, err
}
lstores, err := s.ROLayerStores()
func (s *store) getMaxSizeFromImage(image *Image, passwdFile, groupFile string) (_ uint32, retErr error) {
layerStores, err := s.allLayerStores()
if err != nil {
return 0, err
}
@ -140,7 +136,7 @@ func (s *store) getMaxSizeFromImage(image *Image, passwdFile, groupFile string)
layerName := image.TopLayer
outer:
for {
for _, ls := range append([]ROLayerStore{lstore}, lstores...) {
for _, ls := range layerStores {
layer, err := ls.Get(layerName)
if err != nil {
continue
@ -167,7 +163,7 @@ outer:
return 0, fmt.Errorf("cannot find layer %q", layerName)
}
rlstore, err := s.LayerStore()
rlstore, err := s.getLayerStore()
if err != nil {
return 0, err
}
@ -187,7 +183,15 @@ outer:
if err != nil {
return 0, err
}
defer rlstore.Delete(clayer.ID)
defer func() {
if err2 := rlstore.Delete(clayer.ID); err2 != nil {
if retErr == nil {
retErr = fmt.Errorf("deleting temporary layer %#v: %w", clayer.ID, err2)
} else {
logrus.Errorf("Error deleting temporary layer %#v: %v", clayer.ID, err2)
}
}
}()
mountOptions := drivers.MountOpts{
MountLabel: "",
@ -200,7 +204,15 @@ outer:
if err != nil {
return 0, err
}
defer rlstore.Unmount(clayer.ID, true)
defer func() {
if _, err2 := rlstore.Unmount(clayer.ID, true); err2 != nil {
if retErr == nil {
retErr = fmt.Errorf("unmounting temporary layer %#v: %w", clayer.ID, err2)
} else {
logrus.Errorf("Error unmounting temporary layer %#v: %v", clayer.ID, err2)
}
}
}()
userFilesSize := parseMountedFiles(mountpoint, passwdFile, groupFile)
if userFilesSize > size {

View File

@ -1,3 +1,7 @@
arch:
- amd64
- ppc64le
language: go
os:

View File

@ -1,4 +1,4 @@
MIT License
The MIT License (MIT)
Copyright (c) 2014 Klaus Post
@ -19,3 +19,4 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -104,13 +104,12 @@ Content is [Matt Mahoneys 10GB corpus](http://mattmahoney.net/dc/10gb.html). Com
Compressor | MB/sec | speedup | size | size overhead (lower=better)
------------|----------|---------|------|---------
[gzip](http://golang.org/pkg/compress/gzip) (golang) | 15.44MB/s (1 thread) | 1.0x | 4781329307 | 0%
[gzip](http://github.com/klauspost/compress/gzip) (klauspost) | 135.04MB/s (1 thread) | 8.74x | 4894858258 | +2.37%
[pgzip](https://github.com/klauspost/pgzip) (klauspost) | 1573.23MB/s| 101.9x | 4902285651 | +2.53%
[bgzf](https://godoc.org/github.com/biogo/hts/bgzf) (biogo) | 361.40MB/s | 23.4x | 4869686090 | +1.85%
[pargzip](https://godoc.org/github.com/golang/build/pargzip) (builder) | 306.01MB/s | 19.8x | 4786890417 | +0.12%
[gzip](http://golang.org/pkg/compress/gzip) (golang) | 16.91MB/s (1 thread) | 1.0x | 4781329307 | 0%
[gzip](http://github.com/klauspost/compress/gzip) (klauspost) | 127.10MB/s (1 thread) | 7.52x | 4885366806 | +2.17%
[pgzip](https://github.com/klauspost/pgzip) (klauspost) | 2085.35MB/s| 123.34x | 4886132566 | +2.19%
[pargzip](https://godoc.org/github.com/golang/build/pargzip) (builder) | 334.04MB/s | 19.76x | 4786890417 | +0.12%
pgzip also contains a [linear time compression](https://github.com/klauspost/compress#linear-time-compression-huffman-only) mode, that will allow compression at ~250MB per core per second, independent of the content.
pgzip also contains a [huffman only compression](https://github.com/klauspost/compress#linear-time-compression-huffman-only) mode, that will allow compression at ~450MB per core per second, largely independent of the content.
See the [complete sheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing) for different content types and compression settings.
@ -123,7 +122,7 @@ In the example above, the numbers are as follows on a 4 CPU machine:
Decompressor | Time | Speedup
-------------|------|--------
[gzip](http://golang.org/pkg/compress/gzip) (golang) | 1m28.85s | 0%
[pgzip](https://github.com/klauspost/pgzip) (golang) | 43.48s | 104%
[pgzip](https://github.com/klauspost/pgzip) (klauspost) | 43.48s | 104%
But wait, since gzip decompression is inherently singlethreaded (aside from CRC calculation) how can it be more than 100% faster? Because pgzip due to its design also acts as a buffer. When using unbuffered gzip, you are also waiting for io when you are decompressing. If the gzip decoder can keep up, it will always have data ready for your reader, and you will not be waiting for input to the gzip decompressor to complete.

View File

@ -513,6 +513,19 @@ func (z *Reader) Read(p []byte) (n int, err error) {
func (z *Reader) WriteTo(w io.Writer) (n int64, err error) {
total := int64(0)
avail := z.current[z.roff:]
if len(avail) != 0 {
n, err := w.Write(avail)
if n != len(avail) {
return total, io.ErrShortWrite
}
total += int64(n)
if err != nil {
return total, err
}
z.blockPool <- z.current
z.current = nil
}
for {
if z.err != nil {
return total, z.err

View File

@ -12,10 +12,12 @@ type Spec struct {
Root *Root `json:"root,omitempty"`
// Hostname configures the container's hostname.
Hostname string `json:"hostname,omitempty"`
// Domainname configures the container's domainname.
Domainname string `json:"domainname,omitempty"`
// Mounts configures additional mounts (on top of Root).
Mounts []Mount `json:"mounts,omitempty"`
// Hooks configures callbacks for container lifecycle events.
Hooks *Hooks `json:"hooks,omitempty" platform:"linux,solaris"`
Hooks *Hooks `json:"hooks,omitempty" platform:"linux,solaris,zos"`
// Annotations contains arbitrary metadata for the container.
Annotations map[string]string `json:"annotations,omitempty"`
@ -27,6 +29,8 @@ type Spec struct {
Windows *Windows `json:"windows,omitempty" platform:"windows"`
// VM specifies configuration for virtual-machine-based containers.
VM *VM `json:"vm,omitempty" platform:"vm"`
// ZOS is platform-specific configuration for z/OS based containers.
ZOS *ZOS `json:"zos,omitempty" platform:"zos"`
}
// Process contains information to start a specific application inside the container.
@ -49,7 +53,7 @@ type Process struct {
// Capabilities are Linux capabilities that are kept for the process.
Capabilities *LinuxCapabilities `json:"capabilities,omitempty" platform:"linux"`
// Rlimits specifies rlimit options to apply to the process.
Rlimits []POSIXRlimit `json:"rlimits,omitempty" platform:"linux,solaris"`
Rlimits []POSIXRlimit `json:"rlimits,omitempty" platform:"linux,solaris,zos"`
// NoNewPrivileges controls whether additional privileges could be gained by processes in the container.
NoNewPrivileges bool `json:"noNewPrivileges,omitempty" platform:"linux"`
// ApparmorProfile specifies the apparmor profile for the container.
@ -86,11 +90,11 @@ type Box struct {
// User specifies specific user (and group) information for the container process.
type User struct {
// UID is the user id.
UID uint32 `json:"uid" platform:"linux,solaris"`
UID uint32 `json:"uid" platform:"linux,solaris,zos"`
// GID is the group id.
GID uint32 `json:"gid" platform:"linux,solaris"`
GID uint32 `json:"gid" platform:"linux,solaris,zos"`
// Umask is the umask for the init process.
Umask *uint32 `json:"umask,omitempty" platform:"linux,solaris"`
Umask *uint32 `json:"umask,omitempty" platform:"linux,solaris,zos"`
// AdditionalGids are additional group ids set for the container's process.
AdditionalGids []uint32 `json:"additionalGids,omitempty" platform:"linux,solaris"`
// Username is the user name.
@ -110,11 +114,16 @@ type Mount struct {
// Destination is the absolute path where the mount will be placed in the container.
Destination string `json:"destination"`
// Type specifies the mount kind.
Type string `json:"type,omitempty" platform:"linux,solaris"`
Type string `json:"type,omitempty" platform:"linux,solaris,zos"`
// Source specifies the source path of the mount.
Source string `json:"source,omitempty"`
// Options are fstab style mount options.
Options []string `json:"options,omitempty"`
// UID/GID mappings used for changing file owners w/o calling chown, fs should support it.
// Every mount point could have its own mapping.
UIDMappings []LinuxIDMapping `json:"uidMappings,omitempty" platform:"linux"`
GIDMappings []LinuxIDMapping `json:"gidMappings,omitempty" platform:"linux"`
}
// Hook specifies a command that is run at a particular event in the lifecycle of a container
@ -178,7 +187,7 @@ type Linux struct {
// MountLabel specifies the selinux context for the mounts in the container.
MountLabel string `json:"mountLabel,omitempty"`
// IntelRdt contains Intel Resource Director Technology (RDT) information for
// handling resource constraints (e.g., L3 cache, memory bandwidth) for the container
// handling resource constraints and monitoring metrics (e.g., L3 cache, memory bandwidth) for the container
IntelRdt *LinuxIntelRdt `json:"intelRdt,omitempty"`
// Personality contains configuration for the Linux personality syscall
Personality *LinuxPersonality `json:"personality,omitempty"`
@ -250,8 +259,8 @@ type LinuxInterfacePriority struct {
Priority uint32 `json:"priority"`
}
// linuxBlockIODevice holds major:minor format supported in blkio cgroup
type linuxBlockIODevice struct {
// LinuxBlockIODevice holds major:minor format supported in blkio cgroup
type LinuxBlockIODevice struct {
// Major is the device's major number.
Major int64 `json:"major"`
// Minor is the device's minor number.
@ -260,7 +269,7 @@ type linuxBlockIODevice struct {
// LinuxWeightDevice struct holds a `major:minor weight` pair for weightDevice
type LinuxWeightDevice struct {
linuxBlockIODevice
LinuxBlockIODevice
// Weight is the bandwidth rate for the device.
Weight *uint16 `json:"weight,omitempty"`
// LeafWeight is the bandwidth rate for the device while competing with the cgroup's child cgroups, CFQ scheduler only
@ -269,7 +278,7 @@ type LinuxWeightDevice struct {
// LinuxThrottleDevice struct holds a `major:minor rate_per_second` pair
type LinuxThrottleDevice struct {
linuxBlockIODevice
LinuxBlockIODevice
// Rate is the IO rate limit per cgroup per device
Rate uint64 `json:"rate"`
}
@ -328,6 +337,8 @@ type LinuxCPU struct {
Cpus string `json:"cpus,omitempty"`
// List of memory nodes in the cpuset. Default is to use any available memory node.
Mems string `json:"mems,omitempty"`
// cgroups are configured with minimum weight, 0: default behavior, 1: SCHED_IDLE.
Idle *int64 `json:"idle,omitempty"`
}
// LinuxPids for Linux cgroup 'pids' resource management (Linux 4.3)
@ -522,11 +533,21 @@ type WindowsMemoryResources struct {
// WindowsCPUResources contains CPU resource management settings.
type WindowsCPUResources struct {
// Number of CPUs available to the container.
// Count is the number of CPUs available to the container. It represents the
// fraction of the configured processor `count` in a container in relation
// to the processors available in the host. The fraction ultimately
// determines the portion of processor cycles that the threads in a
// container can use during each scheduling interval, as the number of
// cycles per 10,000 cycles.
Count *uint64 `json:"count,omitempty"`
// CPU shares (relative weight to other containers with cpu shares).
// Shares limits the share of processor time given to the container relative
// to other workloads on the processor. The processor `shares` (`weight` at
// the platform level) is a value between 0 and 10000.
Shares *uint16 `json:"shares,omitempty"`
// Specifies the portion of processor cycles that this container can use as a percentage times 100.
// Maximum determines the portion of processor cycles that the threads in a
// container can use during each scheduling interval, as the number of
// cycles per 10,000 cycles. Set processor `maximum` to a percentage times
// 100.
Maximum *uint16 `json:"maximum,omitempty"`
}
@ -613,6 +634,19 @@ type Arch string
// LinuxSeccompFlag is a flag to pass to seccomp(2).
type LinuxSeccompFlag string
const (
// LinuxSeccompFlagLog is a seccomp flag to request all returned
// actions except SECCOMP_RET_ALLOW to be logged. An administrator may
// override this filter flag by preventing specific actions from being
// logged via the /proc/sys/kernel/seccomp/actions_logged file. (since
// Linux 4.14)
LinuxSeccompFlagLog LinuxSeccompFlag = "SECCOMP_FILTER_FLAG_LOG"
// LinuxSeccompFlagSpecAllow can be used to disable Speculative Store
// Bypass mitigation. (since Linux 4.17)
LinuxSeccompFlagSpecAllow LinuxSeccompFlag = "SECCOMP_FILTER_FLAG_SPEC_ALLOW"
)
// Additional architectures permitted to be used for system calls
// By default only the native architecture of the kernel is permitted
const (
@ -683,8 +717,9 @@ type LinuxSyscall struct {
Args []LinuxSeccompArg `json:"args,omitempty"`
}
// LinuxIntelRdt has container runtime resource constraints for Intel RDT
// CAT and MBA features which introduced in Linux 4.10 and 4.12 kernel
// LinuxIntelRdt has container runtime resource constraints for Intel RDT CAT and MBA
// features and flags enabling Intel RDT CMT and MBM features.
// Intel RDT features are available in Linux 4.14 and newer kernel versions.
type LinuxIntelRdt struct {
// The identity for RDT Class of Service
ClosID string `json:"closID,omitempty"`
@ -697,4 +732,36 @@ type LinuxIntelRdt struct {
// The unit of memory bandwidth is specified in "percentages" by
// default, and in "MBps" if MBA Software Controller is enabled.
MemBwSchema string `json:"memBwSchema,omitempty"`
// EnableCMT is the flag to indicate if the Intel RDT CMT is enabled. CMT (Cache Monitoring Technology) supports monitoring of
// the last-level cache (LLC) occupancy for the container.
EnableCMT bool `json:"enableCMT,omitempty"`
// EnableMBM is the flag to indicate if the Intel RDT MBM is enabled. MBM (Memory Bandwidth Monitoring) supports monitoring of
// total and local memory bandwidth for the container.
EnableMBM bool `json:"enableMBM,omitempty"`
}
// ZOS contains platform-specific configuration for z/OS based containers.
type ZOS struct {
// Devices are a list of device nodes that are created for the container
Devices []ZOSDevice `json:"devices,omitempty"`
}
// ZOSDevice represents the mknod information for a z/OS special device file
type ZOSDevice struct {
// Path to the device.
Path string `json:"path"`
// Device type, block, char, etc.
Type string `json:"type"`
// Major is the device's major number.
Major int64 `json:"major"`
// Minor is the device's minor number.
Minor int64 `json:"minor"`
// FileMode permission bits for the device.
FileMode *os.FileMode `json:"fileMode,omitempty"`
// UID of the device.
UID *uint32 `json:"uid,omitempty"`
// Gid of the device.
GID *uint32 `json:"gid,omitempty"`
}

View File

@ -1621,6 +1621,12 @@ func (g *Generator) SetDefaultSeccompActionForce(action string) error {
return seccomp.ParseDefaultActionForce(action, g.Config.Linux.Seccomp)
}
// SetDomainName sets g.Config.Domainname
func (g *Generator) SetDomainName(domain string) {
g.initConfig()
g.Config.Domainname = domain
}
// SetSeccompArchitecture sets the supported seccomp architectures
func (g *Generator) SetSeccompArchitecture(architecture string) error {
g.initConfigLinuxSeccomp()

View File

@ -16,7 +16,7 @@ import (
"unicode"
"unicode/utf8"
"github.com/blang/semver"
"github.com/blang/semver/v4"
"github.com/hashicorp/go-multierror"
rspec "github.com/opencontainers/runtime-spec/specs-go"
osFilepath "github.com/opencontainers/runtime-tools/filepath"
@ -170,8 +170,8 @@ func (v *Validator) CheckJSONSchema() (errs error) {
func (v *Validator) CheckRoot() (errs error) {
logrus.Debugf("check root")
if v.platform == "windows" && v.spec.Windows != nil {
if v.spec.Windows.HyperV != nil {
if v.platform == "windows" {
if v.spec.Windows != nil && v.spec.Windows.HyperV != nil {
if v.spec.Root != nil {
errs = multierror.Append(errs,
specerror.NewError(specerror.RootOnHyperVNotSet, fmt.Errorf("for Hyper-V containers, Root must not be set"), rspec.Version))
@ -179,12 +179,12 @@ func (v *Validator) CheckRoot() (errs error) {
return
} else if v.spec.Root == nil {
errs = multierror.Append(errs,
specerror.NewError(specerror.RootOnWindowsRequired, fmt.Errorf("on Windows, for Windows Server Containers, this field is REQUIRED"), rspec.Version))
specerror.NewError(specerror.RootOnWindowsRequired, fmt.Errorf("on Windows, for Windows Server Containers, Root is REQUIRED"), rspec.Version))
return
}
} else if v.platform != "windows" && v.spec.Root == nil {
} else if v.spec.Root == nil {
errs = multierror.Append(errs,
specerror.NewError(specerror.RootOnNonWindowsRequired, fmt.Errorf("on all other platforms, this field is REQUIRED"), rspec.Version))
specerror.NewError(specerror.RootOnNonWindowsRequired, fmt.Errorf("on all other platforms, Root is REQUIRED"), rspec.Version))
return
}
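The reshuffled conditions above fix a nil-guard gap: previously a Windows platform with a nil spec.Windows fell through to the non-Windows branch instead of being required to set Root. A reduced sketch of the corrected decision table (the function and its booleans are simplified stand-ins for the validator's checks):

```go
package main

import (
	"errors"
	"fmt"
)

// checkRoot condenses the validator's corrected branching: on Windows,
// Hyper-V containers must not set Root, Windows Server Containers
// must set it, and every other platform requires Root.
func checkRoot(platform string, hyperV, hasRoot bool) error {
	if platform == "windows" {
		if hyperV {
			if hasRoot {
				return errors.New("for Hyper-V containers, Root must not be set")
			}
			return nil
		}
		if !hasRoot {
			return errors.New("on Windows, for Windows Server Containers, Root is REQUIRED")
		}
		return nil
	}
	if !hasRoot {
		return errors.New("on all other platforms, Root is REQUIRED")
	}
	return nil
}

func main() {
	fmt.Println(checkRoot("windows", true, true))  // Hyper-V with Root set: error
	fmt.Println(checkRoot("linux", false, false))  // non-Windows without Root: error
}
```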

View File

@ -39,9 +39,9 @@ github.com/VividCortex/ewma
# github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d
## explicit
github.com/acarl005/stripansi
# github.com/blang/semver v3.5.1+incompatible
## explicit
github.com/blang/semver
# github.com/blang/semver/v4 v4.0.0
## explicit; go 1.14
github.com/blang/semver/v4
# github.com/chzyer/readline v1.5.1
## explicit; go 1.15
github.com/chzyer/readline
@ -72,7 +72,7 @@ github.com/containernetworking/cni/pkg/version
# github.com/containernetworking/plugins v1.1.1
## explicit; go 1.17
github.com/containernetworking/plugins/pkg/ns
# github.com/containers/image/v5 v5.23.0
# github.com/containers/image/v5 v5.23.1-0.20221013202101-87afcefe9766
## explicit; go 1.17
github.com/containers/image/v5/copy
github.com/containers/image/v5/directory
@ -151,7 +151,7 @@ github.com/containers/ocicrypt/keywrap/pkcs7
github.com/containers/ocicrypt/spec
github.com/containers/ocicrypt/utils
github.com/containers/ocicrypt/utils/keyprovider
# github.com/containers/storage v1.43.0
# github.com/containers/storage v1.43.1-0.20221014072257-a144fee6f51c
## explicit; go 1.16
github.com/containers/storage
github.com/containers/storage/drivers
@ -321,7 +321,7 @@ github.com/klauspost/compress/internal/cpuinfo
github.com/klauspost/compress/internal/snapref
github.com/klauspost/compress/zstd
github.com/klauspost/compress/zstd/internal/xxhash
# github.com/klauspost/pgzip v1.2.5
# github.com/klauspost/pgzip v1.2.6-0.20220930104621-17e8dac29df8
## explicit
github.com/klauspost/pgzip
# github.com/kr/fs v0.1.0
@ -407,10 +407,10 @@ github.com/opencontainers/runc/libcontainer/devices
github.com/opencontainers/runc/libcontainer/user
github.com/opencontainers/runc/libcontainer/userns
github.com/opencontainers/runc/libcontainer/utils
# github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417
# github.com/opencontainers/runtime-spec v1.0.3-0.20220825212826-86290f6a00fb
## explicit
github.com/opencontainers/runtime-spec/specs-go
# github.com/opencontainers/runtime-tools v0.9.1-0.20220714195903-17b3287fafb7
# github.com/opencontainers/runtime-tools v0.9.1-0.20221014010322-58c91d646d86
## explicit; go 1.16
github.com/opencontainers/runtime-tools/error
github.com/opencontainers/runtime-tools/filepath