Compare commits

...

23 Commits

Author SHA1 Message Date
imeoer d7924384ac
Merge pull request #644 from DataDog/fricounet/referrer-detect-shared-layer
[mountRemote] include intermediate snapshots in mount slice during referrer detection
2025-06-13 16:45:27 +08:00
Baptiste Girard-Carrabin d9a8e57451
[mountRemote] include intermediate snapshots in mount slice in referrer detection
This fixes #640.
When the nydus-snapshotter returns the final mount slice for an image that is not nydus but shares layers with an image that is nydus, it currently omits the layers between the topmost layer and the nydus layer, which causes missing files in the final running container. This commit fixes the issue by adding the missing layers to the final mount slice.
2025-06-13 10:37:51 +02:00
imeoer 00c3cbe005
Merge pull request #639 from DataDog/fricounet/converter-push-ecr
Fix converter to be able to push manifests to ECR
2025-05-26 10:03:32 +08:00
Baptiste Girard-Carrabin 6542a8949d
[converter] Remove useless logic from convertIndex
This code is useless now since we return the first manifest in the index directly without updating the index itself
2025-05-23 09:44:56 +02:00
Baptiste Girard-Carrabin 5f82310ee5
[converter] Remove `nydus.remoteimage.v1` OS feature from index manifest
This field causes issues when pushing multi-arch images to ECR, because the feature field is not supported in index manifests. According to https://github.com/dragonflyoss/nydus/issues/1692, the goal of this feature is to help the container runtime pick the nydus image from an index manifest when there are multiple entries for the same architecture. However, no container runtime supports this feature yet, and the only way to get a manifest with both architectures is `nydusify convert --merge-platform`, which relies on harbor acceleration-service to add the feature to the manifest, so we can safely remove it from the converter.
2025-05-21 18:25:34 +02:00
Baptiste Girard-Carrabin 2e932b5b9c
[converter] Remove platform in subject when using WithReferrer
This field is not supported by certain registries like ECR and is not really needed according to https://github.com/dragonflyoss/nydus/issues/1691.
2025-05-21 18:11:27 +02:00
imeoer 88a72d945c
Merge pull request #636 from Apokleos/fix-bug-01
nydus sn: fix issues with "not found" in CoCo cases.
2025-04-09 18:22:34 +08:00
alex.lyn 314951f620 nydus sn: fix issues with "not found" in CoCo cases.
To make nydus-snapshotter work well in CoCo:

(1) Pass through when AddRafsInstance fails.
(2) When the snapshotter can't get label.CRIImageRef, fall back to
overlayfs.

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-04-09 10:18:22 +08:00
imeoer 9089ad11ad
Merge pull request #630 from gane5hvarma/bug.fixVrNameInCleanup
echo image instead of images
2025-01-26 10:00:09 +08:00
imeoer 9983eb470c
Merge pull request #627 from gane5hvarma/chore.AddVendorInGitignore
chore: Add vendor directory in .gitignore file
2025-01-26 09:59:42 +08:00
Ganesh varma Raghava raju e3fe566b3e echo image instead of images 2025-01-24 12:08:08 +05:30
imeoer 456c3fcad9
Merge pull request #629 from thaJeztah/vendor_containerd_2.0.2
go.mod: update github.com/containerd/containerd v2.0.2
2025-01-16 20:52:06 +08:00
Sebastiaan van Stijn 7ca785d176
go.mod: update github.com/containerd/containerd v2.0.2
Also downgrade github.com/davecgh/go-spew and
github.com/pmezard/go-difflib back to tagged releases.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-01-14 20:12:14 +01:00
Ganesh varma Raghava raju b4e07f5558 add vendor directory in .gitignore file 2025-01-03 12:21:42 +05:30
imeoer fbf6bb5573
Merge pull request #626 from gaius-qi/feature/rename-dragonfly
docs: rename repo Dragonfly2 to dragonfly
2024-12-20 17:23:06 +08:00
Gaius 5aee43c99d
docs: rename repo Dragonfly2 to dragonfly
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-20 17:12:32 +08:00
imeoer aec799e1b5
Merge pull request #625 from imeoer/revert-config
Revert "feat: Remove DumpFile operations"
2024-12-11 10:14:18 +08:00
Yan Song 9ec5a73739 misc: bump nydusd v2.3.0
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-12-10 01:59:09 +00:00
Yan Song 38dc1f5f5b misc: stabilize the kubectl version
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-12-10 01:55:54 +00:00
Yan Song 1bdecad6f4 Revert "feat: Remove DumpFile operations"
This reverts commit 29243e35cc.

For the design of non-persistent configuration, we need to consider more factors
and cannot simply place the configuration files in the local metadata database.
2024-12-10 01:52:54 +00:00
imeoer 021c50521a
Merge pull request #622 from coder-y04/deal_zombie_processes
Dealing with optimizer-server zombie processes
2024-12-04 11:36:08 +08:00
coder-y04 917375e9b7 Change the print content 2024-11-14 15:55:11 +08:00
coder-y04 ebbf5d122e Dealing with optimizer-server zombie processes 2024-11-14 15:27:40 +08:00
22 changed files with 202 additions and 247 deletions

1
.gitignore vendored
View File

@ -5,3 +5,4 @@ coverage.txt
tests/output/
smoke.tests
tools/optimizer-server/target
vendor/

View File

@ -140,7 +140,7 @@ We can also use the `nydus-snapshotter` container image when we want to put Nydu
## Integrate with Dragonfly to Distribute Images by P2P
Nydus is a sub-project of [Dragonfly](https://github.com/dragonflyoss/Dragonfly2). So it closely works with Dragonfly to distribute container images in a fast and efficient P2P fashion to reduce network latency and lower the pressure on a single-point of the registry.
Nydus is a sub-project of [Dragonfly](https://github.com/dragonflyoss/dragonfly). So it closely works with Dragonfly to distribute container images in a fast and efficient P2P fashion to reduce network latency and lower the pressure on a single-point of the registry.
### Quickstart Dragonfly & Nydus in Kubernetes

View File

@ -9,9 +9,9 @@ package daemonconfig
import (
"encoding/json"
"os"
"reflect"
"strings"
"sync"
"github.com/pkg/errors"
@ -36,6 +36,7 @@ type DaemonConfig interface {
StorageBackend() (StorageBackendType, *BackendConfig)
UpdateMirrors(mirrorsConfigDir, registryHost string) error
DumpString() (string, error)
DumpFile(path string) error
}
// Daemon configurations factory
@ -121,14 +122,19 @@ type DeviceConfig struct {
} `json:"cache"`
}
var configRWMutex sync.RWMutex
// For nydusd as FUSE daemon. Serialize Daemon info and persist to a json file
// We don't have to persist configuration file for fscache since its configuration
// is passed through HTTP API.
func DumpConfigFile(c interface{}, path string) error {
if config.IsBackendSourceEnabled() {
c = serializeWithSecretFilter(c)
}
b, err := json.Marshal(c)
if err != nil {
return errors.Wrapf(err, "marshal config")
}
type SupplementInfoInterface interface {
GetImageID() string
GetSnapshotID() string
IsVPCRegistry() bool
GetLabels() map[string]string
GetParams() map[string]string
return os.WriteFile(path, b, 0600)
}
func DumpConfigString(c interface{}) (string, error) {
@ -137,14 +143,12 @@ func DumpConfigString(c interface{}) (string, error) {
}
// Achieve a daemon configuration from template or snapshotter's configuration
func SupplementDaemonConfig(c DaemonConfig, info SupplementInfoInterface) error {
func SupplementDaemonConfig(c DaemonConfig, imageID, snapshotID string,
vpcRegistry bool, labels map[string]string, params map[string]string) error {
configRWMutex.Lock()
defer configRWMutex.Unlock()
image, err := registry.ParseImage(info.GetImageID())
image, err := registry.ParseImage(imageID)
if err != nil {
return errors.Wrapf(err, "parse image %s", info.GetImageID())
return errors.Wrapf(err, "parse image %s", imageID)
}
backendType, _ := c.StorageBackend()
@ -152,7 +156,7 @@ func SupplementDaemonConfig(c DaemonConfig, info SupplementInfoInterface) error
switch backendType {
case backendTypeRegistry:
registryHost := image.Host
if info.IsVPCRegistry() {
if vpcRegistry {
registryHost = registry.ConvertToVPCHost(registryHost)
} else if registryHost == "docker.io" {
// For docker.io images, we should use index.docker.io
@ -166,8 +170,8 @@ func SupplementDaemonConfig(c DaemonConfig, info SupplementInfoInterface) error
// If no auth is provided, don't touch auth from provided nydusd configuration file.
// We don't validate the original nydusd auth from configuration file since it can be empty
// when repository is public.
keyChain := auth.GetRegistryKeyChain(registryHost, info.GetImageID(), info.GetLabels())
c.Supplement(registryHost, image.Repo, info.GetSnapshotID(), info.GetParams())
keyChain := auth.GetRegistryKeyChain(registryHost, imageID, labels)
c.Supplement(registryHost, image.Repo, snapshotID, params)
c.FillAuth(keyChain)
// Localfs and OSS backends don't need any update,

View File

@ -9,6 +9,7 @@ package daemonconfig
import (
"encoding/json"
"os"
"path"
"github.com/containerd/log"
"github.com/containerd/nydus-snapshotter/pkg/auth"
@ -120,3 +121,11 @@ func (c *FscacheDaemonConfig) FillAuth(kc *auth.PassKeyChain) {
func (c *FscacheDaemonConfig) DumpString() (string, error) {
return DumpConfigString(c)
}
func (c *FscacheDaemonConfig) DumpFile(f string) error {
if err := os.MkdirAll(path.Dir(f), 0755); err != nil {
return err
}
return DumpConfigFile(c, f)
}

View File

@ -9,6 +9,7 @@ package daemonconfig
import (
"encoding/json"
"os"
"path"
"github.com/pkg/errors"
@ -91,7 +92,12 @@ func (c *FuseDaemonConfig) StorageBackend() (string, *BackendConfig) {
}
func (c *FuseDaemonConfig) DumpString() (string, error) {
configRWMutex.Lock()
defer configRWMutex.Unlock()
return DumpConfigString(c)
}
func (c *FuseDaemonConfig) DumpFile(f string) error {
if err := os.MkdirAll(path.Dir(f), 0755); err != nil {
return err
}
return DumpConfigFile(c, f)
}

View File

@ -108,7 +108,7 @@ Nydus provides a [nydusify](https://github.com/dragonflyoss/nydus/blob/master/do
We can install the `nydusify` cli tool from the nydus package.
```console
VERSION=v2.1.5
VERSION=v2.3.0
wget https://github.com/dragonflyoss/nydus/releases/download/$VERSION/nydus-static-$VERSION-linux-amd64.tgz
tar -zxvf nydus-static-$VERSION-linux-amd64.tgz

29
go.mod
View File

@ -14,13 +14,13 @@ require (
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.2
github.com/containerd/cgroups/v3 v3.0.3
github.com/containerd/containerd/api v1.8.0
github.com/containerd/containerd/v2 v2.0.0
github.com/containerd/containerd/v2 v2.0.2
github.com/containerd/continuity v0.4.4
github.com/containerd/errdefs v1.0.0
github.com/containerd/fifo v1.1.0
github.com/containerd/log v0.1.0
github.com/containerd/nri v0.8.0
github.com/containerd/platforms v1.0.0-rc.0
github.com/containerd/platforms v1.0.0-rc.1
github.com/containerd/plugin v1.0.0
github.com/containerd/stargz-snapshotter v0.15.2-0.20240709063920-1dac5ef89319
github.com/containerd/stargz-snapshotter/estargz v0.15.2-0.20240709063920-1dac5ef89319
@ -44,7 +44,7 @@ require (
github.com/prometheus/client_model v0.6.1
github.com/rs/xid v1.5.0
github.com/sirupsen/logrus v1.9.3
github.com/stretchr/testify v1.9.0
github.com/stretchr/testify v1.10.0
github.com/urfave/cli/v2 v2.27.5
go.etcd.io/bbolt v1.3.11
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56
@ -83,11 +83,11 @@ require (
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cilium/ebpf v0.11.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/containerd/ttrpc v1.2.6 // indirect
github.com/containerd/typeurl/v2 v2.2.2 // indirect
github.com/containerd/ttrpc v1.2.7 // indirect
github.com/containerd/typeurl/v2 v2.2.3 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
@ -126,7 +126,7 @@ require (
github.com/opencontainers/runtime-tools v0.9.1-0.20221107090550-2e043c6bd626 // indirect
github.com/opencontainers/selinux v1.11.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.3 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/common v0.55.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
@ -169,3 +169,18 @@ retract (
v0.11.0
v0.3.0 // retagged: cd604c1b597558ea045a79c4f80a8c780909801b -> 85653575c7dafb6b06548478ee1dc61ac5905d00
)
exclude (
// These dependencies were updated to "master" in some modules we depend on,
// but have no code changes since their last release. Unfortunately, this also
// causes a ripple effect, forcing all users of the containerd module to also
// update these dependencies to an unreleased / un-tagged version.
//
// Neither of these dependencies is likely to do a new release in the near future,
// so exclude these versions so that we can downgrade to the current release.
//
// For additional details, see this PR and links mentioned in that PR:
// https://github.com/kubernetes-sigs/kustomize/pull/5830#issuecomment-2569960859
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
)

26
go.sum
View File

@ -67,8 +67,8 @@ github.com/containerd/cgroups/v3 v3.0.3 h1:S5ByHZ/h9PMe5IOQoN7E+nMc2UcLEM/V48DGD
github.com/containerd/cgroups/v3 v3.0.3/go.mod h1:8HBe7V3aWGLFPd/k03swSIsGjZhHI2WzJmticMgVuz0=
github.com/containerd/containerd/api v1.8.0 h1:hVTNJKR8fMc/2Tiw60ZRijntNMd1U+JVMyTRdsD2bS0=
github.com/containerd/containerd/api v1.8.0/go.mod h1:dFv4lt6S20wTu/hMcP4350RL87qPWLVa/OHOwmmdnYc=
github.com/containerd/containerd/v2 v2.0.0 h1:qLDdFaAykQrIyLiqwQrNLLz95wiC36bAZVwioUwqShM=
github.com/containerd/containerd/v2 v2.0.0/go.mod h1:j25kDy9P48/ngb1sxWIFfK6GsnqOHoSqo1EpAod20VQ=
github.com/containerd/containerd/v2 v2.0.2 h1:GmH/tRBlTvrXOLwSpWE2vNAm8+MqI6nmxKpKBNKY8Wc=
github.com/containerd/containerd/v2 v2.0.2/go.mod h1:wIqEvQ/6cyPFUGJ5yMFanspPabMLor+bF865OHvNTTI=
github.com/containerd/continuity v0.4.4 h1:/fNVfTJ7wIl/YPMHjf+5H32uFhl63JucB34PlCpMKII=
github.com/containerd/continuity v0.4.4/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
@ -81,18 +81,18 @@ github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/nri v0.8.0 h1:n1S753B9lX8RFrHYeSgwVvS1yaUcHjxbB+f+xzEncRI=
github.com/containerd/nri v0.8.0/go.mod h1:uSkgBrCdEtAiEz4vnrq8gmAC4EnVAM5Klt0OuK5rZYQ=
github.com/containerd/platforms v1.0.0-rc.0 h1:GuHWSKgVVO3POn6nRBB4sH63uPOLa87yuuhsGLWaXAA=
github.com/containerd/platforms v1.0.0-rc.0/go.mod h1:T1XAzzOdYs3it7l073MNXyxRwQofJfqwi/8cRjufIk4=
github.com/containerd/platforms v1.0.0-rc.1 h1:83KIq4yy1erSRgOVHNk1HYdPvzdJ5CnsWaRoJX4C41E=
github.com/containerd/platforms v1.0.0-rc.1/go.mod h1:J71L7B+aiM5SdIEqmd9wp6THLVRzJGXfNuWCZCllLA4=
github.com/containerd/plugin v1.0.0 h1:c8Kf1TNl6+e2TtMHZt+39yAPDbouRH9WAToRjex483Y=
github.com/containerd/plugin v1.0.0/go.mod h1:hQfJe5nmWfImiqT1q8Si3jLv3ynMUIBB47bQ+KexvO8=
github.com/containerd/stargz-snapshotter v0.15.2-0.20240709063920-1dac5ef89319 h1:Td/dlhRp/kIk9W1rjXHSL87zZZiBQaKPV18OnoEREUA=
github.com/containerd/stargz-snapshotter v0.15.2-0.20240709063920-1dac5ef89319/go.mod h1:dgo5lVziOOnWX8SxxHqYuc8ShsQou54eKLdahxFlHVc=
github.com/containerd/stargz-snapshotter/estargz v0.15.2-0.20240709063920-1dac5ef89319 h1:BRxgmkGWi5vAvajiCwEK+xit4FeFU3GRjbiX4DKTLtM=
github.com/containerd/stargz-snapshotter/estargz v0.15.2-0.20240709063920-1dac5ef89319/go.mod h1:9WSor0wu2swhtYoFkrjy3GHt7aNgKR2A7FhnpP+CH5o=
github.com/containerd/ttrpc v1.2.6 h1:zG+Kn5EZ6MUYCS1t2Hmt2J4tMVaLSFEJVOraDQwNPC4=
github.com/containerd/ttrpc v1.2.6/go.mod h1:YCXHsb32f+Sq5/72xHubdiJRQY9inL4a4ZQrAbN1q9o=
github.com/containerd/typeurl/v2 v2.2.2 h1:3jN/k2ysKuPCsln5Qv8bzR9cxal8XjkxPogJfSNO31k=
github.com/containerd/typeurl/v2 v2.2.2/go.mod h1:95ljDnPfD3bAbDJRugOiShd/DlAAsxGtUBhJxIn7SCk=
github.com/containerd/ttrpc v1.2.7 h1:qIrroQvuOL9HQ1X6KHe2ohc7p+HP/0VE6XPU7elJRqQ=
github.com/containerd/ttrpc v1.2.7/go.mod h1:YCXHsb32f+Sq5/72xHubdiJRQY9inL4a4ZQrAbN1q9o=
github.com/containerd/typeurl/v2 v2.2.3 h1:yNA/94zxWdvYACdYO8zofhrTVuQY73fFU1y++dYSw40=
github.com/containerd/typeurl/v2 v2.2.3/go.mod h1:95ljDnPfD3bAbDJRugOiShd/DlAAsxGtUBhJxIn7SCk=
github.com/containers/ocicrypt v1.2.0 h1:X14EgRK3xNFvJEfI5O4Qn4T3E25ANudSOZz/sirVuPM=
github.com/containers/ocicrypt v1.2.0/go.mod h1:ZNviigQajtdlxIZGibvblVuIFBKIuUI2M0QM12SD31U=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
@ -101,9 +101,8 @@ github.com/cpuguy83/go-md2man/v2 v2.0.5 h1:ZtcqGrnekaHpVLArFSe4HK5DoKx1T0rq2DwVB
github.com/cpuguy83/go-md2man/v2 v2.0.5/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/cli v27.1.0+incompatible h1:P0KSYmPtNbmx59wHZvG6+rjivhKDRA1BvvWM0f5DgHc=
@ -278,9 +277,8 @@ github.com/pelletier/go-toml/v2 v2.2.3 h1:YmeHyLY8mFWbdkNWwpr+qIL2bEqT0o95WSdkNH
github.com/pelletier/go-toml/v2 v2.2.3/go.mod h1:MfCQTFTvCcUyyvvwm1+G6H/jORL20Xlb6rzQu9GuUkc=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@ -312,8 +310,8 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/urfave/cli v1.19.1/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=

View File

@ -3,7 +3,7 @@ ARG CONTAINERD_PROJECT=/containerd
ARG RUNC_VER=1.1.4
ARG NYDUS_SNAPSHOTTER_PROJECT=/nydus-snapshotter
ARG DOWNLOADS_MIRROR="https://github.com"
ARG NYDUS_VER=2.2.5
ARG NYDUS_VER=2.3.0
ARG NERDCTL_VER=1.7.6
ARG DELVE_VER=1.23.0
ARG GO_VER=1.22.5-bookworm

View File

@ -230,7 +230,7 @@ function remove_images() {
for IMAGE in $IMAGES; do
# Delete the image
$ctr_args images rm $IMAGE > /dev/null 2>&1 || true
echo "Images $IMAGES removed"
echo "Image $IMAGE removed"
done
done
# Delete the content

View File

@ -913,7 +913,7 @@ func ConvertHookFunc(opt MergeOption) converter.ConvertHookFunc {
}
switch {
case images.IsIndexType(newDesc.MediaType):
return convertIndex(ctx, cs, orgDesc, newDesc)
return convertIndex(ctx, cs, newDesc)
case images.IsManifestType(newDesc.MediaType):
return convertManifest(ctx, cs, orgDesc, newDesc, opt)
default:
@ -922,49 +922,20 @@ func ConvertHookFunc(opt MergeOption) converter.ConvertHookFunc {
}
}
// convertIndex modifies the original index by appending "nydus.remoteimage.v1"
// to the Platform.OSFeatures of each modified manifest descriptors.
func convertIndex(ctx context.Context, cs content.Store, orgDesc ocispec.Descriptor, newDesc *ocispec.Descriptor) (*ocispec.Descriptor, error) {
var orgIndex ocispec.Index
if _, err := readJSON(ctx, cs, &orgIndex, orgDesc); err != nil {
return nil, errors.Wrap(err, "read target image index json")
}
// isManifestModified is a function to check whether the manifest is modified.
isManifestModified := func(manifest ocispec.Descriptor) bool {
for _, oldManifest := range orgIndex.Manifests {
if manifest.Digest == oldManifest.Digest {
return false
}
}
return true
}
// convertIndex converts the index directly to a manifest when it contains only one manifest.
func convertIndex(ctx context.Context, cs content.Store, newDesc *ocispec.Descriptor) (*ocispec.Descriptor, error) {
var index ocispec.Index
indexLabels, err := readJSON(ctx, cs, &index, *newDesc)
_, err := readJSON(ctx, cs, &index, *newDesc)
if err != nil {
return nil, errors.Wrap(err, "read index json")
}
for i, manifest := range index.Manifests {
if !isManifestModified(manifest) {
// Skip the manifest which is not modified.
continue
}
manifest.Platform.OSFeatures = append(manifest.Platform.OSFeatures, ManifestOSFeatureNydus)
index.Manifests[i] = manifest
}
// If the converted manifest list contains only one manifest,
// convert it directly to manifest.
if len(index.Manifests) == 1 {
return &index.Manifests[0], nil
}
// Update image index in content store.
newIndexDesc, err := writeJSON(ctx, cs, index, *newDesc, indexLabels)
if err != nil {
return nil, errors.Wrap(err, "write index json")
}
return newIndexDesc, nil
return newDesc, nil
}
// convertManifest merges all the nydus blob layers into a
@ -1054,6 +1025,8 @@ func convertManifest(ctx context.Context, cs content.Store, oldDesc ocispec.Desc
// See the `subject` field description in
// https://github.com/opencontainers/image-spec/blob/main/manifest.md#image-manifest-property-descriptions
manifest.Subject = &oldDesc
// Remove the platform field as it is not supported by certain registries like ECR.
manifest.Subject.Platform = nil
}
// Update image manifest in content store.

View File

@ -87,21 +87,6 @@ type Daemon struct {
state types.DaemonState
}
type NydusdSupplementInfo struct {
DaemonState ConfigState
ImageID string
SnapshotID string
Vpc bool
Labels map[string]string
Params map[string]string
}
func (s *NydusdSupplementInfo) GetImageID() string { return s.ImageID }
func (s *NydusdSupplementInfo) GetSnapshotID() string { return s.SnapshotID }
func (s *NydusdSupplementInfo) IsVPCRegistry() bool { return s.Vpc }
func (s *NydusdSupplementInfo) GetLabels() map[string]string { return s.Labels }
func (s *NydusdSupplementInfo) GetParams() map[string]string { return s.Params }
func (d *Daemon) Lock() {
d.mu.Lock()
}
@ -265,7 +250,12 @@ func (d *Daemon) sharedFusedevMount(rafs *rafs.Rafs) error {
return err
}
c := d.Config
c, err := daemonconfig.NewDaemonConfig(d.States.FsDriver, d.ConfigFile(rafs.SnapshotID))
if err != nil {
return errors.Wrapf(err, "Failed to reload instance configuration %s",
d.ConfigFile(rafs.SnapshotID))
}
cfg, err := c.DumpString()
if err != nil {
return errors.Wrap(err, "dump instance configuration")
@ -290,7 +280,12 @@ func (d *Daemon) sharedErofsMount(ra *rafs.Rafs) error {
return errors.Wrapf(err, "failed to create fscache work dir %s", ra.FscacheWorkDir())
}
c := d.Config
c, err := daemonconfig.NewDaemonConfig(d.States.FsDriver, d.ConfigFile(ra.SnapshotID))
if err != nil {
log.L.Errorf("Failed to reload daemon configuration %s, %s", d.ConfigFile(ra.SnapshotID), err)
return err
}
cfgStr, err := c.DumpString()
if err != nil {
return err
@ -655,29 +650,3 @@ func NewDaemon(opt ...NewDaemonOpt) (*Daemon, error) {
return d, nil
}
func (d *Daemon) MountByAPI() error {
rafs := d.RafsCache.Head()
if rafs == nil {
return errors.Wrapf(errdefs.ErrNotFound, "daemon %s no rafs instance associated", d.ID())
}
client, err := d.GetClient()
if err != nil {
return errors.Wrapf(err, "mount instance %s", rafs.SnapshotID)
}
bootstrap, err := rafs.BootstrapFile()
if err != nil {
return err
}
c := d.Config
cfg, err := c.DumpString()
if err != nil {
return errors.Wrap(err, "dump instance configuration")
}
err = client.Mount("/", bootstrap, cfg)
if err != nil {
return errors.Wrapf(err, "mount rafs instance MountByAPI()")
}
return nil
}

View File

@ -78,6 +78,12 @@ func (fserver *Server) RunServer() error {
}
fserver.Cmd = cmd
go func() {
if err := cmd.Wait(); err != nil {
logrus.WithError(err).Errorf("Failed to wait for fserver to finish")
}
}()
go func() {
if err := fserver.RunReceiver(); err != nil {
logrus.WithError(err).Errorf("Failed to receive event information from server")

View File

@ -130,19 +130,6 @@ func NewFileSystem(ctx context.Context, opt ...NewFSOpt) (*Filesystem, error) {
if err != nil {
return errors.Wrapf(err, "get filesystem manager for daemon %s", d.States.ID)
}
supplementInfo, err := fsManager.GetInfo(d.ID())
if err != nil {
return errors.Wrap(err, "GetInfo failed")
}
cfg := d.Config
err = daemonconfig.SupplementDaemonConfig(cfg, supplementInfo)
if err != nil {
return errors.Wrap(err, "supplement configuration")
}
d.Config = cfg
if err := fsManager.StartDaemon(d); err != nil {
return errors.Wrapf(err, "start daemon %s", d.ID())
}
@ -245,6 +232,7 @@ func (fs *Filesystem) Mount(ctx context.Context, snapshotID string, labels map[s
// Instance already exists, how could this happen? Can containerd handle this case?
return nil
}
fsDriver := config.GetFsDriver()
if label.IsTarfsDataLayer(labels) {
fsDriver = config.FsDriverBlockdev
@ -255,7 +243,7 @@ func (fs *Filesystem) Mount(ctx context.Context, snapshotID string, labels map[s
var imageID string
imageID, ok := labels[snpkg.TargetRefLabel]
if !ok {
// FIXME: Buildkit does not pass labels defined in containerds fashion. So
// FIXME: Buildkit does not pass labels defined in containerd's fashion. So
// we have to use stargz-snapshotter-specific labels until Buildkit generalizes
// the necessary labels for all remote snapshotters.
imageID, ok = labels["containerd.io/snapshot/remote/stargz.reference"]
@ -314,25 +302,34 @@ func (fs *Filesystem) Mount(ctx context.Context, snapshotID string, labels map[s
daemonconfig.WorkDir: workDir,
daemonconfig.CacheDir: cacheDir,
}
supplementInfo := &daemon.NydusdSupplementInfo{
DaemonState: d.States,
ImageID: imageID,
SnapshotID: snapshotID,
Vpc: false,
Labels: labels,
Params: params,
}
cfg := deepcopy.Copy(*fsManager.DaemonConfig).(daemonconfig.DaemonConfig)
err = daemonconfig.SupplementDaemonConfig(cfg, supplementInfo)
err = daemonconfig.SupplementDaemonConfig(cfg, imageID, snapshotID, false, labels, params)
if err != nil {
return errors.Wrap(err, "supplement configuration")
}
if errs := fsManager.AddSupplementInfo(supplementInfo); errs != nil {
return errors.Wrapf(err, "AddSupplementInfo failed %s", d.States.ID)
}
// TODO: How to manage rafs configurations on-disk? separated json config file or DB record?
// In order to recover erofs mount, the configuration file has to be persisted.
d.Config = cfg
var configSubDir string
if useSharedDaemon {
configSubDir = snapshotID
} else {
// Associate daemon config object when creating a new daemon object to avoid
// reading disk file again and again.
// For shared daemon, each rafs instance has its own configuration, so we don't
// attach a config interface to daemon in this case.
d.Config = cfg
}
err = cfg.DumpFile(d.ConfigFile(configSubDir))
if err != nil {
if errors.Is(err, errdefs.ErrAlreadyExists) {
log.L.Debugf("Configuration file %s already exists", d.ConfigFile(configSubDir))
} else {
return errors.Wrap(err, "dump daemon configuration file")
}
}
d.AddRafsInstance(rafs)
// if publicKey is not empty we should verify bootstrap file of image
@ -375,6 +372,13 @@ func (fs *Filesystem) Mount(ctx context.Context, snapshotID string, labels map[s
// Persist it after associate instance after all the states are calculated.
if err == nil {
if err := fsManager.AddRafsInstance(rafs); err != nil {
// In the CoCo scenario, a pre-existing rafs instance is not a concern, as the CoCo guest image pull
// does not use snapshots on the host, so we let this pass regardless of its existence.
// We still log it to ease troubleshooting.
if config.GetFsDriver() == config.FsDriverProxy {
log.L.Warnf("RAFS instance may already be associated with snapshot %s: %v", snapshotID, err)
return nil
return nil
}
return errors.Wrapf(err, "create instance %s", snapshotID)
}
}
@ -599,6 +603,10 @@ func (fs *Filesystem) initSharedDaemon(fsManager *manager.Manager) (err error) {
// it is loaded when requesting mount api
// Dump the configuration file since it is reloaded when recovering the nydusd
d.Config = *fsManager.DaemonConfig
err = d.Config.DumpFile(d.ConfigFile(""))
if err != nil && !errors.Is(err, errdefs.ErrAlreadyExists) {
return errors.Wrapf(err, "dump configuration file %s", d.ConfigFile(""))
}
if err := fsManager.StartDaemon(d); err != nil {
return errors.Wrap(err, "start shared daemon")

View File

@ -44,16 +44,6 @@ func (m *Manager) StartDaemon(d *daemon.Daemon) error {
if err := cmd.Start(); err != nil {
return err
}
fsDriver := config.GetFsDriver()
isSharedFusedev := fsDriver == config.FsDriverFusedev && config.GetDaemonMode() == config.DaemonModeShared
useSharedDaemon := fsDriver == config.FsDriverFscache || isSharedFusedev
if !useSharedDaemon {
errs := d.MountByAPI()
if errs != nil {
return errors.Wrapf(err, "failed to mount")
}
}
d.Lock()
defer d.Unlock()
@ -165,6 +155,10 @@ func (m *Manager) BuildDaemonCommand(d *daemon.Daemon, bin string, upgrade bool)
return nil, errors.Wrapf(err, "locate bootstrap %s", bootstrap)
}
cmdOpts = append(cmdOpts,
command.WithConfig(d.ConfigFile("")),
command.WithBootstrap(bootstrap),
)
if config.IsBackendSourceEnabled() {
configAPIPath := fmt.Sprintf(endpointGetBackend, d.States.ID)
cmdOpts = append(cmdOpts,

View File

@ -197,23 +197,6 @@ func (m *Manager) AddDaemon(daemon *daemon.Daemon) error {
return nil
}
func (m *Manager) AddSupplementInfo(supplementInfo *daemon.NydusdSupplementInfo) error {
m.mu.Lock()
defer m.mu.Unlock()
if err := m.store.AddInfo(supplementInfo); err != nil {
return errors.Wrapf(err, "add supplementInfo %s", supplementInfo.DaemonState.ID)
}
return nil
}
func (m *Manager) GetInfo(daemonID string) (*daemon.NydusdSupplementInfo, error) {
info, err := m.store.GetInfo(daemonID)
if err != nil {
return nil, errors.Wrapf(err, "add supplementInfo %s", daemonID)
}
return info, nil
}
func (m *Manager) UpdateDaemon(daemon *daemon.Daemon) error {
m.mu.Lock()
defer m.mu.Unlock()
@ -339,7 +322,13 @@ func (m *Manager) recoverDaemons(ctx context.Context,
}
if d.States.FsDriver == config.FsDriverFusedev {
d.Config = *m.DaemonConfig
cfg, err := daemonconfig.NewDaemonConfig(d.States.FsDriver, d.ConfigFile(""))
if err != nil {
log.L.Errorf("Failed to reload daemon configuration %s, %s", d.ConfigFile(""), err)
return err
}
d.Config = cfg
}
state, err := d.GetState()


@ -29,9 +29,6 @@ type Store interface {
WalkRafsInstances(ctx context.Context, cb func(*rafs.Rafs) error) error
NextInstanceSeq() (uint64, error)
AddInfo(supplementInfo *daemon.NydusdSupplementInfo) error
GetInfo(daemonID string) (*daemon.NydusdSupplementInfo, error)
}
var _ Store = &store.DaemonRafsStore{}


@ -31,14 +31,6 @@ func (s *DaemonRafsStore) AddDaemon(d *daemon.Daemon) error {
return s.db.SaveDaemon(context.TODO(), d)
}
func (s *DaemonRafsStore) AddInfo(supplementInfo *daemon.NydusdSupplementInfo) error {
return s.db.SaveInfo(context.TODO(), supplementInfo)
}
func (s *DaemonRafsStore) GetInfo(imageID string) (*daemon.NydusdSupplementInfo, error) {
return s.db.GetSupplementInfo(context.TODO(), imageID)
}
func (s *DaemonRafsStore) UpdateDaemon(d *daemon.Daemon) error {
return s.db.UpdateDaemon(context.TODO(), d)
}


@ -41,8 +41,7 @@ var (
daemonsBucket = []byte("daemons")
// RAFS filesystem instances.
// A RAFS filesystem may have associated daemon or not.
instancesBucket = []byte("instances")
supplementInfoBucket = []byte("supplement_info")
instancesBucket = []byte("instances")
)
// Database keeps infos that need to survive among snapshotter restart
@ -88,11 +87,6 @@ func getInstancesBucket(tx *bolt.Tx) *bolt.Bucket {
return bucket.Bucket(instancesBucket)
}
func getSupplementInfoBucket(tx *bolt.Tx) *bolt.Bucket {
bucket := tx.Bucket(v1RootBucket)
return bucket.Bucket(supplementInfoBucket)
}
func updateObject(bucket *bolt.Bucket, key string, obj interface{}) error {
keyBytes := []byte(key)
@ -169,10 +163,6 @@ func (db *Database) initDatabase() error {
return errors.Wrapf(err, "bucket %s", instancesBucket)
}
if _, err := bk.CreateBucketIfNotExists(supplementInfoBucket); err != nil {
return err
}
if val := bk.Get(versionKey); val == nil {
version = "v1.0"
} else {
@ -220,25 +210,6 @@ func (db *Database) SaveDaemon(_ context.Context, d *daemon.Daemon) error {
})
}
func (db *Database) SaveInfo(_ context.Context, supplementInfo *daemon.NydusdSupplementInfo) error {
return db.db.Update(func(tx *bolt.Tx) error {
bucket := getSupplementInfoBucket(tx)
key := []byte(supplementInfo.DaemonState.ID)
if existing := bucket.Get(key); existing != nil {
log.L.Infof("Supplement info already exists for ID: %s", supplementInfo.DaemonState.ID)
return nil
}
value, err := json.Marshal(supplementInfo)
if err != nil {
return errors.Wrap(err, "failed to marshal supplement info")
}
if err := bucket.Put(key, value); err != nil {
return errors.Wrap(err, "failed to save supplement info")
}
return nil
})
}
func (db *Database) UpdateDaemon(_ context.Context, d *daemon.Daemon) error {
return db.db.Update(func(tx *bolt.Tx) error {
bucket := getDaemonsBucket(tx)
@ -291,25 +262,6 @@ func (db *Database) WalkDaemons(_ context.Context, cb func(info *daemon.ConfigSt
})
}
func (db *Database) GetSupplementInfo(_ context.Context, daemonID string) (*daemon.NydusdSupplementInfo, error) {
var info daemon.NydusdSupplementInfo
err := db.db.View(func(tx *bolt.Tx) error {
bucket := getSupplementInfoBucket(tx)
if bucket == nil {
return errdefs.ErrNotFound
}
value := bucket.Get([]byte(daemonID))
if value == nil {
return errdefs.ErrNotFound
}
return json.Unmarshal(value, &info)
})
if err != nil {
return nil, err
}
return &info, nil
}
// WalkRafsInstances iterates all RAFS instance records and invokes the callback on each
func (db *Database) WalkRafsInstances(_ context.Context, cb func(r *rafs.Rafs) error) error {
return db.db.View(func(tx *bolt.Tx) error {


@ -27,6 +27,11 @@ import (
"github.com/pkg/errors"
)
const (
KataVirtualVolumeDefaultSource = "overlay"
KataVirtualVolumeDummySource = "dummy-image-reference"
)
type ExtraOption struct {
Source string `json:"source"`
Config string `json:"config"`
@ -103,7 +108,7 @@ func (o *snapshotter) remoteMountWithExtraOptions(ctx context.Context, s storage
return []mount.Mount{
{
Type: mountType,
Source: "overlay",
Source: KataVirtualVolumeDefaultSource,
Options: overlayOptions,
},
}, nil
@ -152,7 +157,7 @@ func (o *snapshotter) mountWithKataVolume(ctx context.Context, id string, overla
mounts := []mount.Mount{
{
Type: mountType,
Source: "overlay",
Source: KataVirtualVolumeDefaultSource,
Options: overlayOptions,
},
}
@ -165,6 +170,16 @@ func (o *snapshotter) mountWithKataVolume(ctx context.Context, id string, overla
func (o *snapshotter) mountWithProxyVolume(rafs rafs.Rafs) ([]string, error) {
options := []string{}
source := rafs.Annotations[label.CRIImageRef]
// In the normal flow, this should correctly return the imageRef. However, passing the CRIImageRef label
// from containerd is not supported, so the source may end up as "". The kata runtime-rs performs a
// non-empty check on the source field, so a dummy value is assigned here to keep the field non-empty.
// This does not affect the information being passed; it exists solely to satisfy the check.
if len(source) == 0 {
source = KataVirtualVolumeDummySource
}
for k, v := range rafs.Annotations {
options = append(options, fmt.Sprintf("%s=%s", k, v))
}


@ -905,6 +905,19 @@ func (o *snapshotter) mountRemote(ctx context.Context, labels map[string]string,
}
lowerPaths := make([]string, 0, 8)
if o.fs.ReferrerDetectEnabled() {
// From the parent list, we want to add all the layers
// between the topmost snapshot and the nydus meta snapshot.
// All the layers below the nydus meta snapshot are assumed
// to be included in its mount.
for i := range s.ParentIDs {
if s.ParentIDs[i] == id {
break
}
lowerPaths = append(lowerPaths, o.upperPath(s.ParentIDs[i]))
}
}
lowerPathNydus, err := o.lowerPath(id)
if err != nil {
return nil, errors.Wrapf(err, "failed to locate overlay lowerdir")


@ -17,10 +17,8 @@ use std::{
use nix::{
poll::{poll, PollFd, PollFlags},
sched::{setns, CloneFlags},
unistd::{
fork, getpgid,
ForkResult::{Child, Parent},
},
sys::wait::{waitpid, WaitStatus},
unistd::{fork, getpgid, ForkResult},
};
use serde::Serialize;
@ -259,19 +257,35 @@ fn main() {
return;
}
}
let pid = unsafe { fork() };
match pid.expect("fork failed: unable to create child process") {
Child => {
match unsafe { fork() } {
Ok(ForkResult::Child) => {
if let Err(e) = start_fanotify() {
eprintln!("failed to start fanotify server {e:?}");
}
}
Parent { child } => {
Ok(ForkResult::Parent { child }) => {
if let Err(e) = getpgid(Some(child)).map(|pgid| {
eprintln!("forked optimizer server subprocess, pid: {child}, pgid: {pgid}");
}) {
eprintln!("failed to get pgid of {child} {e:?}");
};
match waitpid(child, None) {
Ok(WaitStatus::Signaled(pid, signal, _)) => {
eprintln!("child process {pid} was killed by signal {signal}");
}
Ok(WaitStatus::Stopped(pid, signal)) => {
eprintln!("child process {pid} was stopped by signal {signal}");
}
Err(e) => {
eprintln!("failed to wait for child process: {e}");
}
_ => {}
}
}
Err(e) => {
eprintln!("fork failed: unable to create child process: {e:?}");
}
}
}