Historically, non-schema1 images had a deterministic image ID == config digest.
With zstd:chunked, we don't want to deduplicate layers pulled by consuming the
full tarball and layers partially pulled based on TOC, because we can't cheaply
ensure equivalence; so, image IDs for images where a TOC was used differ.
To accommodate that, compare images using their config digests, not using image IDs.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When looking up the current-store image ID, do that
from the same output where we verify that the ID is from the
current store, instead of listing images twice.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
The test got the stores' RW status backwards.
Before zstd:chunked, both image IDs should be the same, so this used
to make no difference.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This will appease the higher-level quota logic. Basically, to
find a free quota ID to prevent reuse, we will iterate through
the contents of the directory and check the quota IDs of all
subdirectories, then use the first free ID found that is larger
than the base ID (the one set on the base directory). Problem:
our volumes use a two-tier directory structure, where the volume
has an outer directory (with the name of the actual volume) and
an inner directory (always named _data). We were only setting the
quota on _data, meaning the outer directory did not have an ID,
and the ID-choosing logic thus never detected that any IDs had
been allocated and always chose the same ID.
Setting the ID on the outer directory with PROJINHERIT set makes
the ID allocation logic work properly, and guarantees children
inherit the ID - so _data and all contents of the volume get the
ID as we'd expect.
No tests as we don't have a filesystem in our CI that supports
XFS quotas (setting it on / needs kernel flags added).
Fixes https://issues.redhat.com/browse/RHEL-18038
Signed-off-by: Matt Heon <mheon@redhat.com>
when the current soft limit is higher than the new value, ulimit fails
to set the hard limit (tested on Rawhide):
[root@rawhide ~]# ulimit -n -H 1048575
-bash: ulimit: open files: cannot modify limit: Invalid argument
to avoid the problem, also set the soft limit:
[root@rawhide ~]# ulimit -n -H
12345678
[root@rawhide ~]# ulimit -n -H 1048575
-bash: ulimit: open files: cannot modify limit: Invalid argument
[root@rawhide ~]# ulimit -n -SH 1048575
[root@rawhide ~]# ulimit -n -H
1048575
commit 71d5ee0e04 introduced the issue.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This hasn't been touched in 7 years and Vagrant is no longer
a default entrypoint for many people. We have other things
documented in CONTRIBUTING.
Signed-off-by: Colin Walters <walters@verbum.org>
`CRRuntimeSupportsPodCheckpointRestore()` is used to check if the current
container runtime (e.g., runc or crun) can restore a container into an
existing Pod. It does this by parsing the output message to check if the
`--lsm-mount-context` option is supported. This option was recently
added to crun [1], however, crun and runc have slightly different output
messages:
```
$ crun restore --lsm-mount-context
restore: option '--lsm-mount-context' requires an argument
Try `restore --help' or `restore --usage' for more information.
```
```
$ runc restore --lsm-mount-context
ERRO[0000] flag needs an argument: -lsm-mount-context
```
This patch updates the function to support both runtimes.
[1] https://github.com/containers/crun/pull/1578
Signed-off-by: Radostin Stoyanov <rstoyanov@fedoraproject.org>
Update to latest main to see if everything passes in preparation for the
first 5.3 release candidate.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There is no good reason for the special case, kube and pod units
definitely need it. Volume and network units maybe not but for
consistency we add it there as well. This makes the docs much easier to
write and understand for users as the behavior will not differ.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As documented in the issue there is no way to wait for system units from
the user session[1]. This causes problems for rootless quadlet units as
they might be started before the network is fully up. While this was
always the case and thus never really noticed, the main thing that
triggered a bunch of errors was the switch to pasta.
Pasta requires the network to be fully up in order to correctly select
the right "template" interface based on the routes. If it cannot find a
suitable interface it just fails and we cannot start the container,
understandably leading to a lot of frustration for users.
As there is no sign of any movement on the systemd issue, we work
around it here by using our own user unit that checks whether the
system session's network-online.target is ready.
Now, testing this is a bit complicated. While we do correctly test both
the root and rootless generator since commit ada75c0bb8, the resulting
Wants=/After= lines differ between them, and the test files themselves
have no way to express root/rootless-specific matches. One idea was to
use `assert-key-is-rootless/root`, but that seemed like more duplication
for little reason, so use a regex and allow both so the check always
passes. To still have some test coverage, add a check in the system test
that asks systemd whether we really have the right dependencies, where
we can check for an exact root/rootless name match.
[1] https://github.com/systemd/systemd/issues/3312
Fixes #22197
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This service is meant to be used by quadlet as replacement for
network-online.target as this does not work for rootless users.
see #22197
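As a rough sketch (the service name here is illustrative, not necessarily the final one), a rootless quadlet-generated unit would then depend on the new user service instead of the unreachable system target:

```ini
# Generated user unit (sketch): instead of the system-level
# network-online.target, depend on a user service that waits for it.
[Unit]
Wants=podman-user-wait-network-online.service
After=podman-user-wait-network-online.service
```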
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The reason is that I plan to add a unit that should only be used for
the user session, and otherwise there is no way to keep a unit
user-only.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use a single loop for both the user and system service so we do not have
to duplicate the full paths every time.
In particular we can use `$^` to list all dependencies and then add the
non-generated files to the loop as well to simplify this. And to make
things clear, rename PODMAN_UNIT_FILES to PODMAN_GENERATED_UNIT_FILES so
readers immediately know these files are generated and safe to delete,
in contrast to the .socket/.timer units, which are not generated and are
part of the git history.
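The resulting pattern might look roughly like this (illustrative Makefile sketch with made-up file names, not the actual rules):

```make
# $^ expands to all prerequisites (the generated units); the static
# .socket/.timer files tracked in git are appended to the same loop.
PODMAN_GENERATED_UNIT_FILES = podman-example.service
STATIC_UNIT_FILES = podman.socket podman-auto-update.timer

install-units: $(PODMAN_GENERATED_UNIT_FILES)
	for unit in $^ $(STATIC_UNIT_FILES); do \
		install -m 644 $$unit $(DESTDIR)$(UNITDIR)/; \
	done
```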
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Two flakes seen in the last three months. One of them was in
August, so it's not related to ongoing criu-4.0 problems.
Suspected cause: race waiting for "podman run --rm" container
to transition from stopped to removed.
Solution: allow a 5-second grace period, retrying every second.
Also: add explanations to the Expect()s, remove unnecessary
code, and tighten up the CID check.
Signed-off-by: Ed Santiago <santiago@redhat.com>
I'm assuming this was buildah#5595: the COMMENT field moved around.
Deal with it, and add a few more checks while we're at it.
Signed-off-by: Ed Santiago <santiago@redhat.com>
...for debugging #24147, because "md5sum mismatch" is not
the best way to troubleshoot bytestream differences.
socat is run on the container, so this requires building a
new testimage (20241011). Bump to new CI VMs[1] which include it.
[1] https://github.com/containers/automation_images/pull/389
Signed-off-by: Ed Santiago <santiago@redhat.com>
The current mypod hack breaks down when running individual tests:
$ hack/bats 010 <<< barfs because it does not want pause-image!
Reason: Bats does not provide any official way to tell if tests
are being run in parallel.
Workaround: use an undocumented way.
Signed-off-by: Ed Santiago <santiago@redhat.com>