Found in Debian testing, where by default no unqualified search
registries are installed. As a result the test failed, just as the
FIXME said it would. Now there is no need for the test to assume
anything: set our own config via CONTAINERS_REGISTRIES_CONF so we can
do exact matches. That env var was not read in the shell completion
code, so move some code around to make it read the variable the same
way podman login/logout do.
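For illustration, a minimal sketch of the approach (the config path and
registry are placeholders, not the values used in the test; __complete
is cobra's hidden completion entry point):
cat > "$TESTDIR/registries.conf" <<EOF
unqualified-search-registries = ["quay.io"]
EOF
CONTAINERS_REGISTRIES_CONF="$TESTDIR/registries.conf" podman __complete pull ""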
Signed-off-by: Paul Holzinger <git@holzinger.dev>
with default values when changing the configuration with podman update.
The new LinuxResource structure does not carry over the parts of the
current configuration that were not affected by the change.
Fixes: https://issues.redhat.com/browse/RUN-2375
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
The command `podman info` returned only one imagestore in
`store.graphOptions.<driver>.imagestore` even if multiple
image stores were configured.
This change replaces the field `<driver>.imagestore` with the field
`<driver>.additionalImageStores`, which is an array of strings rather
than a single string and includes all the configured additional image
stores.
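For reference, this is the kind of configuration the fix is about
(paths are placeholders); with several additional stores configured
like this, podman info should list all of them under additionalImageStores:
[storage.options]
additionalimagestores = [
  "/usr/lib/containers/storage",
  "/var/lib/extra-images",
]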
Fix https://github.com/containers/storage/issues/2094
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
New flags in `podman update` can change the HealthCheck configuration of a container even when it is already started, without having to restart or recreate the container.
This can help determine why a given container suddenly started failing its HealthCheck, without interfering with the services it provides. For example, reconfigure the HealthCheck to keep logs longer than the usual last X results, store the logs in other destinations, etc.
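A hypothetical invocation (the flag names are assumed to mirror the
health options of podman run/create, not necessarily the exact new set):
podman update \
    --health-cmd 'curl -fsS http://localhost:8080/healthz || exit 1' \
    --health-interval 10s \
    --health-retries 5 \
    mycontainer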
Fixes: https://issues.redhat.com/browse/RHEL-60561
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Test is failing on 1mt because of differences between 'stat'
command output and /proc/mounts. Solution: compare stat %t
(hex filesystem type), not %T (human-readable). This should
match no matter what kernel version or version of stat is on the
host or in the container.
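For example, on an XFS root (values shown are illustrative):
$ stat -f -c '%T' /    # human-readable name, varies with stat/kernel version
xfs
$ stat -f -c '%t' /    # hex filesystem type ID, stable across versions
58465342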
Fixes: #24611
Signed-off-by: Ed Santiago <santiago@redhat.com>
Final cleanup. Has been working fine in #23257 for weeks.
Not much gain here, but every little bit helps.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Previous version was badly broken: it relied on 'make'
rebuilding a file under cwd, which is a no-no; and, in
the case where we don't have a source directory, just
blindly hoped that there'd be a system-installed .service
file with the correct path to podman.
Solution:
. if running in source directory, run sed directly into
destination service file in $UNIT_DIR (see the sketch after this
list). This is ugly duplication of a line in Makefile.
. if NOT running in a source directory, check $PODMAN:
. if it's /usr/bin/podman, continue. Include a warning
that will be shown only on test failure.
. otherwise skip, because we don't know what we're testing
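A rough sketch of the first case (the template path and placeholder
are assumptions, not necessarily what the Makefile really uses):
sed -e "s;@@PODMAN@@;$PODMAN;g" \
    contrib/systemd/system/podman.service.in > "$UNIT_DIR/podman.service"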
Signed-off-by: Ed Santiago <santiago@redhat.com>
Up to now this test has been run using:
PODMAN_TIMEOUT=2 run_podman kube play ...
...and this gives podman time to start the pod before getting
the signal.
When run in parallel, under heavy load, the above command seems
to time out before podman has gotten its act together. Strange
things then happen, like unexpected exit statuses and (most
crucially) zombie containers.
Solution: wait for container to actually start before we kill it.
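A sketch of the idea in bats terms, waiting until the container reports
"running" before the signal is sent (container name and polling interval
are illustrative, not the literal test code):
for i in $(seq 1 20); do
    run_podman '?' container inspect --format '{{.State.Status}}' "$ctrname"
    [[ "$output" = "running" ]] && break
    sleep 0.5
done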
Signed-off-by: Ed Santiago <santiago@redhat.com>
Regression test for #23550. Setting the TZDIR env should make no
difference for the local timezone as this is not a real timezone name
that is resolved from that directory.
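A sketch of the check (image, helpers, and the bogus directory are
illustrative, not the literal test code):
run_podman run --rm --tz=local $IMAGE date +%Z
expected="$output"
TZDIR=/tmp/does-not-exist run_podman run --rm --tz=local $IMAGE date +%Z
assert "$output" = "$expected" "TZDIR must not affect --tz=local"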
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add support for inspecting Mounts which include SubPaths.
Handle SubPaths for kubernetes image volumes.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Debug for #23913. I thought: if we have no idea which process is
nuking the volume, then we need to figure this out. As there is no
reproducer we can (ab)use the cleanup tracer. Simply trace all unlink
syscalls to see which process deletes our special named volume. Given
that the volume name is used as a path on the fs and is deleted on
volume rm, we should hopefully know exactly which process deleted it
the next time.
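The general idea, shown here as a standalone bpftrace one-liner rather
than the actual cleanup-tracer code:
bpftrace -e 'tracepoint:syscalls:sys_enter_unlinkat
    { printf("%s pid=%d %s\n", comm, pid, str(args->pathname)); }'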
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
In preparation for maybe some day being able to run build tests
in parallel.
SUPER IMPORTANT NOTE! BUILD TESTS CANNOT BE PARALLELIZED YET!
buildah, when run in parallel, barfs with:
race: parallel builds: copying...committing...creating... layer not known
Until this is fixed, podman-build can never be run in parallel.
See https://github.com/containers/buildah/issues/5674
This PR is simply cleaning things up so, if/when that day comes,
the ensuing parallelize PR will be short & sweet.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The recent fedora kernel 6.11.4 has a problem with ipv6 networks [1].
This is not a podman bug at all but rather a kernel regression. I can
reproduce the issue easily by running this test.
Given that many users were hit by this, add it to the distro-level
gating that runs in the Fedora openQA framework; that way we should
hopefully catch a bad kernel like this in the future and prevent it
from going into stable.
[1] https://github.com/containers/podman/issues/24374
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Quadlet tests and some systemd tests leak unit files, as
reported by 'systemctl list-units --failed'. Clean them up.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The startup service is special because we have to transition from
startup to the normal unit. And in order to do so we kill ourselves (as
we are run as part of the service). This means we always exited 1,
which causes systemd to consider us failed and not remove the transient
unit unless "reset-failed" is called. As there is no process around to
do that, we cannot really do this, so make us exit(0), which makes more
sense.
Of course we could try to reset-failed the unit later, but the code for
that seems more complicated than it is worth.
Add a new test from Ed that ensures we check for all healthcheck units,
not just the timer, to avoid leaks. I slightly modified it to provide a
better error on leaks.
Fixes: 0bbef4b830 ("libpod: rework shutdown handler flow")
Fixes: #24351
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As an internal consistency check, the pasta tests check for duplicated test
cases by grepping a log file for a parsed test id. However, it uses
grep -F for this, which performs a substring match rather than an
exact match. There are some tests which generate an id which is a
substring of the id for other tests, so when test order is randomised, this
can cause a spurious failure. This can happen in practice when running
the test in parallel with very high concurrency (e.g. -j 100).
Fix this by adding the -x option to grep, which only checks for full line
exact matches.
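For illustration:
# -F alone does a substring match: "test 1" also hits the line "test 10"
$ printf 'test 1\ntest 10\n' | grep -cF 'test 1'
2
# adding -x requires the whole line to match
$ printf 'test 1\ntest 10\n' | grep -cFx 'test 1'
1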
Fixes: https://github.com/containers/podman/issues/24342
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The additional image store feature assumes that images / layers
in the additional store never go away, yet we do remove it after
this test. Try to repair the store.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Historically, non-schema1 images had a deterministic image ID == config digest.
With zstd:chunked, we don't want to deduplicate layers pulled by consuming the
full tarball and layers partially pulled based on TOC, because we can't cheaply
ensure equivalence; so, image IDs for images where a TOC was used differ.
To accommodate that, compare images using their config digests, not using image IDs.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When looking up the current-store image ID, do that
from the same output where we verify that the ID is from the
current store, instead of listing images twice.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
The test got the stores' RW status backwards.
Before zstd:chunked, both image IDs should be the same, so this used
to make no difference.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When the current soft limit is higher than the new value, ulimit fails
to set the hard limit, as seen here (tested on Rawhide):
[root@rawhide ~]# ulimit -n -H 1048575
-bash: ulimit: open files: cannot modify limit: Invalid argument
To avoid the problem, also set the soft limit:
[root@rawhide ~]# ulimit -n -H
12345678
[root@rawhide ~]# ulimit -n -H 1048575
-bash: ulimit: open files: cannot modify limit: Invalid argument
[root@rawhide ~]# ulimit -n -SH 1048575
[root@rawhide ~]# ulimit -n -H
1048575
commit 71d5ee0e04 introduced the issue.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
As documented in the issue there is no way to wait for system units from
the user session[1]. This causes problems for rootless quadlet units as
they might be started before the network is fully up. While this was
always the case, and thus was never really noticed, the main thing that
triggered a bunch of errors was the switch to pasta.
Pasta requires the network to be fully up in order to correctly select
the right "template" interface based on the routes. If it cannot find a
suitable interface it just fails and we cannot start the container,
understandably leading to a lot of frustration for users.
As there is no sign of any movement on the systemd issue, we work around
it here by using our own user unit that checks whether the system
session's network-online.target is ready.
Now, testing this is a bit complicated. While we do now correctly test
the root and rootless generators since commit ada75c0bb8, the resulting
Wants=/After= lines differ between them, and there is no logic in the
test files themselves to say whether root or rootless specifics should
be matched. One idea was to use `assert-key-is-rootless/root`, but that
seemed like more duplication for little reason, so use a regex and
allow both so the check always passes. To still have some test coverage,
add a check in the system test that asks systemd whether we indeed have
the right dependencies, where we can check for an exact root/rootless
name match.
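A sketch of such a check (unit names are illustrative, not necessarily
what the generator emits):
systemctl --user show --property=Wants,After test.service \
    | grep podman-user-wait-network-online.service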
[1] https://github.com/systemd/systemd/issues/3312
Fixes #22197
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
...for debugging #24147, because "md5sum mismatch" is not
the best way to troubleshoot bytestream differences.
socat is run on the container, so this requires building a
new testimage (20241011). Bump to new CI VMs[1] which include it.
[1] https://github.com/containers/automation_images/pull/389
Signed-off-by: Ed Santiago <santiago@redhat.com>
The current mypod hack breaks down when running individual tests:
$ hack/bats 010 <<< barfs because it does not want pause-image!
Reason: Bats does not provide any official way to tell if tests
are being run in parallel.
Workaround: use an undocumented way.
Signed-off-by: Ed Santiago <santiago@redhat.com>
since the effect would be to lower the rlimits when their definition
is higher than the default value.
The test doesn't fail on the previous version, unless the system is
configured with a nofile ulimit higher than the default value.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2317721
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
In Debian, EST and MST7MDT are gone by default and have been moved to a
special package[1]. Instead of also installing that in the images, let's
use different timezones in the test.
[1] 42c0008f86
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This command sequence causes SizeRootFs to change on foo:
podman tag foo newimagename
podman save ... newimagename
podman load ...
Solution: get foo completely out of the picture. Use an
airgapped image: new image, new digest, new everything.
Fixes: #23756
Signed-off-by: Ed Santiago <santiago@redhat.com>
There's an important reason why the healthcheck container in the 055-rm
test uses 'sleep infinity' and not 'top'. Document it.
And, the test itself wasn't actually working as intended. Make
it safer by confirming that the container actually enters
the "stopping" state.
Signed-off-by: Ed Santiago <santiago@redhat.com>