Checking the service and timer units caused an unexpected exit code `3` from `systemctl status`. The status check can run after the HealthCheck has exited, at which point the service is in a dead state, so `systemctl status` terminates with exit code `3`.
Fixes: https://github.com/containers/podman/issues/25204
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
The test `podman selinux: check unsupported relabel` has been failing
recently on Fedora rawhide.
This is due to a regression in the `ls` command itself. The workaround for
now is to switch to `getfattr -n security.selinux ...`.
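For illustration, the replacement looks something like this (the file path is made up):

    # instead of parsing the label out of `ls -Z` output:
    getfattr -n security.selinux --only-values /path/to/file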
Ref: https://github.com/containers/podman/issues/25132#issuecomment-2615744915
Fixes: #25132
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
Fixes: https://github.com/containers/podman/issues/25002
Also add the ability to inspect containers for
UseImageHosts and UseImageHostname.
Finally, fix some bugs I discovered in the handling of --no-hosts
for Pods.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When running this test on a system without unqualified-search registries,
it fails with a different error. To avoid that case, define our own
registries.conf that configures quay.io as a registry. This should make
the test pass in the Debian environment.
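A minimal sketch of the setup (not the actual test code; the paths are placeholders):

    cat > $PODMAN_TMPDIR/registries.conf <<EOF
    unqualified-search-registries = ["quay.io"]
    EOF
    export CONTAINERS_REGISTRIES_CONF=$PODMAN_TMPDIR/registries.conf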
Signed-off-by: Paul Holzinger <git@holzinger.dev>
Found in Debian testing, where by default there are no unqualified search
registries installed. As such, the test failed as the FIXME said. Now
there is no need for the test to assume anything.
Instead, set our own config via CONTAINERS_REGISTRIES_CONF so we can
do exact matches. However, that env var was not read in the shell
completion code, so move some code around to make it read the variable
in the same way as podman login/logout.
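With that in place, completion can be exercised against a known config, roughly like this (illustrative; `__complete` is cobra's hidden completion entry point, and the path is a placeholder):

    CONTAINERS_REGISTRIES_CONF=/tmp/registries.conf \
        podman __complete login ''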
Signed-off-by: Paul Holzinger <git@holzinger.dev>
Do not overwrite the container configuration with default values when
changing the configuration with podman update.
The new LinuxResource structure does not represent the current unchanged configuration, which was not affected by the change.
Fixes: https://issues.redhat.com/browse/RUN-2375
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
The command `podman info` returned only one imagestore in
`store.graphOptions.<driver>.imagestore` even if multiple
image stores were configured.
This change replaces the field `<driver>.imagestore` with
the field `<driver>.additionalImageStores`, which, instead
of a single string, is an array of strings that includes all
the configured additional image stores.
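With the change, `podman info` reports something like this (illustrative output; the driver and store paths are made up):

    $ podman info | grep -A2 additionalImageStores
        overlay.additionalImageStores:
        - /usr/lib/containers/storage
        - /var/lib/extra-store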
Fixes: https://github.com/containers/storage/issues/2094
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
New flags in `podman update` can change the HealthCheck configuration of a container while it is running, without having to restart or recreate the container.
This can help determine why a given container suddenly started failing HealthCheck without interfering with the services it provides. For example, reconfigure HealthCheck to keep logs longer than the usual last X results, store logs to other destinations, etc.
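For example (a hedged sketch; see `podman update --help` for the exact flag set, and the values here are made up):

    podman update --health-cmd "curl -f http://localhost:8080/health" \
                  --health-interval 10s \
                  --health-log-destination /tmp/ctr-health \
                  --health-max-log-count 50 \
                  mycontainer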
Fixes: https://issues.redhat.com/browse/RHEL-60561
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Test is failing on tmt because of differences between 'stat'
command output and /proc/mounts. Solution: compare stat %t
(hex filesystem type), not %T (human-readable). This should
match regardless of kernel version or the version of stat on the
host/container.
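For example (an arbitrary path; output shown for btrfs):

    # %T is human-readable and varies across stat versions;
    # %t is the stable hex filesystem type ID:
    stat -f -c '%t' /var/lib/containers    # -> 9123683e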
Fixes: #24611
Signed-off-by: Ed Santiago <santiago@redhat.com>
Final cleanup. Has been working fine in #23257 for weeks.
Not much gain here, but every little bit helps.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The previous version was badly broken: it relied on 'make'
rebuilding a file under cwd, which is a no-no; and, in
the case where we don't have a source directory, it just
blindly hoped that there'd be a system-installed .service
file with the correct path to podman.
Solution:
. if running in a source directory, run sed directly into the
destination service file in $UNIT_DIR. This is ugly
duplication of a line in the Makefile; see the sketch below.
. if NOT running in a source directory, check $PODMAN:
. if it's /usr/bin/podman, continue. Include a warning
that will be shown only on test failure.
. otherwise skip, because we don't know what we're testing
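The source-directory branch amounts to something like this (a hedged sketch; the template path and placeholder name are assumptions, not the exact test code):

    sed -e "s;@@PODMAN@@;$PODMAN;g" \
        contrib/systemd/system/podman-auto-update.service.in \
        > "$UNIT_DIR/podman-auto-update.service"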
Signed-off-by: Ed Santiago <santiago@redhat.com>
Up to now this test has been run using:
PODMAN_TIMEOUT=2 run_podman kube play ...
...and this gives podman time to start the pod before getting
the signal.
When run in parallel, under heavy load, the above command seems
to time out before podman has gotten its act together. Weird
things happen, like unexpected exit statuses and (most crucially)
zombie containers.
Solution: wait for the container to actually start before we kill it.
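Something along these lines (illustrative, not the actual test code; $yaml and $cname are placeholders):

    podman kube play $yaml &
    kube_pid=$!
    # poll until the container is actually up before signalling
    until [[ "$(podman inspect --format '{{.State.Status}}' $cname 2>/dev/null)" == "running" ]]; do
        sleep 0.5
    done
    kill -INT $kube_pid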
Signed-off-by: Ed Santiago <santiago@redhat.com>
Regression test for #23550. Setting the TZDIR env var should make no
difference for the `local` timezone, as `local` is not a real timezone
name that is resolved from that directory.
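Roughly, the check is (illustrative, not the exact test):

    # --tz=local must resolve via /etc/localtime, regardless of TZDIR
    TZDIR=$PODMAN_TMPDIR/empty podman run --rm --tz=local $IMAGE date +%Z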
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add support for inspecting Mounts which include SubPaths.
Handle SubPaths for kubernetes image volumes.
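For instance (hedged: the template field name follows this change, and the container name is a placeholder):

    podman inspect --format \
        '{{range .Mounts}}{{.Destination}} subpath={{.SubPath}}{{println}}{{end}}' myctr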
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Debug for #23913. I thought if we have no idea which process is nuking
the volume then we need to figure this out. As there is no reproducer,
we can (ab)use the cleanup tracer: simply trace all unlink syscalls to
see which process deletes our special named volume. Given the volume
name is used as a path on the fs and is deleted on volume rm, we should
know exactly which process deleted it next time, hopefully.
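Conceptually the trace boils down to something like this (an illustrative bpftrace one-liner, not the actual tracer script):

    bpftrace -e 'tracepoint:syscalls:sys_enter_unlinkat {
        printf("%d %s unlinked %s\n", pid, comm, str(args->pathname));
    }'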
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
In preparation for maybe some day being able to run build tests
in parallel.
SUPER IMPORTANT NOTE! BUILD TESTS CANNOT BE PARALLELIZED YET!
buildah, when run in parallel, barfs with:
race: parallel builds: copying...committing...creating... layer not known
Until this is fixed, podman-build can never be run in parallel.
See https://github.com/containers/buildah/issues/5674
This PR is simply cleaning things up so, if/when that day comes,
the ensuing parallelize PR will be short & sweet.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The recent Fedora kernel 6.11.4 has a problem with ipv6 networks [1].
This is not a podman bug at all but rather a kernel regression. I can
reproduce the issue easily by running this test.
Given many users were hit by this, add the test to the distro-level
gating which runs in the Fedora openQA framework; then we should
hopefully catch a bad kernel like this in the future and prevent it
from going into stable.
[1] https://github.com/containers/podman/issues/24374
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Quadlet tests and some systemd tests leak unit files, as
reported by 'systemctl list-units --failed'. Clean them up.
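In other words, after a test run these should come back clean (commands illustrative):

    systemctl list-units --failed    # must not list leftover test units
    systemctl reset-failed '<unit>'  # cleanup for any unit that did leak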
Signed-off-by: Ed Santiago <santiago@redhat.com>
The startup service is special because we have to transition from
startup to the normal unit, and in order to do so we kill ourselves (as
we are run as part of the service). This means we always exited 1, which
causes systemd to consider us failed and not remove the transient unit
unless "reset-failed" is called. As there is no process around to do
that, we cannot really do this; thus make us exit(0), which makes more
sense.
Of course we could try to reset-failed the unit later, but the code for
that seems more complicated than it is worth.
Add a new test from Ed that ensures we check all healthcheck units,
not just the timer, to avoid leaks. I slightly modified it to provide a
better error message on leaks.
Fixes: 0bbef4b830 ("libpod: rework shutdown handler flow")
Fixes: #24351
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As an internal consistency check, the pasta tests check for duplicated test
cases by grepping a log file for a parsed test id. However, they use
grep -F for the purpose, which performs not an exact match but a
substring match. Some tests generate an id which is a substring of the
id of other tests, so when test order is randomised this can cause a
spurious failure. This can happen in practice when running the tests in
parallel with very high concurrency (e.g. -j 100).
Fix this by adding the -x option to grep, which only checks for full-line
exact matches.
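The failure mode is easy to demonstrate (the ids here are made up):

    $ echo "tcp_port_100" | grep -F  "tcp_port_10" && echo DUP   # substring match: false positive
    tcp_port_100
    DUP
    $ echo "tcp_port_100" | grep -Fx "tcp_port_10" || echo ok    # full-line match: no false positive
    ok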
Fixes: https://github.com/containers/podman/issues/24342
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The additional image store feature assumes that images / layers
in the additional store never go away, but we do remove the store
after this test. Try to repair the store.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>