As part of our database init, we perform a check of the current
values for a few fields (graph driver, graph root, static dir,
and a few more) to validate that Libpod is being started with a
sane & sensible config, and that the user's containers can actually be
expected to work. Basically, we take the current runtime config
and compare against values cached in the database from the first
time Podman was run.
We've had issues with this logic earlier this year around
symlink resolution, but this is a new edge case. Somehow, the
database is being loaded with the empty string for some fields
(at least graph driver), which causes comparisons to fail
because we never compare against "" for those fields - we
insert the default value instead, assuming we have one.
Having a value of "" in the database largely invalidates the
check so arguably we could just drop it, but what BoltDB did -
and what SQLite does after this patch - is to use the default
value for comparison instead of "". This should still catch some
edge cases, and shouldn't be too harmful.
What this does not do is identify or solve the reason that we are
seeing the empty string in the database at all. From my read on
the logic, it must mean that the graph driver is explicitly set
to "" in the c/storage config at the time Podman is first run and
I'm not precisely sure how that happens.
Fixes #24738
Signed-off-by: Matt Heon <mheon@redhat.com>
Checking the status of the HealthCheck service and timer could produce an unexpected exit code `3` from `systemctl status`: the status check may run after the HealthCheck has already exited, at which point the service is in the dead state and `systemctl status` reports code `3`.
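For reference, exit code `3` from `systemctl status` only means "unit is not running"; a check that can race with the HealthCheck exiting has to tolerate it explicitly. A minimal sketch (the unit name is illustrative):

```bash
systemctl status hc-demo.service
rc=$?
# `systemctl status` exits 3 for a loaded-but-inactive (dead) unit;
# only other non-zero codes indicate a real failure of the check itself.
if [ "$rc" -ne 0 ] && [ "$rc" -ne 3 ]; then
    echo "unexpected systemctl exit code: $rc" >&2
fi
```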
Fixes: https://github.com/containers/podman/issues/25204
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
The test `podman selinux: check unsupported relabel` has been failing
recently on Fedora rawhide.
This is due to a regression in the `ls` command itself. The workaround for
now is to switch to `getfattr -n security.selinux ...`.
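For illustration, the attribute can be read directly without `ls -Z` (the path and label shown are examples):

```bash
getfattr --absolute-names -n security.selinux /tmp/testfile
# expected output:
#   # file: /tmp/testfile
#   security.selinux="unconfined_u:object_r:user_tmp_t:s0"
```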
Ref: https://github.com/containers/podman/issues/25132#issuecomment-2615744915
Fixes: #25132
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
Fixes: https://github.com/containers/podman/issues/25002
Also add the ability to inspect containers for
UseImageHosts and UseImageHostname.
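A hypothetical way to query the new fields, assuming they are surfaced under `.HostConfig` in the inspect output (the exact field location is not stated here):

```bash
podman container inspect myctr \
    --format '{{ .HostConfig.UseImageHosts }} {{ .HostConfig.UseImageHostname }}'
```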
Finally, fix some bugs in the handling of --no-hosts for Pods,
which I discovered.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When running this test on a system without unqualified-search registries
it fails with a different error. To avoid that case, define our own
registries.conf that sets quay.io as the registry. This should make
the test pass in the Debian environment.
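Roughly the shape of the fixture (the file path and how it is wired up are illustrative, not the exact test code):

```bash
cat > "$PODMAN_TMPDIR/registries.conf" <<EOF
unqualified-search-registries = ["quay.io"]
EOF
export CONTAINERS_REGISTRIES_CONF="$PODMAN_TMPDIR/registries.conf"
```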
Signed-off-by: Paul Holzinger <git@holzinger.dev>
Found in Debian testing, where by default there are no unqualified-search
registries installed. As such, the test failed just as the FIXME said. Now
there is no need for the test to assume anything.
Instead, set our own config via CONTAINERS_REGISTRIES_CONF so we can
do exact matches. However, that env var was not read by the shell completion
code, so move some code around to make it read the var the same way
podman login/logout do.
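With the env var honored, the completion check can assert an exact match. A sketch using cobra's hidden completion entrypoint (the real test goes through its own helper):

```bash
export CONTAINERS_REGISTRIES_CONF="$PODMAN_TMPDIR/registries.conf"
podman __completeNoDesc login ""   # should list exactly the registries from our config
```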
Signed-off-by: Paul Holzinger <git@holzinger.dev>
with default values when changing the configuration with podman update.
The new LinuxResource structure does not represent the current unchanged configuration, which was not affected by the change.
Fixes: https://issues.redhat.com/browse/RUN-2375
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
The command `podman info` returned only one imagestore in
`store.graphOptions.<driver>.imagestore` even if multiple
image stores were configured.
This change replaces the field `<driver>.imagestore` with
the field `<driver>.additionalImageStores`, which, instead
of a single string, is an array of strings that includes all
the configured additional image stores.
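For example (the store paths are illustrative):

```bash
podman info --format json | jq '.store.graphOptions'
# before: "overlay.imagestore": "/mnt/store2"            (only one store reported)
# after:  "overlay.additionalImageStores": ["/mnt/store1", "/mnt/store2"]
```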
Fix https://github.com/containers/storage/issues/2094
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
New flags for `podman update` can change the HealthCheck configuration of a container while it is running, without having to restart or recreate the container.
This can help determine why a given container suddenly started failing its HealthCheck, without interfering with the services it provides. For example, reconfigure the HealthCheck to keep logs longer than the usual last X results, store logs to other destinations, etc.
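A hypothetical invocation, assuming the new flags mirror the `--health-*` options of `podman run` (the exact flag names are not listed in this message):

```bash
podman update mycontainer \
    --health-max-log-count 50 \
    --health-log-destination /var/log/mycontainer-healthcheck
```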
Fixes: https://issues.redhat.com/browse/RHEL-60561
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Test is failing on 1mt because of differences between 'stat'
command output and /proc/mounts. Solution: compare stat %t
(hex filesystem type), not %T (human-readable). This should
match no matter what kernel version or version of stat is on the
host/container.
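The distinction in question (example values from a host /proc mount):

```bash
stat -f -c '%T' /proc   # "proc" - human-readable name; spelling depends on the stat build
stat -f -c '%t' /proc   # "9fa0" - raw hex filesystem magic (PROC_SUPER_MAGIC), stable everywhere
```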
Fixes: #24611
Signed-off-by: Ed Santiago <santiago@redhat.com>
Final cleanup. Has been working fine in #23257 for weeks.
Not much gain here, but every little bit helps.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Previous version was badly broken: it relied on 'make'
rebuilding a file under cwd, which is a no-no; and, in
the case where we don't have a source directory, just
blindly hoped that there'd be a system-installed .service
file with the correct path to podman.
Solution:
. if running in a source directory, run sed directly into the
destination service file in $UNIT_DIR (sketched below). This is
ugly duplication of a line in the Makefile.
. if NOT running in a source directory, check $PODMAN:
. if it's /usr/bin/podman, continue. Include a warning
that will be shown only on test failure.
. otherwise skip, because we don't know what we're testing
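The in-tree branch boils down to the same substitution the Makefile performs; a sketch, assuming the auto-update service file (the `@@PODMAN@@` placeholder comes from the `.in` templates under contrib/systemd):

```bash
sed -e "s;@@PODMAN@@;$PODMAN;g" \
    contrib/systemd/system/podman-auto-update.service.in \
    > "$UNIT_DIR/podman-auto-update.service"
```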
Signed-off-by: Ed Santiago <santiago@redhat.com>
Up to now this test has been run using:
PODMAN_TIMEOUT=2 run_podman kube play ...
...and this gives podman time to start the pod before getting
the signal.
When run in parallel, under heavy load, the above command seems
to time out before podman has gotten its act together. Weird
things happen, like unexpected exit statuses and (most crucially)
zombie containers.
Solution: wait for container to actually start before we kill it.
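A sketch of the new ordering (names are illustrative; the real test uses its own helpers):

```bash
podman kube play --wait ./test.yaml &
play_pid=$!
# Only deliver the signal once the pod's container is actually running.
timeout 10 bash -c \
  'until podman container inspect --format "{{.State.Running}}" test_pod-ctr 2>/dev/null | grep -q true; do
     sleep 0.5
   done'
kill -INT "$play_pid"
wait "$play_pid"
```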
Signed-off-by: Ed Santiago <santiago@redhat.com>
Regression test for #23550. Setting the TZDIR env should make no
difference for the "local" timezone, as that is not a real timezone name
resolved from that directory.
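The shape of the test, following the system-test helper conventions (the directory is arbitrary and the exact assertion may differ):

```bash
expected=$(date +%Z)     # host's local zone abbreviation
TZDIR=/some/other/dir run_podman run --rm --tz=local $IMAGE date +%Z
is "$output" "$expected" "--tz=local must not be affected by TZDIR"
```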
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add support for inspecting Mounts which include SubPaths.
Handle SubPaths for kubernetes image volumes.
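A hypothetical query, assuming the new mount field is surfaced as `SubPath` in the inspect output:

```bash
podman container inspect myctr \
    --format '{{ range .Mounts }}{{ .Destination }} subpath={{ .SubPath }}{{ "\n" }}{{ end }}'
```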
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Debug for #23913. I thought that if we have no idea which process is nuking
the volume, then we need to figure this out. As there is no reproducer,
we can (ab)use the cleanup tracer: simply trace all unlink syscalls to
see which process deletes our special named volume. Given that the volume
name is used as a path on the fs and is deleted on volume rm, we should
know exactly which process deleted it the next time, hopefully.
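The general technique looks something like this bpftrace one-liner (the CI cleanup tracer's actual implementation may differ; the volume name is a placeholder):

```bash
bpftrace -e 'tracepoint:syscalls:sys_enter_unlinkat {
    printf("%d %s unlinkat %s\n", pid, comm, str(args->pathname));
}' | grep --line-buffered "my-special-volume"
```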
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
In preparation for maybe some day being able to run build tests
in parallel.
SUPER IMPORTANT NOTE! BUILD TESTS CANNOT BE PARALLELIZED YET!
buildah, when run in parallel, barfs with:
race: parallel builds: copying...committing...creating... layer not known
Until this is fixed, podman-build can never be run in parallel.
See https://github.com/containers/buildah/issues/5674
This PR is simply cleaning things up so, if/when that day comes,
the ensuing parallelize PR will be short & sweet.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The recent fedora kernel 6.11.4 has a problem with ipv6 networks [1].
This is not a podman bug at all but rather a kernel regression. I can
reproduce the issue easily by running this test.
Given that many users were hit by this, add it to the distro-level gating
which runs in the Fedora openQA framework; then we should hopefully catch a
bad kernel like this in the future and prevent it from going
into stable.
[1] https://github.com/containers/podman/issues/24374
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Quadlet tests and some systemd tests leak unit files, as
reported by 'systemctl list-units --failed'. Clean them up.
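The cleanup amounts to roughly the following (a sketch, not the exact test code):

```bash
systemctl list-units --failed --plain --no-legend | awk '{print $1}' |
while read -r unit; do
    systemctl reset-failed "$unit"
done
```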
Signed-off-by: Ed Santiago <santiago@redhat.com>