If idmap is specified for a volume, reverse the mappings when copying
up from the container, so that the original permissions are maintained.
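Conceptually, the reversal just swaps the container and host side of
each mapping entry. A minimal sketch in Go (the struct is a simplified
stand-in, not the actual idtools type):

    // IDMap maps a contiguous range of container IDs to host IDs.
    type IDMap struct {
        ContainerID int
        HostID      int
        Size        int
    }

    // reverseIDMaps swaps the two sides of every entry so that, during
    // the copy-up, IDs read from the container are translated back to
    // their original host values, preserving the original permissions.
    func reverseIDMaps(in []IDMap) []IDMap {
        out := make([]IDMap, 0, len(in))
        for _, m := range in {
            out = append(out, IDMap{
                ContainerID: m.HostID,
                HostID:      m.ContainerID,
                Size:        m.Size,
            })
        }
        return out
    }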
Closes: https://github.com/containers/podman/issues/23467
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
BATS teardown logs are unreadable, making it almost impossible
to see tiny "Leaked this-or-that" messages.
Solution: a new _run_podman_quiet() helper that replaces run_podman
in a small number of cases within teardown. Clunky and duplicative,
sorry.
New helper for leak_check; it basically spits out warnings (and bumps
the error count) if it sees any output whatsoever from the individual
"podman XXX ls" commands.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The network cleanup cannot handle being killed halfway through: it
spits out a bunch of errors in that case on the next cleanup attempt.
Try to avoid getting into such a state and ignore SIGTERM during this
section.
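A minimal sketch of the idea, using only the Go standard library (not
the exact podman code):

    package cleanup

    import (
        "os/signal"
        "syscall"
    )

    // runProtected ignores SIGTERM for the duration of fn so the
    // network teardown cannot be interrupted halfway through, then
    // restores the default disposition afterwards.
    func runProtected(fn func()) {
        signal.Ignore(syscall.SIGTERM)
        defer signal.Reset(syscall.SIGTERM)
        fn()
    }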
Of course we still can get SIGKILL, so we should work on fixing the
underlying problems in network cleanup, but let's see if this helps us
with the CI flakes in the meantime.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fix up a couple of version numbers in comments in the
pkg/api/server/register_images.go file, based on comments
from #23440.
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
When using service containers and play kube we create a complicated set
of dependencies.
First, in a pod all conmon/container cgroups are part of one slice;
that slice will be removed when the entire pod is stopped, resulting in
systemd killing all processes that were part of it.
Now the issue here is around the workings of stopPodIfNeeded() and
stopIfOnlyInfraRemains(): once a container is cleaned up it will check
whether the pod should be stopped depending on the pod ExitPolicy. If
this is the case it will stop all containers in that pod. However, in
our flaky test we called podman pod kill, which logically killed all
containers already. Thus the logic now thinks on cleanup it must stop
the pod and calls into pod.stopWithTimeout(). There we try to stop, but
because all containers are already stopped it just throws errors and
never gets to the point where it would call Cleanup(). So the code does
not do cleanup and eventually calls removePodCgroup(), which will cause
all conmon and other podman cleanup processes of this pod to be killed.
Thus the podman container cleanup process was likely killed while
actually trying to do the proper cleanup, which leaves us in a bad
state.
Following commands such as podman pod rm will try to do the cleanup
again, as they see it was not completed, but then fail as they are
unable to recover from the partial cleanup state.
Long term, network cleanup needs to be more robust and ideally should
be idempotent, to handle cases where cleanup was killed in the middle.
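A sketch of what idempotent means here (hypothetical cleanup step,
illustrative only): every teardown step treats "already gone" as
success, so re-running cleanup after a partial, interrupted run
finishes quietly instead of erroring:

    package cleanup

    import (
        "errors"
        "io/fs"
        "os"
    )

    // removeNetnsFile is a hypothetical idempotent cleanup step: a
    // missing file means an earlier (possibly interrupted) cleanup
    // already took care of it, so that counts as success.
    func removeNetnsFile(path string) error {
        if err := os.Remove(path); err != nil && !errors.Is(err, fs.ErrNotExist) {
            return err
        }
        return nil
    }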
Fixes #21569
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This commit was automatically cherry-picked
by buildah-vendor-treadmill v0.3
from the buildah vendor treadmill PR, #13808
Changes since 2024-05-21:
* Document --compat-volumes
* Fix conflict caused by Ed's local-registry PR in buildah
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
Split the table into three based on the expected outcome.
Use helper functions to reduce the number of parameters required in
each entry.
Remove the service name override code.
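For illustration only, the helper pattern might look roughly like this
in a Go table-driven test (the names are hypothetical, not the actual
test code):

    // testCase is a hypothetical row in one of the three split tables.
    type testCase struct {
        name    string
        input   string
        wantErr bool
    }

    // okCase builds a row that is expected to succeed, so each entry
    // only supplies the fields that actually vary.
    func okCase(name, input string) testCase {
        return testCase{name: name, input: input}
    }

    // errCase builds a row that is expected to fail.
    func errCase(name, input string) testCase {
        return testCase{name: name, input: input, wantErr: true}
    }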
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
We bind ports to ensure there are no conflicts, and we leak them into
conmon to keep them open. However, we bound the ports after the network
was set up, so it was possible for a second network setup to overwrite
the firewall configs of a previous container, since the conflict was
only detected later when binding the port. As such we must ensure we
bind before the network is set up.
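A rough sketch of the early-bind idea (hypothetical helper, not the
actual podman code). (*net.TCPListener).File() duplicates the fd, and
the port stays reserved as long as either the listener or the returned
file is open:

    package ports

    import (
        "fmt"
        "net"
        "os"
    )

    // bindEarly reserves a host port before any network/firewall setup
    // so that a conflict fails fast; the returned fd can be inherited
    // by a child process (e.g. via exec.Cmd.ExtraFiles).
    func bindEarly(port int) (*os.File, error) {
        ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
        if err != nil {
            // Conflict detected before any firewall config is written.
            return nil, fmt.Errorf("binding port %d: %w", port, err)
        }
        return ln.(*net.TCPListener).File()
    }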
This is not so simple because we still have to take care of the
PostConfigureNetNS bool, in which case the network setup happens after
we launch conmon. Thus we end up with two different conditions.
Also, it is possible that we "leak" the ports that are set on the
container until the garbage collector closes them. This is not perfect,
but the alternative is adding special error handling on each function
exit after prepare until we start conmon, which is a lot of work to do
correctly.
Fixes https://issues.redhat.com/browse/RHEL-50746
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fixes: e62c928642 ("Make podman-compose refer to podman-compose(1) when using an external provider")
- test: add coverage for PODMAN_COMPOSE_WARNING_LOGS
Signed-off-by: Petter Mikkelsen <43xhyr9m@anonaddy.me>
I broke the kube external storage test in the course of my
safename PR: _write_test_yaml() with no command generated
a pod that did not trigger the conditions required for
this test.
Solution: run a container (top). Add new checks to prevent
this gap from happening again.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The value behind the pointer might be changed while creating the
container, causing unexpected side effects.
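The usual fix for this class of bug, sketched with a hypothetical
stand-in type: copy the pointed-to value up front and use only the
copy, so later writes through the caller's pointer cannot be observed
mid-creation:

    // Spec is a hypothetical stand-in for the real config struct.
    type Spec struct {
        Name string
    }

    func createContainer(spec *Spec) {
        // Shallow-copy the value; from here on use s, never spec, so
        // changes through the original pointer have no effect on this
        // creation. Nested pointers would still be shared and would
        // need a deep copy.
        s := *spec
        _ = s // ... create the container from s ...
    }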
Signed-off-by: Paul Holzinger <pholzing@redhat.com>