Add an ExitPolicy key to pod quadlets, with logic to default to stop.
Docs updated with a clarification of the default value and a usage example.
A simple assert was added to the bats tests to verify the default constraint exists.
Changed the argument order in the ginkgo basic pod unit test.
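A minimal usage sketch (file name and contents illustrative; omitting the key behaves like ExitPolicy=stop):

```ini
# example.pod
[Pod]
PodName=example
# optional: "stop" is the default when the key is omitted
ExitPolicy=stop
```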
Signed-off-by: Neil Bailey <nbsp@nbailey.net>
podman's logic for parsing excludes from `--ignorefile` is not consistent
with buildah's; use the code directly from imagebuilder.
Closes: https://github.com/containers/podman/issues/25746
Signed-off-by: flouthoc <flouthoc.git@gmail.com>
We should fully replace the options; now that we have vendored the
libnetwork/resolvconf changes into podman, this just works.
Fixes: #22399
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When a container has no image, i.e. it uses a rootfs like our new infra
containers, the Image function crashed trying to show the first 12
characters of the image ID. If there is no image, simply show nothing there.
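A minimal sketch of the guard (function name illustrative, not podman's actual code):

```go
// imageColumn returns the shortened image ID for display, or an empty
// string for rootfs-based containers that have no image at all.
func imageColumn(imageID string) string {
	if len(imageID) < 12 {
		return "" // no image (rootfs container): show nothing
	}
	return imageID[:12]
}
```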
Fixes: #26224
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When podman restarts, config values within the Engine are lost.
Add --hook-dirs arguments as appropriate to the cleanup command
so that hooks are preserved on restarts due to the on-restart setting.
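A minimal sketch of the idea, assuming the engine config exposes its hook directories as a string slice (names illustrative):

```go
// cleanupArgs builds the "podman container cleanup" command line, forwarding
// every configured hooks directory so hooks survive the engine restart.
func cleanupArgs(ctrID string, hooksDirs []string) []string {
	args := []string{"container", "cleanup"}
	for _, dir := range hooksDirs {
		args = append(args, "--hooks-dir", dir)
	}
	return append(args, ctrID)
}
```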
Tests: add a check that prestart/poststop hooks ran every time after 2
restarts.
`wait_for_restart_count` was reused to wait for restarts and moved to the
helpers file.
Signed-off-by: Dominique Martinet <dominique.martinet@atmark-techno.com>
Fixes: #17935
The tests were incorrectly using `/dev/zero`. These options are
intended to set I/O limits on specific block devices, and `/dev/zero` is a
character device. The tests already set up a loopback device, so reuse it.
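For illustration ($LOOPDEVICE and $IMAGE are placeholders from the test environment):

```bash
# point the limit at the test's block device, not at a character device
podman run --rm --device-write-bps "$LOOPDEVICE:1mb" "$IMAGE" true
```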
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The JSON decoder rightly refuses to decode negative values (e.g., `-1`) into fields of type `uint64` (it reports an overflow), yet `-1` is used to represent `max` in `POSIXRlimit`. To handle this, we use a `tmpSpecGenerator` to decode the request body. The `tmpSpecGenerator` replaces the `POSIXRlimit` type with a `tmpRlimit` type that uses the `json.Number` type for decoding values. The `tmpRlimit` is then converted into the `POSIXRlimit` type and assigned to the `SpecGenerator`.
This approach ensures compatibility with the Podman CLI and remote API, which already handle `-1` by casting it to `uint64` (`uint64(-1)` equals `MaxUint64`) to signify `max`.
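A condensed sketch of the decoding idea (types simplified, not the exact podman code):

```go
package sketch

import (
	"encoding/json"
	"math"
	"strconv"
)

// tmpRlimit stands in for POSIXRlimit during decoding so that -1 survives
// the JSON decoder instead of triggering a uint64 overflow error.
type tmpRlimit struct {
	Type string      `json:"type"`
	Hard json.Number `json:"hard"`
	Soft json.Number `json:"soft"`
}

// toUint64 converts a decoded json.Number to uint64, mapping negative
// values to MaxUint64, i.e. uint64(-1), which signifies "max".
func toUint64(n json.Number) (uint64, error) {
	if i, err := n.Int64(); err == nil {
		if i < 0 {
			return math.MaxUint64, nil
		}
		return uint64(i), nil
	}
	// values above MaxInt64 still fit into uint64
	return strconv.ParseUint(n.String(), 10, 64)
}
```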
Fixes: https://issues.redhat.com/browse/RUN-2859
Fixes: https://github.com/containers/podman/issues/24886
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
This looks like a debug leftover; in any case, this is not an error, so
simply remove the line.
Fixes #25965
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The backstory for this is that runc 1.2 (opencontainers/runc#3967)
fixed a long-standing bug in our mount flag handling (a bug that crun
still has). Before runc 1.2, when dealing with locked mount flags that
user-namespaced containers cannot clear, trying to explicitly clear
locked flags (like rw clearing MS_RDONLY) would silently ignore the rw
flag in most cases and result in a read-only mount. This is
obviously not what the user expects.
What runc 1.2 did is make passing clearing flags
like rw always result in an attempt to clear the flag (which was
not the case before), and (in all cases) explicitly return an
error when trying to clear locked flags. (This also let us finally fix a
bunch of other long-standing issues with locked mount flags causing
seemingly spurious errors.)
The problem is that podman sets rw on all mounts by default (even if
the user doesn't specify anything). This is actually a no-op in
runc 1.1 and crun because of a bug in how clearing flags were handled
(rw is the absence of MS_RDONLY, but until runc 1.2 we didn't correctly
track clearing flags like that, meaning that rw was literally
handled as if the user had not set it at all), but in runc 1.2 it leads to
unfortunate breakage and a subtle change in behaviour: before, a ro
mount being bind-mounted into a container would also be ro (though,
due to the above bug, even setting rw explicitly would result in ro in
most cases), but with runc 1.2 the mount will always be rw, even if
the user didn't explicitly request it, which most users would find
surprising. By the way, this "always set rw" behaviour is a departure
from Docker, and it is not necessary.
Signed-off-by: rcmadhankumar <madhankumar.chellamuthu@suse.com>
This commit removes the code to build a local pause image from the
Containerfile. It is replaced with code to find the catatonit binary
and include it in the Rootfs, which removes the need to build a local
pause container image.
The same logic is also applied to createServiceContainer, which was
originally also based on the pause image.
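A rough sketch of the idea, assuming the specgen rootfs constructor is wired roughly like this (the real code differs):

```go
package sketch

import (
	"fmt"
	"os/exec"
	"path/filepath"

	"github.com/containers/podman/v5/pkg/specgen"
)

// infraSpec locates the catatonit binary on the host and builds a spec that
// runs it from a rootfs, so no local pause image has to be built.
func infraSpec() (*specgen.SpecGenerator, error) {
	catatonit, err := exec.LookPath("catatonit")
	if err != nil {
		return nil, fmt.Errorf("finding catatonit: %w", err)
	}
	// second argument selects rootfs mode instead of an image name
	g := specgen.NewSpecGenerator(filepath.Dir(catatonit), true)
	g.Entrypoint = []string{"/" + filepath.Base(catatonit), "-P"}
	return g, nil
}
```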
Fixes: #23292
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
When using a custom --root, the image will not be present there and
will therefore be pulled. However, we can use our own local cache, if
present, to avoid the pull, by passing the right podman options via
_PODMAN_TEST_OPTS.
I saw the volume quota test fail during the pull in openQA, which is
how I noticed this issue.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Somehow the files do not match sometimes; I would like to get data on
what the /etc/hosts file on the host looks like, to see if this would
explain anything.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This reverts commit d633824a95.
The issue has been fixed in commit 9a0c0b2eef and I have not seen it
since, so remove this special case.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Since v5.0 pasta is the default, and if it were not installed a ton of
tests would already fail. As such, these conditional checks are
pointless and can be removed to simplify the tests.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Create the /etc/passwd and /etc/group files before any user/group
lookup so that dynamically added entries are found by --user.
As a side effect, do not automatically create a group with the same
value as the uid when none is specified, since the container is
expected to run with gid=0.
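An illustrative run (output sketched, not captured):

```bash
# uid 1234 has no passwd entry in the image; it is added dynamically and,
# since no group is given, the process runs with gid=0 rather than gid=1234
podman run --rm --user 1234 alpine id
# expected output (illustrative): uid=1234(1234) gid=0(root) groups=0(root)
```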
Closes: https://github.com/containers/podman/issues/25805
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
A recent change[1] in netavark makes it so we no longer set the default
dns.podman search domain. As such we must no longer test for it.
[1] https://github.com/containers/netavark/pull/1214
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Since commit 945aade38b we tear down the kube units if all pods
failed to start. However, this broke the use case of an empty pod, as we
did not consider it to have started successfully, which is wrong and
caused a regression for at least one user.
To fix this, special-case the empty pod and consider it running.
Fixes: #25786
Fixes: 945aade38b ("quadlet kube: correctly mark unit as failed")
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Networks are stored in two ways in the DB: first, a static network list
which holds all the networks with their options for the container;
second, the network status, which holds the actual network result from
netavark, but only when the container is running.
If the container is running they must be in sync, and podman inspect has
checks to ensure that as well; it errors out if there is a desync between
the two.
As adding to the DB and doing the actual network configuration are
different steps, it is possible that one worked while the other failed,
which triggers the desync. To avoid this, make the network
connect/disconnect code more robust against partial failures: when the
network calls fail, we update the DB again to remove/add the network back.
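A sketch of the rollback pattern for connect, with the real operations abstracted as closures (this is not the libpod API):

```go
// connectNet sketches the rollback: write the DB entry first, and undo it
// when the actual netavark call fails so the two stores stay in sync.
func connectNet(addToDB, connect, removeFromDB func() error) error {
	if err := addToDB(); err != nil {
		return err
	}
	if err := connect(); err != nil {
		// undo the DB write so inspect does not report a desync later
		_ = removeFromDB()
		return err
	}
	return nil
}
```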
Fixes: https://issues.redhat.com/browse/RHEL-78037
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Do not use the interspersed option for logs; it is not needed and just
restricts valid use cases.
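For illustration:

```bash
podman logs --tail 5 ctr   # always worked
podman logs ctr --tail 5   # rejected before this change, valid now
```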
Fixes #25653
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Includes one minor test fix, as the line number reported in the error
changed; it seems to actually be correct now.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Giuseppe is working on some proper fixes; for now, in order to get this
moved along, skip it so we can merge the disk usage fix.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
nc can be provided by either ncat (nmap) or netcat (OpenBSD). We only
work with the nmap version, so make sure we always use that one and not
the short alias, which can resolve to either one.
It is not clear to me what changed on rawhide, but it seems netcat is
preferred even though we have nmap-ncat installed.
Note this only changes the host-side nc calls; the Alpine-based images
only have nc as a command, so we must continue to use it inside.
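Illustrative of the convention (port and variable are examples):

```bash
# host side: invoke the nmap binary explicitly, not the ambiguous alias
ncat -l 12345 &
# inside the Alpine-based test image only "nc" exists, so keep using it there
podman exec ctr nc -w 1 "$HOST_IP" 12345
```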
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Go initializes unset fields to the zero value of their type. This means the log destination ends up as an empty string and the count and size end up as 0. However, a size and count of 0 mean unbounded, and that is not the intended default behavior.
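A sketch of the idea, with illustrative names and default values (the real defaults come from containers.conf):

```go
// applyLogDefaults treats the decoder's zero values as "unset" and fills in
// real defaults, so 0 no longer silently means "unbounded".
func applyLogDefaults(dest *string, count, size *uint) {
	if *dest == "" {
		*dest = "local" // illustrative default destination
	}
	if *count == 0 {
		*count = 5 // illustrative default max log count
	}
	if *size == 0 {
		*size = 500 // illustrative default max log size
	}
}
```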
Fixes: https://github.com/containers/podman/issues/25473
Fixes: https://issues.redhat.com/browse/RHEL-83262
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
When no containers could be started we need to make sure the unit status
reflects this. This means we should not send the READY=1 message and not
keep the service container running when we were unable to start any
container.
There is the question of what should happen when only a subset was
started. For systemd we can only be either running or failed, and as
podman kube play also just keeps partially started pods running, I opted
to let systemd keep considering this a success.
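A minimal sketch of the readiness decision, assuming the go-systemd notify helper (the surrounding logic is condensed):

```go
package sketch

import "github.com/coreos/go-systemd/v22/daemon"

// notifyStarted sends READY=1 only when at least one container started;
// otherwise the unit is left to end up in the failed state.
func notifyStarted(startedContainers int) error {
	if startedContainers == 0 {
		return nil
	}
	_, err := daemon.SdNotify(false, daemon.SdNotifyReady)
	return err
}
```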
Fixes #20667
Fixes https://issues.redhat.com/browse/RHEL-80471
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When starting a container, consider healthcheck errors fatal. That way
users know when systemd-run failed to set up the timer to run the
healthcheck, and we don't get into a state where the container is running
but the healthcheck is not.
This also fixes the broken error reporting from the systemd-run exec: if
the binary could not be run, the output was just empty, leaving users
with no idea what failed.
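A condensed sketch of the error-handling change (function wiring illustrative):

```go
// startWithHealthcheck sketches the new behavior: a failure to create the
// healthcheck timer now fails the start instead of being merely logged.
func startWithHealthcheck(createTimer, start func() error) error {
	if err := createTimer(); err != nil {
		// previously only logged, leaving a running container without
		// a scheduled healthcheck
		return err
	}
	return start()
}
```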
Fixes #25034
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This resolves an ordering issue that prevented quotas from being
applied. XFS quotas are applied recursively, but only for
subdirectories created after the quota is applied; if we create
`_data` before the quota, and then use `_data` for all data in
the volume, the quota will never be used by the volume.
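A sketch of the corrected ordering (QuotaSpec and applyProjectQuota are illustrative stand-ins, not the real helpers):

```go
package sketch

import (
	"os"
	"path/filepath"
)

// QuotaSpec is a stand-in for the real quota configuration.
type QuotaSpec struct{ Size uint64 }

// applyProjectQuota is a stand-in for the XFS project-quota helper.
func applyProjectQuota(path string, q *QuotaSpec) error { return nil }

// setupVolume applies the project quota to the volume directory before
// creating _data, so _data (and everything under it) inherits the quota.
func setupVolume(volPath string, quota *QuotaSpec) error {
	if err := os.MkdirAll(volPath, 0o700); err != nil {
		return err
	}
	if quota != nil {
		if err := applyProjectQuota(volPath, quota); err != nil {
			return err
		}
	}
	// created only after the quota is in place, hence covered by it
	return os.Mkdir(filepath.Join(volPath, "_data"), 0o755)
}
```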
Also, add a test that volume quotas are working as designed using
an XFS formatted loop device in the system tests. This should
prevent any further regressions on basic quota functionality,
such as quotas being shared between volumes.
Fixes #25368
Signed-off-by: Matt Heon <mheon@redhat.com>