Move the error check for a possibly nil return value before the
value is dereferenced in importBuilder.Format.
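A minimal sketch of the pattern, with illustrative names standing in for the real importBuilder code:
```
package main

import "fmt"

type builder struct{ Format string }

// importBuilder stands in for the real constructor; it may fail and return nil.
func importBuilder() (*builder, error) {
	return nil, fmt.Errorf("import failed")
}

func main() {
	b, err := importBuilder()
	if err != nil {
		// check the error first; b may be nil at this point
		fmt.Println("error:", err)
		return
	}
	fmt.Println(b.Format) // safe: only dereferenced after the check
}
```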
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Signed-off-by: Tigran Sogomonian <tsogomonian@astralinux.ru>
Commit 5ebba75dbd implemented this
behaviour for rootless users and later commit
0a69aefa41 changed it to apply when in a user
namespace, but the same limitation exists for root without
CAP_SYS_RESOURCE. Change the check to clamp to the current
values when running without CAP_SYS_RESOURCE.
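A hedged sketch of the clamping idea using golang.org/x/sys/unix; the real CAP_SYS_RESOURCE detection and OCI spec wiring live elsewhere:
```
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// clampToCurrent lowers a requested rlimit to the current hard limit, which
// is all a process can do when it cannot raise limits (no CAP_SYS_RESOURCE).
func clampToCurrent(resource int, requested uint64) (uint64, error) {
	var cur unix.Rlimit
	if err := unix.Getrlimit(resource, &cur); err != nil {
		return 0, err
	}
	if requested > cur.Max {
		return cur.Max, nil
	}
	return requested, nil
}

func main() {
	v, err := clampToCurrent(unix.RLIMIT_NOFILE, 1<<20)
	if err != nil {
		panic(err)
	}
	fmt.Println("clamped RLIMIT_NOFILE:", v)
}
```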
Closes: https://github.com/containers/podman/issues/24692
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
1. Completed the EventerType comment.
2. Changed EventerType to be represented as a string.
3. Since EventerType is designed to be entirely lowercase, changed the comparison to use lowercase instead of uppercase (see the sketch after this list).
4. Renamed newEventJournalD to newJournalDEventer.
5. Removed redundant error-checking steps in events_linux.go.
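A rough sketch of the string-backed type and lowercase comparison; the constant values and function names are illustrative, not the exact podman code:
```
package main

import (
	"fmt"
	"strings"
)

// EventerType is represented as a string so it can be printed and compared
// directly against configuration values.
type EventerType string

const (
	Journald EventerType = "journald"
	LogFile  EventerType = "file"
)

// newEventer lowercases the configured value before comparing, since the
// canonical EventerType values are all lowercase.
func newEventer(configured string) (EventerType, error) {
	switch EventerType(strings.ToLower(configured)) {
	case Journald:
		return Journald, nil
	case LogFile:
		return LogFile, nil
	default:
		return "", fmt.Errorf("unknown eventer type %q", configured)
	}
}

func main() {
	e, err := newEventer("Journald")
	if err != nil {
		panic(err)
	}
	fmt.Println(e)
}
```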
Signed-off-by: ksw2000 <13825170+ksw2000@users.noreply.github.com>
* Add --hosts-file flag to container create, container run and pod create
* Add HostsFile field to pod inspect and container inspect results
* Test BaseHostsFile config in containers.conf
Signed-off-by: Gavin Lam <gavin.oss@tutamail.com>
New flags in `podman update` can change a container's HealthCheck configuration while it is running, without having to restart or recreate the container.
This can help determine why a given container suddenly started failing its HealthCheck without interfering with the services it provides: for example, HealthCheck can be reconfigured to keep logs longer than the usual last X results, or to store logs to other destinations, etc.
Fixes: https://issues.redhat.com/browse/RHEL-60561
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
In theory RootlessNetnsInfo() should never return nil here. However,
that was only true when the rootless netns had been set up before and
had written the proper cache file with the ip addresses.
Given this cache file is a new feature just added in 5.3, if you updated
from 5.2 or earlier the file will not exist, which causes failures for
all subsequently started containers.
The proper fix is to stop all containers and make sure the
rootless-netns is removed, so that the next start recreates it with the
proper 5.3 cache file. However, we cannot rely on users doing that, nor
is it a requirement, so simply handle the nil dereference here.
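A minimal sketch of the guard, with a hypothetical info type standing in for the real c/common structures:
```
package main

import "fmt"

// netnsInfo stands in for the data normally read from the rootless-netns
// cache file written by 5.3.
type netnsInfo struct{ IPAddresses []string }

// rootlessNetnsInfo returns nil when the cache file from an older version is
// missing, which is exactly the case this change has to tolerate.
func rootlessNetnsInfo() *netnsInfo { return nil }

func main() {
	info := rootlessNetnsInfo()
	if info == nil {
		// no cache file from a pre-5.3 setup: fall back to defaults
		// instead of dereferencing a nil pointer
		fmt.Println("no rootless-netns cache, using defaults")
		return
	}
	fmt.Println(info.IPAddresses)
}
```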
The only way to test this would be to run the old version and then the
new version, which we cannot really do in CI. We do have upgrade tests
for that, but they are root only and likely need a lot more work to run
rootless; still, that is certainly worth exploring to prevent such
problems in the future.
Fixes: a1e6603133 ("libpod: make use of new pasta option from c/common")
Fixes: #24566
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Commit 5ebba75dbd implemented this
behaviour for rootless users, but the same limitation exists for any
user in a user namespace. Change the check to clamp to the
current values anytime podman runs in a user namespace.
Closes: https://github.com/containers/podman/issues/24508
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
All the backend work was done a while back for image volumes, so
this is effectively just plumbing the option in for volumes in
the parser logic. We do need to change the return type of the
volume parser, as it previously worked only on spec.Mount (which has
no subpath support, so we would have had to pass subpath as an option
and parse it again), but that is cleaner than the alternative.
Fixes #20661
Signed-off-by: Matt Heon <mheon@redhat.com>
Add support for inspecting Mounts which include SubPaths.
Handle SubPaths for kubernetes image volumes.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This commit resolves an issue where network creation and removal events were not being logged in `podman events`. A new function has been introduced in the `events` package to ensure consistent logging of network lifecycle events. This update will allow users to track network operations more effectively through the event log, improving visibility and aiding in debugging network-related issues.
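A hypothetical sketch of such a helper; the actual function name and event fields in the events package may differ:
```
package main

import "fmt"

// networkEvent stands in for the events-package type used by podman.
type networkEvent struct {
	Name   string // network name
	Status string // "create" or "remove"
}

// newNetworkEvent centralizes construction of network lifecycle events so
// create and remove are always logged consistently.
func newNetworkEvent(name, status string) networkEvent {
	return networkEvent{Name: name, Status: status}
}

func main() {
	fmt.Printf("%+v\n", newNetworkEvent("mynet", "create"))
	fmt.Printf("%+v\n", newNetworkEvent("mynet", "remove"))
}
```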
Fixes: #24032
Signed-off-by: Sainath Sativar <Sativar.sainath@gmail.com>
This is not needed; it was added during debugging, but the problem
turned out to be something else. We should not lock the thread unless
necessary, because otherwise it only raises the question of why it is
here. Also, the lock would not do much, as we spawn a goroutine below
anyway, so the work runs on another thread no matter what.
This follows up on a review comment by Miloslav; the PR was merged
before I had the chance to address it:
https://github.com/containers/podman/pull/24406#discussion_r1828102666
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
One of the problems with the Events() API was that you had to call it
in a new goroutine. This meant the error returned by it had to be read
back via a second channel. This caused other bugs in the past, but here
the biggest problem is that basic errors, such as invalid since/until
options, were not returned directly to the caller.
It also meant the API could not write HTTP status 200 quickly, because
we always waited for the first event or error from the channels. This
in turn made some clients unhappy, as they assumed the server was
hanging or timing out when no such events were generated.
To fix this we restructure the entire event flow. First, we spawn the
goroutine inside the eventer Read() function so not all callers have
to. Then we can return the basic error quickly without the goroutine.
The caller checks the error like any normal function, and the API can
use it to decide which status code to return.
Second, we now return errors and events on one channel, and the callers
can decide to ignore or log them, which makes things a bit clearer.
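A simplified sketch of the resulting control flow with hypothetical types; the real eventer Read() signature differs:
```
package main

import (
	"errors"
	"fmt"
)

type event struct{ Name string }

// result carries either an event or an error on a single channel, so callers
// can decide whether to log or ignore failures.
type result struct {
	Event *event
	Err   error
}

// readEvents validates options up front and returns basic errors directly;
// only then does it spawn the goroutine that feeds the channel.
func readEvents(since string) (<-chan result, error) {
	if since == "invalid" {
		return nil, errors.New("invalid --since value")
	}
	ch := make(chan result)
	go func() {
		defer close(ch)
		ch <- result{Event: &event{Name: "container-start"}}
	}()
	return ch, nil
}

func main() {
	ch, err := readEvents("1h")
	if err != nil {
		// the API handler can pick a status code from this error before
		// writing anything to the client
		fmt.Println("bad request:", err)
		return
	}
	for r := range ch {
		if r.Err != nil {
			fmt.Println("event error:", r.Err)
			continue
		}
		fmt.Println("event:", r.Event.Name)
	}
}
```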
Fixes c46884aa93 ("podman events: check for an error after we finish reading events")
Fixes #23712
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This type is unused, undocumented, and basically broken. If it were
used anywhere, it would simply deadlock after writing 100+ events
without a reader, as the channel would be full.
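A tiny illustration of the failure mode: with a buffered channel and no reader, the write after the buffer fills blocks forever:
```
package main

import "fmt"

func main() {
	events := make(chan int, 100) // fixed-size buffer, nothing reads it
	for i := 0; i < 100; i++ {
		events <- i // fills the buffer
	}
	fmt.Println("buffer full; one more unread write would block forever")
	// events <- 100 // uncommenting this line deadlocks the program
}
```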
It was added in commit 8da5f3f733 but never used there, nor is there
any justification for adding it in the commit message or PR comments.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use the internal Wait() API instead of the events API, as it is much
more efficient; reading events would otherwise require processing a lot
of data. For the function here it works fine, and it is even better
because it does not depend on the event logger at all.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Setup2() calls Setup(), so they are effectively the same thing; the
idea was to keep Setup2() around in c/common for a bit to avoid
breaking changes during our regular vendoring. Now just use Setup() so
we can get rid of Setup2() in c/common.
a7415c3eab
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The cgroup.Stat() operation is not atomic, so it's possible that the
cgroup is removed during the Stat() call. Catch specific errors that
can occur when the cgroup is missing and validate the existence of the
cgroup path.
If the cgroup is not found, return a more specific error indicating
that the container has been removed.
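A hedged sketch of the approach: treat not-found errors from the stat as "container removed", after confirming the cgroup path is really gone (paths and error values are illustrative):
```
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

var errContainerRemoved = errors.New("container has been removed")

// statCgroup stands in for the real cgroup.Stat() call, which can race with
// cgroup removal because the read is not atomic.
func statCgroup(path string) error {
	_, err := os.ReadFile(path + "/memory.current")
	return err
}

func containerStats(cgroupPath string) error {
	if err := statCgroup(cgroupPath); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			// double-check: if the cgroup directory itself is gone,
			// report the clearer, more specific error
			if _, statErr := os.Stat(cgroupPath); os.IsNotExist(statErr) {
				return errContainerRemoved
			}
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(containerStats("/sys/fs/cgroup/does-not-exist"))
}
```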
Closes: https://github.com/containers/podman/issues/23789
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We reset the failed unit so as not to leak it; however, we did so
before stopping. This is wrong, because if the stop fails we will again
have a failed unit. The correct order is to reset after the stop,
because once the unit is stopped it cannot create new errors.
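A sketch of the corrected ordering using the go-systemd D-Bus API; this assumes that dependency, and the actual podman code path differs:
```
package main

import (
	"context"
	"fmt"

	"github.com/coreos/go-systemd/v22/dbus"
)

// stopAndReset stops the transient healthcheck unit first and only then
// resets its failed state, so a failed stop cannot leak a failed unit.
func stopAndReset(ctx context.Context, conn *dbus.Conn, unit string) error {
	ch := make(chan string, 1)
	if _, err := conn.StopUnitContext(ctx, unit, "fail", ch); err != nil {
		return fmt.Errorf("stopping %s: %w", unit, err)
	}
	<-ch // wait for the stop job to finish
	// once the unit is stopped it cannot produce new failures, so reset now
	return conn.ResetFailedUnitContext(ctx, unit)
}

func main() {
	conn, err := dbus.NewUserConnectionContext(context.Background())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println(stopAndReset(context.Background(), conn, "example.service"))
}
```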
I found this using the following reproducer and this is enough to fix
it:
```
while :; do
cid=$(podman run -d --name foo --health-cmd /home/podman/healthcheck \
--health-startup-cmd /home/podman/healthcheck \
quay.io/libpod/testimage:20241011 /home/podman/pause)
podman healthcheck run $cid
podman rm -fa
sleep 2
systemctl --user list-units --failed | grep $cid && break
done
```
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The startup service is special because we have to transition from
startup to the normal unit, and in order to do so we kill ourselves (as
we run as part of the service). This means we always exited 1, which
causes systemd to keep us in a failed state and not remove the
transient unit unless "reset-failed" is called. As there is no process
around to do that, we cannot really do this, so make us exit(0)
instead, which makes more sense.
Of course we could try to reset-failed the unit later, but the code for
that seems more complicated than it is worth.
Add a new test from Ed that ensures we check for all healthcheck units,
not just the timer, to avoid leaks. I slightly modified it to provide a
better error on leaks.
Fixes: 0bbef4b830 ("libpod: rework shutdown handler flow")
Fixes: #24351
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This will appease the higher-level quota logic. Basically, to
find a free quota ID to prevent reuse, we will iterate through
the contents of the directory and check the quota IDs of all
subdirectories, then use the first free ID found that is larger
than the base ID (the one set on the base directory). Problem:
our volumes use a two-tier directory structure, where the volume
has an outer directory (with the name of the actual volume) and
an inner directory (always named _data). We were only setting the
quota on _data, meaning the outer directory did not have an ID,
and the ID-choosing logic thus never detected that any IDs had
been allocated and always chose the same ID.
Setting the ID on the outer directory with PROJINHERIT set makes
the ID allocation logic work properly, and guarantees children
inherit the ID - so _data and all contents of the volume get the
ID as we'd expect.
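A hedged sketch of the ID-selection logic described above; getProjectID is a hypothetical helper standing in for the real quota query:
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// getProjectID is hypothetical; the real code reads the project quota ID of a
// directory through the filesystem's quota interfaces.
func getProjectID(path string) (uint32, error) { return 0, nil }

// nextFreeID walks the immediate children of dir and returns the first ID
// above baseID that no subdirectory already uses. This only works if the
// volume's outer directory carries an ID (with PROJINHERIT), not just _data.
func nextFreeID(dir string, baseID uint32) (uint32, error) {
	used := map[uint32]bool{}
	entries, err := os.ReadDir(dir)
	if err != nil {
		return 0, err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		id, err := getProjectID(filepath.Join(dir, e.Name()))
		if err != nil {
			return 0, err
		}
		used[id] = true
	}
	id := baseID + 1
	for used[id] {
		id++
	}
	return id, nil
}

func main() {
	fmt.Println(nextFreeID("/var/lib/containers/storage/volumes", 100))
}
```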
No tests as we don't have a filesystem in our CI that supports
XFS quotas (setting it on / needs kernel flags added).
Fixes https://issues.redhat.com/browse/RHEL-18038
Signed-off-by: Matt Heon <mheon@redhat.com>
The previous implementation expected the rlimits to be set for the
entire process and clamped the values only when running rootless.
Change the implementation to always specify the expected values in the
OCI spec file and do the clamping only when running rootless and
using the default values.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
In the common scenario of podman-remote run --rm, the API must
attach + start + wait to get the exit code. This has the problem that
the wait call races against container removal by the cleanup process,
so it may not get the exit code back. However, we keep the exit code
around for longer than the container, so we can just look it up in the
endpoint. Of course this only works when we get a full id as the
param, but podman-remote will do that.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Undoing some of my own work here from #24090 now that we have the
ExposedPorts field implemented in inspect. I considered a revert
of that patch, but it's still needed: without it we'd include
exposed ports when --net=container is used, which is not
correct.
Basically, exposed ports for a container should always go in the
new ExposedPorts field we added. They sometimes go in the Ports
field in NetworkSettings, but only when the container is not
net=host and not net=container. We were always including exposed
ports, which was not correct, but is an easy logical fix.
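A rough sketch of that decision with simplified types; the real inspect structures are more involved:
```
package main

import "fmt"

type netMode string

func (n netMode) isHost() bool      { return n == "host" }
func (n netMode) isContainer() bool { return n == "container" }

// fillPorts always reports exposed ports in ExposedPorts, but only mirrors
// them into NetworkSettings.Ports for an ordinary network namespace.
func fillPorts(mode netMode, exposed []string) (map[string]struct{}, []string) {
	exposedOut := map[string]struct{}{}
	for _, p := range exposed {
		exposedOut[p] = struct{}{}
	}
	if mode.isHost() || mode.isContainer() {
		return exposedOut, nil // no Ports entries for host/container netmode
	}
	return exposedOut, exposed
}

func main() {
	e, p := fillPorts("bridge", []string{"80/tcp"})
	fmt.Println(e, p)
}
```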
Also required is a test change to correct the expected behavior
as we were testing for incorrect behavior.
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
The kernel checks that both the uid and the gid are mapped inside the
user namespace, not only the uid:
```
/**
 * privileged_wrt_inode_uidgid - Do capabilities in the namespace work over the inode?
 * @ns: The user namespace in question
 * @idmap: idmap of the mount @inode was found from
 * @inode: The inode in question
 *
 * Return true if the inode uid and gid are within the namespace.
 */
bool privileged_wrt_inode_uidgid(struct user_namespace *ns,
                                 struct mnt_idmap *idmap,
                                 const struct inode *inode)
{
        return vfsuid_has_mapping(ns, i_uid_into_vfsuid(idmap, inode)) &&
               vfsgid_has_mapping(ns, i_gid_into_vfsgid(idmap, inode));
}
```
For this reason, improve the hasCurrentUserMapped check to verify
that the gid is also mapped, and if it is not, use an intermediate
mount for the container rootfs.
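A hedged Go sketch of the improved check, reading /proc/self/uid_map and /proc/self/gid_map; the real implementation lives in the runtime code and differs in detail:
```
package main

import (
	"bufio"
	"fmt"
	"os"
)

// idMapped reports whether hostID falls inside one of the ranges listed in a
// /proc/*/{uid,gid}_map file (fields: containerID hostID size).
func idMapped(mapFile string, hostID uint64) (bool, error) {
	f, err := os.Open(mapFile)
	if err != nil {
		return false, err
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var inside, outside, size uint64
		if _, err := fmt.Sscan(scanner.Text(), &inside, &outside, &size); err != nil {
			return false, err
		}
		if hostID >= outside && hostID < outside+size {
			return true, nil
		}
	}
	return false, scanner.Err()
}

// hasCurrentUserMapped now requires both the uid and the gid to be mapped,
// mirroring the kernel's privileged_wrt_inode_uidgid() check above.
func hasCurrentUserMapped(uid, gid uint64) bool {
	u, _ := idMapped("/proc/self/uid_map", uid)
	g, _ := idMapped("/proc/self/gid_map", gid)
	return u && g
}

func main() {
	fmt.Println(hasCurrentUserMapped(uint64(os.Getuid()), uint64(os.Getgid())))
}
```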
Closes: https://github.com/containers/podman/issues/24159
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
In this code, g.HostSpecific is _always_ false, as it is never set by
generate.New and is thus left at the default value (false).
Remove dead code.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
A field we missed versus Docker. Matches the format of our
existing Ports list in the NetworkConfig, but only includes
exposed ports (and maps these to struct{}, as they never go to
real ports on the host).
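In inspect output the field is a set keyed by port/protocol; a minimal sketch of the shape:
```
package main

import "fmt"

// inspectConfig shows the Docker-compatible shape: the value type is an empty
// struct because exposed ports never map to real ports on the host.
type inspectConfig struct {
	ExposedPorts map[string]struct{} `json:"ExposedPorts,omitempty"`
}

func main() {
	cfg := inspectConfig{ExposedPorts: map[string]struct{}{"80/tcp": {}}}
	fmt.Println(cfg.ExposedPorts)
}
```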
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
Previously, we didn't bother including exposed ports in the
container config when creating a container with --net=host. Per
Docker this isn't really correct; host-net containers are still
considered to have exposed ports, even though that specific
container can be guaranteed to never use them.
We could just fix this for host containers, but we might as well
make it generic. This patch unconditionally adds exposed ports to
the container config - it was previously conditional on a network
namespace being configured. The behavior of `podman inspect` with
exposed ports when using `--net=container:` has also been
corrected. Previously, we used exposed ports from the container
sharing its network namespace, which was not correct. Now, we use
regular port bindings from the namespace container, but exposed
ports from our own container.
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
As shown in #23671, these functions can return the raw error without
any useful context, which makes it hard for the user to understand
where things went wrong. Simply add some context to these error paths.
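The pattern is ordinary error wrapping; a one-function sketch with an illustrative message:
```
package main

import (
	"errors"
	"fmt"
)

func loadFile(path string) error {
	rawErr := errors.New("no such file or directory")
	// the raw error alone doesn't say which step failed; wrap it with context
	return fmt.Errorf("reading %q: %w", path, rawErr)
}

func main() {
	fmt.Println(loadFile("/tmp/input.yaml"))
}
```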
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Currently podman run -d can exit 0 if we receive SIGTERM during
startup, even though the container was never started. That just doesn't
make any sense and is horribly confusing for an external job manager
like systemd.
The original motivation was to exit 0 for podman.service, in commit
ca7376bb11. That does make sense, but it should only apply to the
service and only if the server did indeed shut down gracefully.
So we rework how the exit logic works: do not let the handler perform
the exit. Instead, the shutdown package performs the exit after all
handlers are run, which solves the ordering issue. We then default to
exit code 1 as before and allow the service exit handler to override
it with exit code 0 in case of a graceful shutdown.
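A schematic sketch of the reworked flow with hypothetical helpers; the real logic lives in podman's shutdown package:
```
package main

import (
	"fmt"
	"os"
)

// exitCode defaults to 1, as before; only a graceful service shutdown may
// overwrite it with 0.
var exitCode = 1

var handlers []func()

// onShutdown runs every registered handler and only then exits, so the exit
// no longer depends on which handler happens to call os.Exit first.
func onShutdown() {
	for _, h := range handlers {
		h()
	}
	os.Exit(exitCode)
}

func main() {
	// the service exit handler overrides the code only on a graceful shutdown
	handlers = append(handlers, func() {
		gracefulShutdown := true
		if gracefulShutdown {
			exitCode = 0
		}
	})
	fmt.Println("received SIGTERM, running shutdown handlers")
	onShutdown()
}
```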
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When we are killed during netns setup, the netns path leaks because it
was not yet committed to the db. This is rather common if you run
systemctl stop on a podman systemd unit. Of course we cannot protect
against SIGKILL, but in the systemd case we get SIGTERM, and we really
should not exit in a critical section like this.
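A sketch of the idea using a hypothetical inhibit/uninhibit pair around the critical section; podman's shutdown handling provides something along these lines:
```
package main

import (
	"fmt"
	"sync"
)

var shutdownLock sync.RWMutex

// inhibit blocks signal-driven exits while held, so a SIGTERM delivered during
// netns setup cannot make us exit before the netns path is committed to the db.
func inhibit()   { shutdownLock.RLock() }
func uninhibit() { shutdownLock.RUnlock() }

func setupNetNS() error {
	inhibit()
	defer uninhibit()
	// ... create the netns and commit its path to the database ...
	return nil
}

func main() {
	fmt.Println(setupNetNS())
}
```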
Fixes #24044
Signed-off-by: Paul Holzinger <pholzing@redhat.com>