If a container was stopped and we tried to start it before calling
cleanup, it reused the old network namespace, which caused a panic as
the pasta code cannot deal with that. Reusing the netns is also never
correct, as the netns must be created by the runtime when custom user
namespaces are used. The proper fix is to clean up the netns first.
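A minimal sketch of the resulting order, with illustrative helper
names rather than the real Libpod methods:

```go
// Illustrative only: cleanupNetwork/startContainer stand in for the
// real Libpod internals.
func restartStoppedContainer(c *Container) error {
	// Tear down any leftover netns first: pasta cannot re-attach to
	// an existing namespace, and with custom user namespaces the
	// netns must be created fresh by the runtime anyway.
	if err := cleanupNetwork(c); err != nil {
		return fmt.Errorf("cleaning up network: %w", err)
	}
	return startContainer(c)
}
```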
Also change an e2e test to report better errors. It is not directly
related to this change, but it failed on v1 of this patch, which is
how we noticed the ugly error message it produced. Thanks to Ed for
the fix.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This avoids dereferencing c.config.Spec.Linux if it is nil, which is the
case on FreeBSD.
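A hedged sketch of the guard; the Spec fields follow the OCI runtime
spec types (github.com/opencontainers/runtime-spec/specs-go), and the
helper is illustrative:

```go
// On FreeBSD c.config.Spec.Linux is nil, so guard before touching
// any Linux-only fields.
if lin := c.config.Spec.Linux; lin != nil && lin.Resources != nil {
	applyResourceLimits(lin.Resources) // illustrative helper
}
```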
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
This fixes a regression introduced in commit 4fd84190b8: because the
unit name was overwritten by the createTimer() call, the
removeTransientFiles() call removed the new timer and not the startup
healthcheck timer. Then, when the container was stopped, we leaked
the timer because the wrong unit name was stored in the state.
A new test has been added to ensure the logic works and we never leak
the systemd timers.
Fixes #22884
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Debian's hostname(5) man page states: "The file should contain a
single newline-terminated hostname string."
[NO NEW TESTS NEEDED]
Fixes #22729
Signed-off-by: Bo Wang <wangbob@uniontech.com>
When an empty volume is mounted into a container, Docker will
chown that volume appropriately for use in the container. Podman
does this as well, but there are differences in the details. In
Podman, a chown is presently a one-and-done deal; in Docker, it
will continue for as long as the volume remains empty. Mount it into
a dozen containers without ever adding content, and the chown occurs
every time. The chown is also linked to copy-up; it always occurs
when a copy-up occurred, even though the volume is no longer empty by
then.
This PR changes our logic to (mostly) match Docker's.
For some reason, the chowning also stops if the volume is chowned
to root at any point. This feels like a Docker bug, but as they
say, bug for bug compatible.
In retrospect, using bools for NeedsChown and NeedsCopyUp was a
mistake. Docker isn't actually tracking this stuff; they're just
doing a copy-up and permissions change unconditionally as long as
the volume is empty. They also have the two linked as one
operation, seemingly, despite happening at very different times
during container init. Replicating that in our stateful system is
nontrivial, hence the need for the new CopiedUp field. Basically,
we never want to chown a volume with contents in it, except if
that data is a result of a copy-up that resulted from mounting
into the current container. Tracking who did the copy-up is the
easiest way to do this.
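A rough sketch of the resulting rule, using illustrative names for
the state we track:

```go
// needsChown mirrors the (mostly) Docker-compatible behavior: empty
// volumes are chowned on every mount; non-empty volumes only when
// their contents came from a copy-up done for this same container.
func needsChown(volumeEmpty, copiedUpForThisCtr bool) bool {
	if volumeEmpty {
		return true
	}
	return copiedUpForThisCtr
}
```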
Fixes #22571
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
WaitForExit now returns the exit code stored in the DB instead of
returning an error when the container has been removed.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Wait for another interval when the container transitions to "stopped"
to give the healthcheck status more time to change.
Closes: https://github.com/containers/podman/issues/22760
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We have to exclude the IPs in the rootless netns, as they are not on
the host. This fix only works if more than one IP is available on the
host; if there is only one, we do not set the entry at all. I
consider that better, as failing to resolve this name is a much
clearer error for users than connecting to the wrong IP. It also
matches what --network pasta already does.
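Sketched with illustrative helpers, the selection looks roughly like:

```go
// Drop addresses that belong to the rootless netns; they are not
// reachable host addresses from inside the container.
hostIPs := excludeIPs(allHostIPs, rootlessNetnsIPs)
if len(hostIPs) > 0 {
	addHostEntry("host.containers.internal", hostIPs[0])
}
// With nothing left we skip the entry: failing to resolve the name
// is a clearer error than connecting to the wrong address.
```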
The test is a bit more complicated than I would like; however, it
must deal with both cases (one IP, and more than one), so I think
there is no way around it.
Fixes #22653
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Wait for the healthy status on the thread where the container lock is
held. Otherwise, if the wait is performed from a goroutine, a
different OS thread is used (the runtime.LockOSThread() call has no
effect there), causing pthread_mutex_unlock() to fail with EPERM.
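Roughly, the shape of the fix (illustrative names; the point is that
lock, wait, and unlock all stay on one OS thread):

```go
runtime.LockOSThread() // pin this goroutine to its OS thread
defer runtime.UnlockOSThread()

c.lock.Lock()         // pthread mutex: locked on this OS thread
defer c.lock.Unlock() // must be unlocked by the same thread

waitForHealthy(c) // called inline, not via `go waitForHealthy(c)`
```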
Closes: https://github.com/containers/podman/issues/22651
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The v5 API made a breaking change for podman inspect, which means an
old client could no longer parse the result from a new 5.x server.
The other way around, new client against old server, already worked.
As it turned out, several users ran into this; one way to hit it is
using an old 4.x podman machine which now pulls a newer CoreOS with
Podman 5.0. But there are also other users running into it.
To keep the API working, we now have a version check and return the
old v4-compatible payload, so the old remote client can still work
against a newer server, thus removing any major breaking change for
old clients.
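A hedged sketch of the compat branch; the helpers here are
illustrative, not the actual version-negotiation utilities in the API
server:

```go
if !versionAtLeast(clientAPIVersion, 5, 0) {
	// 4.x remote client: serve the old v4-shaped inspect payload.
	writeJSON(w, convertToV4Inspect(inspectData))
	return
}
writeJSON(w, inspectData)
```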
Fixes #22657
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This reverts commit 909ab59419.
The workaround was added almost 5 years ago to work around an issue
with old conmon releases. It is safe to assume such ancient conmon
releases are no longer in use.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The scenario for inducing this is as follows:
1. Start a container with a long stop timeout and a PID1 that
ignores SIGTERM
2. Use `podman stop` to stop that container
3. Simultaneously, in another terminal, kill -9 `pidof podman`
(the container is now in ContainerStateStopping)
4. Now kill that container's Conmon with SIGKILL.
5. No commands are able to move the container from Stopping to
Stopped now.
The cause is a bug in our exit-file handling logic: Conmon being dead
without an exit file caused no change to the container's state. Add
handling for this case that tries to clean up, including stopping the
container if it still seems to be running.
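In outline (illustrative helpers; the state constants are the real
ones from libpod/define):

```go
if !conmonAlive && !exitFileExists {
	// Conmon died without writing an exit file; recover instead of
	// leaving the container stuck in ContainerStateStopping.
	if runtimeReportsRunning(c) {
		_ = stopViaOCIRuntime(c)
	}
	c.state.State = define.ContainerStateStopped
	return saveState(c)
}
```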
Fixes #19629
Signed-off-by: Matt Heon <mheon@redhat.com>
Systemd dislikes it when we rapidly create and remove a transient
unit. Solution: If we change the name every time, it's different
enough that systemd is satisfied and we stop having errors trying
to restart the healthcheck.
Generate a random 32-bit integer, and add it (formatted as hex)
to the end of the unit name to do this. As a result, we now have
to store the unit name in the database, but it does make
backwards compat easy - if the unit name in the DB is empty, we
revert to the old behavior because the timer was created by old
Podman.
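A minimal sketch of the name generation (the exact format in Podman
may differ):

```go
import (
	"fmt"
	"math/rand"
)

// transientTimerName appends a random 32-bit value, hex formatted,
// so rapid re-creation never reuses a unit name systemd still knows.
// The result is stored in the DB; an empty stored name means the
// timer came from an older Podman and we use the old fixed name.
func transientTimerName(ctrID string) string {
	return fmt.Sprintf("%s-%x", ctrID, rand.Uint32())
}
```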
Should resolve RHEL-26105
Signed-off-by: Matt Heon <mheon@redhat.com>
Effectively, this is an ability to take an image already pulled
to the system, and automatically mount it into one or more
containers defined in Kubernetes YAML accepted by `podman play`.
Requirements:
- The image must already exist in storage.
- The image must have at least 1 volume directive.
- The path given by the volume directive will be mounted from the
image into the container. For example, an image with a volume
at `/test/test_dir` will have `/test/test_dir` in the image
mounted to `/test/test_dir` in the container.
- Multiple images can be specified. If multiple images have a
volume at a specific path, the last image specified trumps.
- The images are always mounted read-only.
- Images to mount are defined in the annotation
"io.podman.annotations.kube.image.automount/$ctrname" as a
semicolon-separated list. They are mounted into a single
container in the pod, not the whole pod.
As we're using a nonstandard annotation, this is Podman-only; any
Kubernetes install will just ignore it.
Underneath, this compiles down to an image volume
(`podman run --mount type=image,...`) with subpaths to specify
what bits we want to mount into the container.
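For illustration, reading the annotation for one container might look
like this (the helper shape is illustrative, the annotation key is
the real one):

```go
// The annotation value is a semicolon-separated list of image names.
func automountImages(annotations map[string]string, ctrName string) []string {
	val := annotations["io.podman.annotations.kube.image.automount/"+ctrName]
	if val == "" {
		return nil
	}
	return strings.Split(val, ";")
}
```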
Signed-off-by: Matt Heon <mheon@redhat.com>
Image volumes (the `--mount type=image,...` kind, not the
`podman volume create --driver image ...` kind - it's strange
that we have two) are needed for our automount scheme, but the
request is that we mount only specific subpaths from the image
into the container. To do that, we need image volume subpath
support. Not that difficult code-wise, mostly just plumbing.
Also, add support to the CLI; not strictly necessary, but it
doesn't hurt anything and will make testing easier.
Signed-off-by: Matt Heon <mheon@redhat.com>
Checking whether the file exists before opening it anyway is
pointless: it costs an extra syscall and is racy in theory, as the
file might have been changed between the two calls. We can simply
ignore the ENOENT error from the ReadFile call.
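The resulting pattern is the standard Go idiom:

```go
data, err := os.ReadFile(path)
if errors.Is(err, os.ErrNotExist) {
	return nil // a missing file is fine, nothing to do
}
if err != nil {
	return err
}
// ...use data...
```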
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When the field is set to false we should never log healthcheck events.
Fixes https://issues.redhat.com/browse/RHEL-18987
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We already know the status of the healthcheck in the caller, so
calling healthCheckStatus() just makes the event code sync the
container state and reread the healthcheck file for no reason.
It is much better to pass the status directly down to the event call.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
In cases where we fail to configure the storage, the error is
returned as-is and may be missing useful context. Make sure we know
the error happened as part of the storage setup.
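The fix is the usual wrapping idiom (setupStorage is illustrative):

```go
if err := setupStorage(ctx); err != nil {
	return fmt.Errorf("setting up container storage: %w", err)
}
```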
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This is something Docker does, and we did not do until now. The most
difficult/annoying part was the REST API, where I did not really want
to modify the struct being sent, so I made the new restart-policy
parameters query parameters instead.
Testing was also a bit annoying, because testing restart policy
always is.
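A hedged sketch of the handler side; the parameter names are
illustrative, not necessarily the exact ones exposed by the API:

```go
q := r.URL.Query()
policy := q.Get("restartPolicy")
retries, _ := strconv.Atoi(q.Get("restartRetries")) // 0 when unset
updateRestartPolicy(ctr, policy, retries)           // illustrative
```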
Signed-off-by: Matt Heon <mheon@redhat.com>
The logic here is more complex than I would like, largely due to
the behavior of `podman inspect` for running containers. When a
container is running, `podman inspect` will source as much as
possible from the OCI spec used to run that container, to grab
up-to-date information on things like devices. We don't want to
change this, it's definitely the right behavior, but it does make
updating a running container inconvenient: we have to rewrite the
OCI spec as part of the update to make sure that `podman inspect`
will read the correct resource limits.
Also, make update emit events. Docker does it, we should as well.
Signed-off-by: Matt Heon <mheon@redhat.com>
This includes migrating from cdi.GetRegistry() to cdi.Configure() and
cdi.GetDefaultCache() as applicable.
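Roughly, the shape of the migration (hedged; see the package docs of
tags.cncf.io/container-device-interface/pkg/cdi for the exact API):

```go
// Before:
//   registry := cdi.GetRegistry(cdi.WithSpecDirs(specDirs...))
//   _, err := registry.InjectDevices(ociSpec, devices...)

// After: configure the default cache once, then use it directly.
if err := cdi.Configure(cdi.WithSpecDirs(specDirs...)); err != nil {
	return err
}
if _, err := cdi.GetDefaultCache().InjectDevices(ociSpec, devices...); err != nil {
	return err
}
```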
Signed-off-by: Evan Lezar <elezar@nvidia.com>
Podman needs to be able to detect when a system reboot occurs to
perform certain cleanup operations (for example, resetting container
states and cleaning up IPAM allocations). Our current method for this
is a sentinel file on a tmpfs filesystem. The problem is that there
is no directory in the FHS that is guaranteed to be a tmpfs and also
guaranteed to be accessible to rootless users. If the user has a
systemd user session, we can depend on /run/user/$UID, but we can't
reliably say that they do.
This code will detect the no-tmpfs-but-reboot-occurred case by
writing the current system boot ID to our tmpfs sentinel file
when it is created, and checking that file every time Podman
starts to make sure that the current boot ID matches the cached
one in the sentinel file. If they don't match, a reboot occurred
and the sentinel file was not on a tmpfs and thus survived. In
that case, throw an error telling the user to remove certain
directories (the ones that are supposed to be tmpfs), so we can
proceed as expected.
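In outline (the Linux boot ID source is real; the sentinel path and
error text are illustrative):

```go
bootID, err := os.ReadFile("/proc/sys/kernel/random/boot_id")
if err != nil {
	return err
}
cached, err := os.ReadFile(sentinelPath)
if errors.Is(err, os.ErrNotExist) {
	// Fresh sentinel (tmpfs was wiped or first run): cache the ID.
	return os.WriteFile(sentinelPath, bootID, 0o644)
}
if err != nil {
	return err
}
if !bytes.Equal(bootID, cached) {
	// Reboot happened but the sentinel survived: not on a tmpfs.
	return errors.New("reboot detected but temporary state persists; remove the stale runtime directories and retry")
}
```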
Signed-off-by: Matt Heon <mheon@redhat.com>
If the 'U' option is provided, do not chown the destination target to
the owner of the existing target in the image.
Closes: https://github.com/containers/podman/issues/22224
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If the volume is mounted with "idmap", no mapping using the user
namespace mappings should be applied, since the mapping is done at
runtime through the "idmap" kernel feature.
Closes: https://github.com/containers/podman/issues/22228
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Useful to tell whether containers are being made with pasta or
slirp4netns by default. Info is bloated enough already that I
don't really have concerns about shoving more into it.
Fixes #22172
Signed-off-by: Matt Heon <mheon@redhat.com>