follow-up to 6886e80b45
when "podman -rm -f" is used on a container in "stopping" state, also
make sure it is terminated before removing it from the local storage.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
check that the container has a valid PID before attempting to use
kill($PID, 0) on it. A PID of 0 means the container is already
stopped.
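A minimal sketch of that check, assuming golang.org/x/sys/unix for the
signal-0 probe (function and field names are illustrative, not the actual
libpod code):

    package sketch

    import "golang.org/x/sys/unix"

    // processAlive reports whether a container process can still receive
    // signals. A stored PID of 0 means the container already stopped, so
    // we must not probe it: kill(0, 0) would target our own process group.
    func processAlive(pid int) bool {
        if pid <= 0 {
            return false
        }
        // Signal 0 performs error checking only; ESRCH means no such process.
        return unix.Kill(pid, 0) == nil
    }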
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Do not allow removing the service container unless all associated
pods have been removed. Previously, the service container could be
removed once all pods had exited, which could lead to a number of
issues. Now, the service container is treated like an infra container
and can only be removed along with the pods.
Also make sure that a pod is unlinked from the service container when
it is removed.
Fixes: #16964
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
if the container has no pid namespace, its exec sessions are not
killed when the container process ends. In this case, attempt to kill
them in the same way.
The problem was noticed with toolbox where the exec'ed sessions are
not terminated when the container is stopped, blocking the system
shutdown.
[NO NEW TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
in several top-level API functions. These defers are the first line of
the function that contains them, which makes sense; we want to
capture any error returned by the function. However, making this
the first defer means that it is the last thing to run after the
function returns - meaning that the container's
`defer c.lock.Unlock()` has already fired, leaving a chance that we
modify the container without holding its lock.
We could move the function around so it's no longer the first
defer, but then we'd have to call it twice (immediately after
`defer c.lock.Unlock()` if the container is not batched, and a
second time in a new `else` block right after the lock/sync call
to make sure we handle batched containers). Seems simpler to just
leave it like this.
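To illustrate the ordering issue described above, here is a sketch of Go's
defer semantics with a mutex; this is not the actual libpod code:

    package sketch

    import (
        "fmt"
        "sync"
    )

    // container is a stand-in for the real Container type.
    type container struct {
        lock  sync.Mutex
        state string
    }

    func (c *container) update() (retErr error) {
        // First defer in the function body: defers run in LIFO order, so
        // this is the LAST thing to run on return, i.e. after the unlock
        // below has already fired.
        defer func() {
            if retErr != nil {
                // Touching c.state here happens without the lock held.
                fmt.Println("error defer runs after unlock:", retErr)
            }
        }()

        c.lock.Lock()
        defer c.lock.Unlock() // later defer: runs FIRST on return

        c.state = "configured"
        return nil
    }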
[NO NEW TESTS NEEDED] Can't really test for DB corruption easily.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When you use podman logs with --until and --follow, it should exit
after the requested until time and not keep hanging forever.
This fixes the behavior for the k8s-file backend.
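A sketch of the intended follow behaviour (a hypothetical helper; the real
k8s-file reader is more involved): stop waiting for new lines once the
--until deadline passes.

    package sketch

    import "time"

    // followUntil emits log lines until the channel is closed or the
    // requested until time has passed, instead of hanging forever.
    func followUntil(lines <-chan string, until time.Time, emit func(string)) {
        var deadline <-chan time.Time
        if !until.IsZero() {
            t := time.NewTimer(time.Until(until))
            defer t.Stop()
            deadline = t.C
        }
        for {
            select {
            case <-deadline: // nil channel when no --until, blocks forever
                return
            case line, ok := <-lines:
                if !ok {
                    return
                }
                emit(line)
            }
        }
    }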
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When you use podman logs with --until and --follow, it should exit
after the requested until time and not keep hanging forever.
To make this work I reworked the code to use the better journald event
reading code for logs as well. This correctly uses the sd_journal API
without having to compare the cursors to find the EOF.
The same problem exists for the k8s-file driver; I will fix this in the
next commit.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Instead of reading the full journal, which can be expensive, we can
seek based on the time.
If you have a journal with many podman events, just compare the runtime
of `time podman events --since 1s --stream=false` with and without this
patch.
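Roughly the idea, sketched with github.com/coreos/go-systemd/v22/sdjournal
(the exact call sites in podman differ):

    package sketch

    import (
        "time"

        "github.com/coreos/go-systemd/v22/sdjournal"
    )

    // seekSince positions the journal cursor at the --since time rather
    // than iterating over every entry from the start of the journal.
    func seekSince(j *sdjournal.Journal, since time.Time) error {
        // The journal API works with CLOCK_REALTIME microseconds.
        return j.SeekRealtimeUsec(uint64(since.UnixMicro()))
    }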
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The `containerCouldBeLogging` bool should not be false by default: when
--since is used we seek in the journal and can miss the start event, so
that bool would stay false forever. This means that a running container
is not followed even when it should be.
To fix this we can just set the `containerCouldBeLogging` bool based on
the current container state.
Fixes #16950
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
do not allow removing containers that are in the stopping state,
otherwise it can lead to a race condition where a "podman rm" removes
the container from the storage while another process is stopping the
same container.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2155828
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This change aims to store an error message in the ContainerState struct
with the last known error from the Start, StartAndAttach, and Stop OCI
Runtime functions.
The goal was to act in accordance with Docker's behavior.
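A minimal sketch of the idea; the field and type names are illustrative,
not the actual ContainerState definition:

    package sketch

    // containerState keeps the last known OCI runtime error so it can be
    // reported later, similar to Docker's State.Error field.
    type containerState struct {
        Error string
    }

    // recordError stores the message of a failed Start/StartAndAttach/Stop
    // call so it survives past the failing API call.
    func (s *containerState) recordError(err error) {
        if err != nil {
            s.Error = err.Error()
        }
    }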
Fixes: #13729
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
This means we also store things like config.json and the secret files
on tmpfs, lowering wear on the disk and leaving less stuff behind on
disk after an unclean shutdown.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This allows us to use STDOUT directly without having to call open
again. It also makes the export API endpoint much more performant
since it no longer needs to copy to a temp file.
I noticed that there was no export API test so I added one.
And lastly, opening /dev/stdout will not work on Windows.
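A small illustration of the difference (sketch only): callers simply pass
the already-open os.Stdout instead of the result of an extra open().

    package sketch

    import (
        "io"
        "os"
    )

    // exportTo streams an export archive to an already-open file. Passing
    // os.Stdout directly avoids opening "/dev/stdout", which does not
    // exist on Windows, and avoids buffering the archive in a temp file.
    func exportTo(archive io.Reader, out *os.File) error {
        _, err := io.Copy(out, archive)
        return err
    }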
Fixes #16870
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
always create a user namespace when running with euid != 0 since the
user does not own the current mount namespace.
This issue happened on a Kubernetes cluster, where the pod was running
privileged but the UID was not 0, as it was configured in the image
itself.
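Sketched as a condition (not the actual libpod logic):

    package sketch

    import "os"

    // needsUserNamespace reports whether a user namespace must be created:
    // with a non-zero effective UID we do not own the current mount
    // namespace, even if the process is otherwise privileged.
    func needsUserNamespace() bool {
        return os.Geteuid() != 0
    }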
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This should simplify the db logic. We no longer need an extra db bucket
for the netns; it is still supported in read-only mode for backwards
compat. The old version required us to always open the netns before we
could attach it to the container state struct, which caused problems in
some cases where the netns was no longer valid.
Now we use the netns as a string throughout the code, which allows us to
only open it when needed, reducing possible errors.
[NO NEW TESTS NEEDED] Existing tests should cover it and it is only a
flake so hard to reproduce the error.
Fixes #16140
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We should have done this much earlier; most of the time "CNI networks"
really just means networks, so I changed this and also fixed some
function names. This should make it clearer what actually refers to CNI
and what is just general network backend stuff.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When we read logs there can be full or partial lines; when a line is
full we need to append a newline, thus the message length must be
incremented by one.
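Roughly, with hypothetical names rather than the real log-line type:

    package sketch

    // renderedLength returns the number of bytes a log line occupies once
    // written: full lines carry a trailing newline, so their length is one
    // byte larger than the raw message; partial lines are written as-is.
    func renderedLength(msg string, partial bool) int {
        if partial {
            return len(msg)
        }
        return len(msg) + 1
    }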
Fixes #16856
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Also update vendor of containers/storage and image
Clean up the display of added/dropped capabilities as well
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Also fix a number of duplicate words, but disable the new `dupword`
linter as it reports too many false positives.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Now that the OCI runtime specs have support for idmapped mounts, let's
use them instead of relying on the custom annotation in crun.
Also add the mechanism to specify the mapping to use. Pick the same
format used by crun so it won't be a breaking change for users that
are already using it.
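A sketch of what such a mount could look like, assuming the runtime-spec
Go bindings that expose UIDMappings/GIDMappings on Mount; the mapping
values are only an example:

    package sketch

    import (
        spec "github.com/opencontainers/runtime-spec/specs-go"
    )

    // idmappedBindMount builds a bind mount that carries its own UID/GID
    // mapping in the OCI spec instead of a crun-specific annotation. The
    // example maps container IDs 0..65535 to host IDs starting at 100000.
    func idmappedBindMount(src, dst string) spec.Mount {
        mapping := []spec.LinuxIDMapping{{ContainerID: 0, HostID: 100000, Size: 65536}}
        return spec.Mount{
            Destination: dst,
            Type:        "bind",
            Source:      src,
            Options:     []string{"bind", "rprivate"},
            UIDMappings: mapping,
            GIDMappings: mapping,
        }
    }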
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Init containers are removed once they exit, but podman
reports an error that the container does not exist when
it was previously removed. Stop reporting missing containers
when removing.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
With the 4.0 network rewrite I introduced a regression in 094e1d70de.
It only covered the case where a checkpoint is restored via --import.
The normal restore path was not covered since the static ip/mac are now
stored in an extra db bucket. This commit fixes that by changing the
config in the db.
Note that there were no tests for --ignore-static-ip/mac, so I added a
big system test which should cover all cases (even the ones that already
work). This is not exactly pretty but I don't have enough time to come
up with something better at the moment.
Fixes #16666
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
subpath allows only a subdirectory of a volume's data to be mounted in
the container.
Add support for subpath on the named volume type, with others to follow.
Resolves #12929
Signed-off-by: Charlie Doern <cbddoern@gmail.com>
When stopping the transient systemd timer/unit which powers running
health checks, make sure to ignore its dependencies. It turns out
that we otherwise run into a timeout when running a container in
a systemd unit and rebooting.
An alternative may be to further tweak some attributes/options when
creating the timer/unit via systemd-run but it seems safe to just ignore
the dependencies and stop.
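Sketched with the go-systemd dbus bindings; assuming the unit is stopped
via the StopUnit D-Bus call, the change amounts to picking the
"ignore-dependencies" job mode:

    package sketch

    import (
        "context"

        systemdDbus "github.com/coreos/go-systemd/v22/dbus"
    )

    // stopIgnoringDeps stops the transient healthcheck timer/unit with the
    // "ignore-dependencies" job mode so the stop job neither waits on nor
    // gets blocked by other units during shutdown or reboot.
    func stopIgnoringDeps(ctx context.Context, conn *systemdDbus.Conn, unit string) error {
        ch := make(chan string, 1)
        if _, err := conn.StopUnitContext(ctx, unit, "ignore-dependencies", ch); err != nil {
            return err
        }
        <-ch // wait for the job result ("done", "failed", ...)
        return nil
    }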
[NO NEW TESTS NEEDED] - we don't yet have means to test reboots.
Fixes: #14531
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When the labels map is too big, the syslog entry may get truncated.
This breaks JSON parsing, making event loading impossible.
[NO NEW TESTS NEEDED]
Signed-off-by: Matej Vasek <mvasek@redhat.com>
The podman healthchecks are implemented using systemd timers; this works
great but it will never work on non-systemd distros. Currently the logic
always assumes systemd is available and will fail with an error, so users
are forced to always run with `--no-healthcheck` to disable healthchecks
that are defined in an image, for example. This is annoying and IMO
unnecessary; we should just default to no healthcheck on these systems.
First, use the systemd build tag to disable healthchecks at build time
if this tag is not set.
Second, make sure systemd is used as init before trying to use
healthchecks. This may not be the case when we are run in a container.
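A sketch of both pieces; the build constraint gates compilation, and
systemd documents that /run/systemd/system only exists when systemd is
the running init (the helper name is hypothetical):

    //go:build systemd

    package sketch

    import "os"

    // useHealthchecks reports whether healthchecks can be wired up at all:
    // the binary must be built with the systemd tag (see the constraint
    // above) and systemd must actually be the running init, which is
    // usually not true inside a container.
    func useHealthchecks() bool {
        fi, err := os.Stat("/run/systemd/system")
        return err == nil && fi.IsDir()
    }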
[NO NEW TESTS NEEDED] We do not have any non-systemd VMs in CI AFAIK.
Fixes #16644
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This just calls GC on the local storage, which will remove any leftover
directories from previous containers that are not in the podman db anymore.
This is useful primarily for transient store mode, but can also help in
the case of an unclean shutdown.
Also adds some e2e tests to ensure prune --external works.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This brings a performance improvement to `podman run` on top of the
other transient_store improvements in containers/storage:
Transient mode without transient bolt_db:
Benchmark 1: bin/podman run --transient-store=true --rm --pull=never --network=host --security-opt seccomp=unconfined fedora true
Time (mean ± σ): 130.6 ms ± 5.8 ms [User: 44.4 ms, System: 25.9 ms]
Range (min … max): 122.6 ms … 143.7 ms 21 runs
Transient mode with transient bolt_db:
Benchmark 1: bin/podman run --transient-store=true --rm --pull=never --network=host --security-opt seccomp=unconfined fedora true
Time (mean ± σ): 100.3 ms ± 5.3 ms [User: 40.5 ms, System: 24.9 ms]
Range (min … max): 93.0 ms … 111.6 ms 29 runs
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This handles the transient store options from the containers/storage
configuration in the runtime/engine.
Changes are:
* Print transient store status in `podman info`
* Print transient store status in runtime debug output
* Add --transient-store argument to override config option
* Propagate config state to conmon cleanup args so the callback podman
gets the same config.
Note: This doesn't really change any behaviour yet (other than the changes
in containers/storage).
Signed-off-by: Alexander Larsson <alexl@redhat.com>
Later changes will need to access it earlier, so move its creation to
just after the creation of StaticDir.
Note: For whatever reason this was created twice before, but we now
only do it once.
Signed-off-by: Alexander Larsson <alexl@redhat.com>