It turns out, after iterating over rows, we need to check for errors. It
also turns out that we did not do that at all.
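For context, the missing check is the standard database/sql pattern shown below; a minimal sketch with a made-up table and query, not the actual Podman schema:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "file:state.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query("SELECT ID FROM ContainerConfig;") // illustrative table/column
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id)
	}
	// The missing piece: iteration can stop early because of an error, and
	// that error is only surfaced by rows.Err() after the loop.
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```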
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add support for `--imagestore` in podman, which allows users to split the filesystem of containers from the image store: if configured, images will be pulled into the image store instead of the graphRoot, while keeping the other parts in the originally configured graphRoot.
This is an implementation of
https://github.com/containers/storage/pull/1549 in podman.
Signed-off-by: Aditya R <arajan@redhat.com>
The code was moved to c/common so use that instead. Also add tests for
the new pasta_options config field. However, there is one outstanding
problem[1]: pasta rejects most options when set more than once. Thus it is
impossible to overwrite most of them on the CLI. If we cannot fix this
in pasta I need to make further changes in c/common to dedup the
options.
[1] https://archives.passt.top/passt-dev/895dae7d-3e61-4ef7-829a-87966ab0bb3a@redhat.com/
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Conmon dups the std streams to /dev/null very early, therefore all
errors it reports go nowhere. When you run podman with debug level we
set --syslog and can see the error in the journal. This should be
the default. We have a lot of weird failures in CI that could be caused
by conmon, and we have access to the journal in the Cirrus tasks, so that
should make debugging much easier.
Conmon still uses the same logging level as podman, so it will not spam
the journal and only logs warnings and errors by default.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Remove code duplication and use the new FilterID function from
c/common. Also remove the duplicated ComputeUntilTimestamp in podman and use
the one from c/common as well.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When waiting for a container, there may be a time window where conmon
has already exited but the container hasn't been fully cleaned up.
In that case, we give the container at most 20 seconds to be fully
cleaned up. We cannot wait forever since conmon may have been killed or
something else went wrong.
After the timeout, we optimistically assume the container to be cleaned
up and its exit code to be present. If no exit code can be found, we
return an error.
Indicate in the error whether the timeout kicked in to help debug
(transient) errors and flakes (e.g., #18860).
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
podman info now prints the network information: binary path,
package version, program version, and DNS information.
Fixes: #18443
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
There is a weird issue (#18856) which causes the version check to fail.
Return the underlying error in these cases so we can see it and debug
it.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There are certain messages logged by OCI runtimes when killing a
container that has already stopped that we really do not care
about when stopping a container. Due to our architecture, there
are inherent races around stopping containers, and so we cannot
guarantee that *we* are the people to kill it - but that doesn't
matter because Podman only cares that the container has stopped,
not who delivered the fatal signal.
Unfortunately, the OCI runtimes don't understand this, and log
various warning messages when the `kill` command is invoked on a
container that was already dead. These cause our tests to fail,
as we now check for clean STDERR when running Podman. To work
around this, capture STDERR for the OCI runtime in a buffer only
for stopping containers, and go through and discard any of the
warnings we identified as spurious.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We use shared-memory pthread mutexes to handle mutual exclusion
in Libpod. It turns out that these have configurable options for
how to handle a recursive lock (i.e., a thread trying to lock a
lock that the same thread had previously locked). The mutex can
either deadlock, or allow the duplicate lock without deadlocking.
Default behavior is, helpfully, unspecified, so if not explicitly
set there is no clear indication of which of these behaviors will
be seen. Unfortunately, today is the first I learned of this, so
our initial implementation did *not* explicitly set our preferred
behavior.
This turns out to be a major problem with a language like Golang,
where multiple goroutines can (and often do) use the same OS
thread. So we can have two goroutines trying to stop the same
container, and if the no-deadlock mutex behavior is in use, both
threads will successfully acquire the lock because the C library,
not knowing about Go's lightweight threads, sees the same PID
trying to lock a mutex twice, and allows it without question.
It appears that, at least on Fedora/RHEL/Debian libc, the default
(unspecified) behavior of the locks is the non-deadlocking
version - so, effectively, our locks have been of questionable
utility within the same Podman process for the last four years.
This is somewhat concerning.
What's even more concerning is that the Golang-native sync.Mutex
that was also in use did nothing to prevent the duplicate locking
(I don't know if I like the implications of this).
Anyways, this resolves the major issue of our locks not working
correctly by explicitly setting the correct pthread mutex
behavior.
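For readers less familiar with the pthread API, here is a minimal cgo sketch of making the relock behavior explicit on a process-shared mutex. It is an illustration only, not Podman's shm lock code, and the choice of PTHREAD_MUTEX_ERRORCHECK (fail with EDEADLK instead of silently succeeding or hanging) is an assumption, not necessarily the type Podman settled on.

```go
package main

/*
#define _GNU_SOURCE
#include <pthread.h>

// Initialize a mutex that lives in shared memory with an explicit type,
// instead of relying on the unspecified default relock behavior.
int init_shared_mutex(pthread_mutex_t *m) {
	pthread_mutexattr_t attr;
	int ret;

	if ((ret = pthread_mutexattr_init(&attr)) != 0)
		return ret;
	// The mutex is shared between processes (it sits in SHM).
	if ((ret = pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED)) != 0)
		return ret;
	// PTHREAD_MUTEX_NORMAL would deadlock on a relock from the same thread;
	// PTHREAD_MUTEX_ERRORCHECK makes the relock fail with EDEADLK instead.
	if ((ret = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK)) != 0)
		return ret;
	return pthread_mutex_init(m, &attr);
}
*/
import "C"

import "fmt"

func main() {
	var m C.pthread_mutex_t
	if rc := C.init_shared_mutex(&m); rc != 0 {
		fmt.Println("mutex init failed:", rc)
	}
}
```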
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
If the first container to get the pod lock is the infra container
it's going to want to remove the entire pod, which will also
remove every other container in the pod. Subsequent containers
will get the pod lock and try to access the pod, only to realize
it no longer exists - and that, actually, the container being
removed also no longer exists.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This was causing some CI flakes. I'm pretty sure that the pods
being removed already isn't a bug, but just the result of another
container in the pod removing it first - so no reason not to
ignore the errors.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This fixes a lint issue, but I'm keeping it in its own commit so
it can be reverted independently if necessary; I don't know what
side effects this may have. I don't *think* there are any
issues, but I'm not sure why it wasn't a pointer in the first
place, so there may have been a reason.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
For filter=id=XXX (containers, pods) and filter=ctr-ids=XXX (pods):
- if XXX is only hex characters, treat it as a PREFIX
- otherwise, treat it as a REGEX
Add tests. Update documentation. And fix an incorrect help message.
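A small Go sketch of that rule; the helper name and the hex check are hypothetical and do not mirror the actual c/common FilterID code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var hexOnly = regexp.MustCompile(`^[0-9a-fA-F]+$`)

// matchIDFilter: a purely hexadecimal value is treated as an ID prefix,
// anything else is compiled as a regular expression. Hypothetical helper.
func matchIDFilter(id, value string) (bool, error) {
	if hexOnly.MatchString(value) {
		return strings.HasPrefix(id, value), nil
	}
	re, err := regexp.Compile(value)
	if err != nil {
		return false, err
	}
	return re.MatchString(id), nil
}

func main() {
	fmt.Println(matchIDFilter("deadbeef1234", "dead"))   // prefix match -> true
	fmt.Println(matchIDFilter("deadbeef1234", "^.*34$")) // regex match -> true
}
```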
Fixes: #18471
Signed-off-by: Ed Santiago <santiago@redhat.com>
To debug a deadlock, we really want to know what lock is actually
locked, so we can figure out what is using that lock. This PR
adds support for this, using trylock to check if every lock on
the system is free or in use. Will really need to be run a few
times in quick succession to verify that it's not a transient
lock and it's actually stuck, but that's not really a big deal.
Signed-off-by: Matt Heon <mheon@redhat.com>
This is a general debug command that identifies any lock
conflicts that could lead to a deadlock. It's only intended for
Libpod developers (while it does tell you if you need to run
`podman system renumber`, you should never have to do that
anyways, and the next commit will include a lot more technical
info in the output that no one except a Libpod dev will want).
Hence, hidden command, and only implemented for the local driver
(recommend just running it by SSHing into a `podman machine` VM
in the unlikely case it's needed by remote Podman).
These conflicts should normally never happen, but having a
command like this is useful for debugging deadlock conditions
when they do occur.
Signed-off-by: Matt Heon <mheon@redhat.com>
This is a nice quality-of-life change that should help to debug
situations where someone runs out of locks (usually when a bunch
of unused volumes accumulate).
Signed-off-by: Matt Heon <mheon@redhat.com>
Being able to easily identify what lock has been allocated to a
given Libpod object is only somewhat useful for debugging lock
issues, but it's trivial to expose and I don't see any harm in
doing so.
Signed-off-by: Matt Heon <mheon@redhat.com>
setupPasta() has logic to handle forwarding of TCP or UDP ports. It has
what looks like logic to give an error if trying to forward ports of any
other protocol. However, there's a straightforward bug here: it
will in fact only give that error if you try to use a protocol called
"default". Other unknown protocols will fall through and result in a
nonsensical pasta command line which will almost certainly cause a cryptic
error later on.
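The likely shape of the bug, shown as a hypothetical reconstruction rather than the literal Podman code: the fallback was spelled as the string case "default" instead of the switch's default branch.

```go
package main

import "fmt"

// Buggy (reconstructed): only a protocol literally named "default" is rejected.
func pastaProtoArgsBuggy(proto, ports string) ([]string, error) {
	var args []string
	switch proto {
	case "tcp":
		args = append(args, "-t", ports)
	case "udp":
		args = append(args, "-u", ports)
	case "default": // BUG: should be the `default:` branch
		return nil, fmt.Errorf("can't forward protocol: %s", proto)
	}
	return args, nil
}

// Fixed: any unknown protocol is rejected.
func pastaProtoArgs(proto, ports string) ([]string, error) {
	var args []string
	switch proto {
	case "tcp":
		args = append(args, "-t", ports)
	case "udp":
		args = append(args, "-u", ports)
	default:
		return nil, fmt.Errorf("can't forward protocol: %s", proto)
	}
	return args, nil
}

func main() {
	fmt.Println(pastaProtoArgsBuggy("sctp", "80-80")) // [] <nil>: silently accepted
	fmt.Println(pastaProtoArgs("sctp", "80-80"))      // error, as intended
}
```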
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We had something like 6 different boolean options (removing a
container turns out to be rather complicated, because there are a
million-odd things that want to do it), and the function
signature was getting unreasonably large. Change to a struct to
clean things up.
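A hedged sketch of the kind of refactor described; the names are hypothetical and do not match the actual libpod signatures:

```go
package main

type container struct{ id string }

// Before: a pile of booleans that is easy to misorder at call sites.
func removeContainerBools(ctr *container, force, removeVolume, removeDeps, ignoreErrs, noLockPod bool) error {
	return nil
}

// After: an options struct keeps call sites readable and easy to extend.
type ctrRemoveOpts struct {
	Force        bool
	RemoveVolume bool
	RemoveDeps   bool
	IgnoreErrs   bool
	NoLockPod    bool
}

func removeContainer(ctr *container, opts ctrRemoveOpts) error {
	return nil
}

func main() {
	c := &container{id: "abc"}
	_ = removeContainerBools(c, true, false, true, false, false) // which bool is which?
	_ = removeContainer(c, ctrRemoveOpts{Force: true, RemoveDeps: true})
}
```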
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The infra container would try to remove the pod, despite the pod
already being in the process of being removed - oops. Add a check
to ensure we don't try and remove the pod when called by the
`podman pod rm` command.
Also, wire up noLockPod - it wasn't previously wired in, which is
concerning, and could be related?
Finally, make a few minor fixes to un-break lint.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This probably should have been in the API since the beginning,
but it's not too late to start now.
The extra information is returned (both via the REST API, and to
the CLI handler for `podman rm`) but is not yet printed - it
feels like adding it to the output could be a breaking change?
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This allows for accurate reporting of dependency removal, but the
work is still incomplete: pods can be removed, but do not report
the containers they removed as part of said removal. Will add
this in a subsequent commit.
Major note: I made ignoring no-such-container errors automatic
once it has been determined that a container did exist in the
first place. I can't think of any case where this would not be a
TOCTOU - i.e., no reason not to ignore them. The `--ignore` option
to `podman rm` should still retain meaning as it will ignore
errors from containers that didn't exist in the first place.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This is the initial stage of implementation. The current API
functions but does not report the additional containers and pods
removed. This is necessary to properly display results to the
user after `podman rm --all`.
The existing remove-dependencies code has been removed in favor
of this more native solution.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The current way of bind mounting the host timezone file has problems.
Because /etc/localtime in the image may exist and be a symlink under
/usr/share/zoneinfo, it will overwrite the target file. That confuses
timezone parsers, especially Java, where this approach does not work at
all. So we end up with a link which does not reflect the actual truth.
The better way is to just change the symlink in the image, like it is
done on the host. However, because not all images ship tzdata, we cannot
rely on that either. So now we do both: when tzdata is installed we
use the symlink, and if not we keep the current way of copying the host
timezone file into the container at /etc/localtime.
Also note that we need to rebuild the systemd image to include tzdata in
order to test this, as our images do not contain tzdata by default.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2149876
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Implement means for reflecting failed containers (i.e., those having
exited non-zero) to better integrate `kube play` with systemd. The
idea is to have the main PID of `kube play` exit non-zero in a
configurable way such that systemd's restart policies can kick in.
When using the default sdnotify-notify policy, the service container
acts as the main PID to further reduce the resource footprint. In that
case, before stopping the service container, Podman will lookup the exit
codes of all non-infra containers. The service will then behave
according to the following three exit-code policies:
- `none`: exit 0 and ignore containers (default)
- `any`: exit non-zero if _any_ container did
- `all`: exit non-zero if _all_ containers did
The above values can be passed via a hidden `kube play
--service-exit-code-propagation` flag which can be used by tests and
later on by Quadlet.
In case Podman acts as the main PID (i.e., when at least one container
runs with an sdnotify-policy other than "ignore"), Podman will continue
to wait for the service container to exit and reflect its exit code.
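As a rough illustration of the three policies, here is a hypothetical helper (not the actual Podman code) mapping the non-infra containers' exit codes to the service exit code:

```go
package main

import "fmt"

// serviceExitCode is a sketch of the policy semantics described above.
func serviceExitCode(policy string, ctrExitCodes []int) int {
	nonZero := 0
	for _, code := range ctrExitCodes {
		if code != 0 {
			nonZero++
		}
	}
	switch policy {
	case "any":
		// exit non-zero if _any_ container exited non-zero
		if nonZero > 0 {
			return 1
		}
	case "all":
		// exit non-zero if _all_ containers exited non-zero
		if len(ctrExitCodes) > 0 && nonZero == len(ctrExitCodes) {
			return 1
		}
	}
	// "none" (default): exit 0 and ignore container exit codes
	return 0
}

func main() {
	codes := []int{0, 1, 2}
	fmt.Println(serviceExitCode("none", codes)) // 0
	fmt.Println(serviceExitCode("any", codes))  // 1
	fmt.Println(serviceExitCode("all", codes))  // 0
}
```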
Note that this commit also fixes a long-standing annoyance of the
service container exiting non-zero. The underlying issue was that the
service container had been stopped with SIGKILL instead of SIGTERM and
hence exited non-zero. Fixing that was a prerequisite for the exit-code
propagation to work but also improves the integration of `kube play`
with systemd and hence Quadlet with systemd.
Jira: issues.redhat.com/browse/RUN-1776
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Make sure to prune container exit codes only when the associated
container does not exist anymore. This is needed when checking if any
container in kube-play exited non-zero and a building block for the
below linked Jira card.
[NO NEW TESTS NEEDED] - there are no unit tests for exit code pruning.
Jira: https://issues.redhat.com/browse/RUN-1776
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Use a helper to handle the cleanupErr logic instead of
copy&pasting it EIGHT times.
Also modifies the returned errors to be wrapped with a context,
and changes the text of the logged errors a bit.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Use a shared helper instead of copy&pasting the handling
of cleanupErr EIGHT times.
This changes the wording of logged error text, and the error
in one case, a bit.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
[NO NEW TESTS NEEDED]
... because testing this would require us to intentionally
create an inconsistent state, which should ideally not be possible...
(and because at this point I don't even know what the reported failure
was.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Make sure to look for the container's exit code when it's in stopped
state. With `--restart=always`, the container seems to stay in the
stopped state, which led the wait logic to loop until the 20-second
timeout for the cleanup process to finish kicked in.
Also defensively make sure to loop when the container is in stopped
state but no exit code has been written yet.
Add a regression test to make sure Podman doesn't wait more than 20
seconds. Even on a CI machine under high load I expect it to take much,
much less than that, so I do not expect this test to flake in the
future.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
b25b330306 introduced this behaviour.
It was fine at the time because we didn't support "container update",
so the limit could not be changed at runtime. Since it is now
possible to change the memory limit at runtime, read the limit as
reported from the cgroup.
https://github.com/containers/crun/pull/1217 is required for crun.
Closes: https://github.com/containers/podman/issues/18621
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
As shown in #17831, WAL mode plays a role in causing `database is locked`
errors. Those are errors that, in theory, should not happen, as the DB should
busy wait. mattn/go-sqlite3/issues/274 has some comments indicating
that the busy handler behaves differently in WAL mode, which may be an
explanation for the error.
For now, let's disable WAL mode and only re-enable it when we have
clearer understanding of what's going on. The upstream issue along with
the SQLite documentation do not give me the clear guidance that I would
need.
[NO NEW TESTS NEEDED] - flake is only reproducible in CI.
Fixes: #18356
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
In rootFsSize(), instead of calculating the size of the diff for every
layer of the container's base image, ask the storage library for the sum
of the values it recorded when it first wrote those layers.
In a similar fashion, teach rwSize() to use the library's
ContainerSize() method instead of trying to roll its own.
Replace calls to pkg/util.SizeOfPath() with calls to
github.com/containers/storage/pkg/directory.Size(), which does the same
thing.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
If the container was already cleaned up, we should not try to do it
again. Podman stop will always try to call Cleanup(); if you look at the
podman event log and just keep calling podman stop --all, you see a
cleanup event every time. This is not wanted. Also, in case of the host
pidns, we report an error every single time, see the linked issue.
Fixes #18460
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The remote API will wait 300s by default before conmon will call the
cleanup. In the meantime when you inspect an exec session started with
ExecStart() (so not attached) and it did exit we do not know that. If
a caller inspects it they think it is still running. To prevent this we
should sync the session based on the exec pid and update the state
accordingly.
For a reproducer see the test in this commit or the issue.
Fixes #18424
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Podman kube generate now uses the pod's restart policy
when generating the kube yaml. If generating from containers
only, use the restart policy of the first non-init container.
Podman kube play applies the pod restart policy from the yaml
file to the pod. The containers within a pod inherit this restart
policy.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Add Restarts column to the podman ps output to show how many times a
container was restarted based on its restart policy. This column will be
displayed when using --format={{.Restarts}}.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Add --restart flag to pod create to allow users to set the
restart policy for the pod, which applies to all the containers
in the pod. This reuses the restart policy already there for
containers and has the same restart policy options.
Add "never" to the restart policy options to match k8s syntax.
It is a synonym for "no" and does the exact same thing where the
containers are not restarted once exited.
Only the containers that have exited will be restarted based on the
restart policy, running containers will not be restarted when an exited
container is restarted in the same pod (same as is done in k8s).
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
fix some issues with the handling of errors: we print an error only
when there is already one set to be returned. Also, the first error is
not printed, since it is reported back to the caller of the function.
Improve some messages with more context that can be helpful when
things go wrong.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
go swagger has a flat namespace so it doesn't handle name conflicts at
all. The libpod info response uses the Info struct from some docker dep
instead. Because we cannot change the docker dependency, simply rename
the Info struct, but only via the swagger comment, not the actual go struct.
I verified locally that this works.
Fixes #18228
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We can now use the new API for creating files and directories without
setting the umask to allow parallel usage of those methods.
This patch also bumps c/common for that.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Ref: https://pkg.go.dev/math/rand@go1.20#Seed
Note: For `runtime_test.go`, this test-case was never actually doing
what appears to be its intent. Fixing it to work as intended would
require incredibly libpod-invasive changes. Do the least-worst thing and
simply confirm that consecutive generated names are different.
Signed-off-by: Chris Evich <cevich@redhat.com>
According to an old upstream issue [1]: "If the first statement after
BEGIN DEFERRED is a SELECT, then a read transaction is started.
Subsequent write statements will upgrade the transaction to a write
transaction if possible, or return SQLITE_BUSY."
So let's move the first SELECT under the same transaction as the table
initialization.
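A minimal database/sql sketch of that idea, using an illustrative schema rather than Podman's actual one: run the table initialization and the first SELECT inside a single transaction so it starts out as a write transaction instead of being upgraded from a read transaction later.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "file:state.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	defer tx.Rollback() // no-op after a successful Commit

	// The first statement is a write, so the transaction is a write
	// transaction from the start rather than an upgraded read transaction
	// (which can fail with SQLITE_BUSY under BEGIN DEFERRED).
	if _, err := tx.Exec(`CREATE TABLE IF NOT EXISTS DBConfig (ID INTEGER PRIMARY KEY, SchemaVersion INTEGER);`); err != nil {
		log.Fatal(err)
	}

	var version int
	err = tx.QueryRow(`SELECT SchemaVersion FROM DBConfig WHERE ID = 1;`).Scan(&version)
	if err != nil && err != sql.ErrNoRows {
		log.Fatal(err)
	}

	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```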
[NO NEW TESTS NEEDED] as it's a hard to cause race.
[1] https://github.com/mattn/go-sqlite3/issues/274#issuecomment-1429054597
Fixes: #17859
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
If the user specifies commit --format, we were not setting it before
the commit; this caused warning messages to be printed that made no
sense.
Fixes: https://github.com/containers/podman/issues/17773
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Commit 1ab833fb73 improved the situation but it is still not enough.
If you run short-lived containers with --restart=always, podman is
basically permanently restarting them. The only way to stop this is
podman stop. However podman stop does not do anything when the
container is already in a not running state. While this makes sense we
should still mark the container as explicitly stopped by the user.
Together with the change in shouldRestart() which now checks for
StoppedByUser this makes sure the cleanup process is not going to start
it back up again.
A simple reproducer is:
```
podman run --restart=always --name test -d alpine true
podman stop test
```
then check if the container is still running; the behavior is very
flaky, it took me like 20 podman stop tries before I finally hit the
correct window where it was stopped permanently.
With this patch it worked on the first try.
Fixes #18259
[NO NEW TESTS NEEDED] This is super flaky and hard to correctly test
in CI. My ginkgo v2 work seems to trigger this in play kube tests so
that should catch at least some regressions. Also this may be something
that should be tested at podman test days by users (#17912).
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
It really doesn't make sense to match the version one to one,
this just requires us to update it every time manually.
Use a regex instead.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Make sure to tear down the netns again on errors. This is needed when a
later call fails and we have not already stored the netns in the
container state.
[NO NEW TESTS NEEDED] My ginkgo-v2 PR will catch problem like this once
merged.
Fixes #18205
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The documentation says
> The new Buffer takes ownership of buf, and the
> caller should not use buf after this call.
so use the more directly applicable, and simpler, bytes.Reader instead, to avoid this potentially risky use.
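A small standalone illustration of the difference (plain standard-library behavior, not Podman code):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	buf := []byte("data we still want to use afterwards")

	// bytes.NewBuffer takes ownership of buf: per its documentation, the
	// caller should not use buf after this call, so any later read of buf
	// is risky.
	b := bytes.NewBuffer(buf)
	_, _ = io.ReadAll(b)

	// bytes.NewReader only reads from buf and never grows or mutates it,
	// which is the simpler and safer choice when all we need is an io.Reader.
	r := bytes.NewReader(buf)
	data, _ := io.ReadAll(r)
	fmt.Println(string(data))
}
```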
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Currently --tmpdir changes the location of the pause.pid file. This
causes issues because the c code in pkg/rootless does not know about
that. I tried to fix this[1] by fixing the c code to not use the
shortcut. While this fix worked, it would result in many pause processes
leaking in the integration tests.
Commit ab88632 added this behavior but, following the discussion, it was
never the intention that we end up having more than one pause process.
The issue it was trying to fix was caused by something else AFAICT;
the main problem seems to be that the pause.pid file's parent directory
may not be created when we try to create the pid file, so it failed with
ENOENT. This patch fixes it by always creating this directory and reverts
the change so that the pid file location no longer depends on the tmpdir value.
With this commit we now always use XDG_RUNTIME_DIR/libpod/tmp/pause.pid
for all podman processes. This allows the c shortcut to work reliably
and should therefore improve performance over my other approach.
A system test is added to ensure we see the right behavior and that
podman system migrate actually stops the pause process. Thanks to Ed
Santiago for the improved test to make it work for both `catatonit` and
`podman pause`.
This should fix the issues with namespace mismatches that we can see in
CI as flakes.
[1] https://github.com/containers/podman/pull/18057
Fixes #18057
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When creating storage for a container using ID maps, read the ID maps
that are assigned to the container from the returned container
structure, rather than from the options structure that we passed to the
storage library, which it previously modified in error.
[NO NEW TESTS NEEDED]
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Kube generate on pods was not checking for any underscores
in the pod name, so it was creating a kube yaml with an invalid
pod name when there were underscores present.
The hostname for the pod is set to the pod name by default. There
is no need to set that to the container's name or the pod name
again in the generated yaml. So that field is removed unless a hostname
was set for the container by the user.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
I made a change in c/common[1] to prevent duplicates in netns names.
This now causes a problem in podman[2] where the rootless netns will no
longer work after the netns got invalid but the underlying path still
exists. AFAICT this happens when the podman pause process got killed and
we are now in a different user namespace.
While I do not know what causes this, this commit should make it at
least possible to recover from this situation automatically as it used
to be before[1].
The problem with that is that containers started before it will not be
able to talk to containers started after this. A restart of the previous
container will fix it but this was also the case before.
[NO NEW TESTS NEEDED]
[1] https://github.com/containers/common/pull/1381
[2] https://github.com/containers/podman/issues/17903#issuecomment-1494169843
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
add a function to securely mount a subpath inside a volume. We cannot
trust that the subpath is safe since it is beneath a volume that could
be controlled by a separate container. To avoid TOCTOU races between
when we check the subpath and when the OCI runtime mounts it, we open
the subpath, validate it, bind mount to a temporary directory and use
it instead of the original path.
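A rough, Linux-only sketch of that idea, heavily simplified and not the actual libpod implementation (the real code also has to validate what the descriptor resolves to): open the subpath once and bind mount it through /proc/self/fd, so a later rename or symlink swap under the volume cannot redirect the mount.

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

// bindSubpathToTemp illustrates the approach: keep the opened file descriptor
// and mount via its /proc/self/fd entry, closing the TOCTOU window between
// validating the subpath and the OCI runtime mounting it.
func bindSubpathToTemp(subpath, tmpTarget string) error {
	fd, err := unix.Open(subpath, unix.O_PATH|unix.O_CLOEXEC, 0)
	if err != nil {
		return err
	}
	defer unix.Close(fd)

	// ... here the real code validates that fd still resolves to a path
	// inside the volume (e.g. by reading /proc/self/fd/<fd>) ...

	src := fmt.Sprintf("/proc/self/fd/%d", fd)
	return unix.Mount(src, tmpTarget, "", unix.MS_BIND, "")
}

func main() {
	// Hypothetical paths, for illustration only.
	if err := bindSubpathToTemp("/path/to/volume/subdir", "/run/example-bind"); err != nil {
		log.Fatal(err)
	}
}
```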
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The podman kube generate command can now generate a
Deployment kind when the --type flag is set to deployment.
By default, a Pod spec will be generated if --type flag is
not set.
Add --replicas flag to kube generate to allow users to set
the value of replicas in the generated yaml when generating a
Deployment kind.
Add e2e and minikube tests for this feature.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
It turns out the restart is _not_ a stop+start but keeps certain
resources open and is subject to some timeouts that may differ across
distributions' default settings.
[NO NEW TESTS NEEDED] as I have absolutely no idea how to reliably cause
the failure/flake/race.
Also ignore ENOENTs of the CID file when removing a container, which has
been identified as what actually fixes #17607.
Fixes: #17607
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When a userns is set we setup the network after the bind mounts, at the
point where resolv.conf is generated we do not yet know the subnet.
Just like the other dns servers for bridge networks we need to add the
ip later in completeNetworkSetup()
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2182052
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
`Ping()` requires the DB lock, so we had to move it into a transaction
to fix #17859. Since we try to access the DB directly afterwards, I
prefer to let that fail instead of paying the cost of a transaction
which would lock the DB for _all_ processes.
[NO NEW TESTS NEEDED] as it's a hard to reproduce race.
Fixes: #17859
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
If a container with an ID starting with "db1" exists, and a
container named "db1" also exists, and they are different
containers - if I run `podman inspect db1` the container named
"db1" should be inspected, and there should not be an error that
multiple containers matched the name or id "db1". This was
already handled by BoltDB, and now is properly managed by SQLite.
Fixes #17905
Signed-off-by: Matt Heon <mheon@redhat.com>
The DB config is a single-row table, and the first Podman process
to run against the database creates it. However, there was a race
where multiple Podman processes, started simultaneously, could
try and write it. Only the first would succeed, with subsequent
processes failing once (and then running correctly once re-ran),
but it was happening often in CI and deserves fixing.
[NO NEW TESTS NEEDED] It's a CI flake fix.
Signed-off-by: Matt Heon <mheon@redhat.com>
SQLite developers consider it a misfeature [1], and after turning it on,
we saw a new set of flakes. Let's turn it off and trust the developers
[1] that WAL mode is sufficient for our purposes.
Turning the shared cache off also makes the DB smaller and faster.
[NO NEW TESTS NEEDED]
[1] https://sqlite.org/forum/forumpost/1f291cdca4
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The symptoms in #17859 indicate that setting the PRAGMAs in individual
EXECs outside of a transaction can lead to concurrency issues and
failures when the DB is locked. Hence set all PRAGMAs when opening
the connection. Move them into individual constants to improve
documentation and readability.
Further make transactions exclusive as #17859 also mentions an error
that the DB is locked during a transaction.
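For illustration, with the mattn/go-sqlite3 driver the PRAGMAs and the transaction locking mode can be passed as DSN parameters so they take effect as the connection is opened; the specific values below are assumptions for the sketch, not Podman's actual settings.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// PRAGMAs expressed as DSN options are applied by the driver when the
	// connection is established, instead of being issued as separate EXECs.
	dsn := "file:/path/to/state.db?" +
		"_txlock=exclusive&" + // transactions use BEGIN EXCLUSIVE
		"_busy_timeout=100000&" + // busy-wait instead of "database is locked"
		"_foreign_keys=1"

	db, err := sql.Open("sqlite3", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var one int
	if err := db.QueryRow("SELECT 1;").Scan(&one); err != nil {
		log.Fatal(err)
	}
}
```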
[NO NEW TESTS NEEDED] - existing tests cover the code.
Fixes: #17859
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
<MH: Cherry-picked on top of my branch>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
I was searching the SQLite docs for a fix, but apparently that
was the wrong place; it's a common enough error with the Go
frontend for SQLite that the fix is prominently listed in the API
docs for go-sqlite3. Setting cache mode to 'shared' and using a
maximum of 1 simultaneous open connection should fix.
Performance implications of this are unclear, but cache=shared
sounds like it will be a benefit, not a curse.
[NO NEW TESTS NEEDED] This fixes a flake with concurrent DB
access.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
As described in #17777, the `restart` on-failure action did not behave
correctly when the health check is being run by a transient systemd
unit. It ran just fine when being executed outside such a unit, for
instance, manually or, as done in the system tests, in a scripted
fashion.
There were two issues causing the `restart` on-failure action to
misbehave:
1) The transient systemd units used the default `KillMode=cgroup` which
will nuke all processes in the specific cgroup including the recently
restarted container/conmon once the main `podman healthcheck run`
process exits.
2) Podman attempted to remove the transient systemd unit and timer
during restart. That is perfectly fine when manually restarting the
container but not when the restart itself is being executed inside
such a transient unit. Ultimately, Podman tried to shoot itself in
the foot.
Fix both issues by moving the restart logic into the cleanup process.
Instead of restarting the container, the `healthcheck run` will just
stop the container and the cleanup process will restart the container
once it has turned unhealthy.
Fixes: #17777
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Transient mode means the DB should not persist, so instead of
using the GraphRoot we should use the RunRoot instead.
Signed-off-by: Matt Heon <mheon@redhat.com>
Two main changes:
- The transient state tests relied on BoltDB paths, change to
make them agnostic
- The volume code in SQLite wasn't retrieving and setting the
volume plugin for volumes that used one.
Signed-off-by: Matt Heon <mheon@redhat.com>
Return more sensible errors than SQLite's embedded constraint
failure ones. Should fix a number of integration tests.
Signed-off-by: Matt Heon <mheon@redhat.com>
When streaming events, prevent returning duplicates after a log rotation
by marking a beginning and an end for rotated events. Before starting to
stream, get a timestamp while holding the event lock. The timestamp
allows for detecting whether a rotation event happened while reading the
log file and to skip all events between the begin and end rotation
event.
In an ideal scenario, we could detect rotated events by enforcing a
chronological order when reading and skip those detected to not be more
recent than the last read event. However, events are not always
_written_ in chronological order. While this can be changed, existing
event files could not be read correctly anymore.
Fixes: #17665
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
On cgroup v1 we need to mount only the systemd named hierarchy as
writeable, so we configure the OCI runtime to mount /sys/fs/cgroup as
read-only and on top of that bind mount /sys/fs/cgroup/systemd.
But when we use a private cgroupns, we cannot do that since we don't
know the final cgroup path.
Also, do not override the mount if there is already one for
/sys/fs/cgroup/systemd.
Closes: https://github.com/containers/podman/issues/17727
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Currently Podman prevents SELinux container separation,
when running within a container. This PR adds a new
--security-opt label=nested
When setting this option, Podman unmasks and mounts
/sys/fs/selinux into the containers making /sys/fs/selinux
fully exposed. Secondly Podman sets the attribute
run.oci.mount_context_type=rootcontext
This attribute tells crun to mount volumes with rootcontext=MOUNTLABEL
as opposed to context=MOUNTLABEL.
With these two settings Podman inside the container is allowed to set
its own SELinux labels on tmpfs file systems mounted into its parent
container, while still being confined by SELinux. Thus you can have
nested SELinux labeling inside of a container.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
On FreeBSD, c.config.Spec.Linux is not populated - in this case, we can
assume that the container is not using a pid namespace.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
Add a hidden flag to set the database backend and plumb it into
podman-info. Further add a system test to make sure the flag and the
info output are working properly.
Note that the test may need to be changed once we have settled on how
to test the sqlite backend in CI.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
There's a unique constraint in the table, so we shouldn't add the same
volume more than once to the same container.
[NO NEW TESTS NEEDED] as it fixes an existing one.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
A partial name match is tricky as we want it to be fast but also make
sure there's only one partial match iff there's no full one.
[NO NEW TESTS NEEDED] as it fixes a system test.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
I wasn't able to find a way to get error-checks working with the sqlite3
library with the time at hand.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add a way to keep play kube running in the foreground and terminate all pods
after receiving a SIGINT or SIGTERM signal. The pods will also be
cleaned up after the containers in them have exited.
If an error occurs during kube play, any resources created up to the
error point will be cleaned up as well.
Add tests for the various scenarios.
Fixes #14522
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
A number of fixes for pod creation and removal.
The important part is that matching partial IDs requires a trailing `%`
for SQL to interpret it as a wildcard. More information at
https://www.sqlitetutorial.net/sqlite-like/
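A tiny illustration of the trailing `%` (the table and column names are made up):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// lookupPodByPartialID shows why the wildcard is needed: without the trailing
// "%", LIKE behaves like an exact comparison and a partial ID never matches.
func lookupPodByPartialID(db *sql.DB, partial string) (string, error) {
	var id string
	err := db.QueryRow("SELECT ID FROM PodConfig WHERE ID LIKE ?;", partial+"%").Scan(&id)
	return id, err
}

func main() {
	db, err := sql.Open("sqlite3", "file:state.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	id, err := lookupPodByPartialID(db, "3f2a")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(id)
}
```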
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Allow to replace existing exit codes. A container may be started and
stopped multiple times etc.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The value of -1 is used when we do not _yet_ know the exit code of the
container. Otherwise, the DB checks would error. There's probably a
smarter way than allowing -1, but for now, that will do the trick and let the
tests progress.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
[NO NEW TESTS NEEDED] - the sqlite backend is still in development and
is not enabled by default.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
- Added a mechanism to check schema version and migrate
(no migrations yet since schema hasn't changed yet).
- Added pod support to AddContainer, and unified AddContainer and
RemoveContainer between containers and pods.
- Fixed newly-added GetPodName and GetCtrName in BoltDB so they
only return pod/container names.
Signed-off-by: Matt Heon <mheon@redhat.com>
This has been broken since we added Volumes - so, Podman v0.12.1
(so, around 5 years). I have no evidence anyone is using it in
the wild. It doesn't really function as expected. And it's a lot
of extraneous code and tests for the database.
Rip it out entirely, we can re-add once BoltDB is gone if there
is a requirement to do so.
Signed-off-by: Matt Heon <mheon@redhat.com>
This contains the implementation of (most) container functions,
with stubs for all pod and volume functions. Presently accessed
via environment variable only for testing purposes.
Signed-off-by: Matt Heon <mheon@redhat.com>
always use the direct mapping when writing the mappings for an
idmapped mount. crun was previously using the reverse mapping, which
is not correct and it is being addressed here:
https://github.com/containers/crun/pull/1147
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Since commit 06241077cc we use the aardvark per container dns
functionality. This means we should only have the aardvark ip in
resolv.conf otherwise the client resolver could skip aardvark, thus
ignoring the special dns option for this container.
Fixes #17499
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When run with --cgroups=split mode (e.g. quadlet) we do not use a
separate cgroup for the container and just run in the unit cgroup.
When we filter logs we thus must match the unit name.
Added a small test to the quadlet test to make sure it will work.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
It makes little sense to create a log line string from the entry just to
parse it again into a LogLine. We have the typed fields so we can
assemble the LogLine directly; this makes things simpler and more
efficient.
Also entries from the passthrough driver do not use the CONTAINER_ID_FULL
field, instead we can just access c.ID() directly.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The passthrough driver is designed for use in systemd units. By default
we can expect systemd to log the output on journald unless the unit sets
different StandardOutput/StandardError settings.
At the moment podman logs just errors out when the passthrough driver is
used. With this change we will read the journald for the unit messages.
The logic is actually very similar to the existing one, we just need to
change the filter. We now filter by SYSTEMD_UNIT which equals the
container cgroup; this allows us to actually filter on a per-container
basis even when multiple containers are started in the same unit, i.e.
via podman-kube@.service.
The only difference a user will see is that journald will merge
stdout/err into one stream, so we lose the separation there.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This was added as hack in commit 6b06e9b77c because the journald logs
code was not able to handle an empty journal. But since commit
767947ab88 this is no longer the case, we correctly use the sd_journal
API and know when the journal is empty.
Therefore we no longer need this hack and it should be removed because
it just adds overhead and an empty journal entry for no good reason.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* Utils must support a higher-level API to create a tar while chrooted into a
directory
* Volume export: use TarwithChroot instead of Tar so we can make sure no
symlink can be exported by tar if it exists outside of the source
directory.
* container export: use chroot and Tar instead of Tar so we can make sure no
symlink can be exported by tar if it exists outside of the mountPoint.
[NO NEW TESTS NEEDED]
[NO TESTS NEEDED]
Race needs combination of external/in-container mechanism which is hard to repro in CI.
Closes: BZ:#2168256
CVE: https://access.redhat.com/security/cve/CVE-2023-0778
Signed-off-by: Aditya R <arajan@redhat.com>
we were previously using an experimental feature in crun, but we lost
this capability once we moved to using the OCI runtime spec to specify
the volume mappings in fdcc2257df.
Add the same feature to libpod, so that we can support relative
positions for the idmaps.
Closes: https://github.com/containers/podman/issues/17517
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When generating a kube yaml with kube generate, do not
set the hostPort in the pod spec if the service flag is
set and we are generating a service kind too.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
* add tests
* add documentation for --shm-size-systemd
* add support for both pod and standalone run
Signed-off-by: danishprakash <danish.prakash@suse.com>
Add a podman ulimit annotation to kube generate and play.
If a container has ulimits set, kube gen
will add those as an annotation to the generated yaml.
If kube play encounters the ulimit annotation, it will set
ulimits for the container being played.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
The tests for generating username/passwd entries assume that
UID/GID 123/456 do not exist, which is not a safe assumption on
Debian. If a /etc/passwd entry with that UID/GID already exists,
the test will not add a new one with the same UID/GID, and will
fail. Change UID and GID to be 6 digits, because we're a lot less
likely to collide with UIDs and GIDs in use on the system that
way. Could also go further and randomly generate the UID/GID, but
that feels like overkill.
Fixes #17366
Signed-off-by: Matt Heon <mheon@redhat.com>
The new version contains the ginkgolinter, which makes sure the
assertions are more helpful.
Also replace the deprecated os.SEEK_END with io.SeekEnd.
There is also a new `musttag` linter which checks if struct that are
un/marshalled all have json tags. This results in many warnings so I
disabled the check for now. We can re-enable it if we think it is worth
it, but for now it is way too much work to fix all reported problems.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Do not save the container separately for changing the state and for removing
running exec sessions. Saving the container is expensive and avoiding
the redundant save makes `container rm` 1.2 times faster on my
workstation.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
In the super rare case that there are two containers with the same ID
for two different users, podman logs with the journald driver would show
logs from both containers.
[NO NEW TESTS NEEDED] Impossible to reproduce.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
I noticed this while running some things in parallel, podman events
would show events from other users. Because all events are written to
the journal everybody can see them. So when we read the journal we must
filter events for only the current UID.
To reproduce run `podman events` as user then in another window create a
container as root for example. After this patch it will correctly ignore
these events from other users.
[NO NEW TESTS NEEDED] I don't think we can test with two users at the same
time.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Loading container states speeds things up when listing all containers, but
it comes with a price tag for many other call paths. Hence, make
loading the state conditional to allow for keeping `podman ps` fast
without other commands regressing in performance.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Do not sync containers with the runtime and the database when listing
containers. It turns out to be extremely expensive and unnecessary.
The sync was needed since listing all containers from the database did
not populate their state. Doing that, however, is much faster since we
already have a connection to the database.
This change makes listing 200 containers 2 times faster than before.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
If the image being used has a user set that is a positive
integer greater than 0, then set the securityContext.runAsNonRoot
to true for the container in the generated kube yaml.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Also do not return (and immediately suppress) an error if no health
check is defined for a given container.
Makes listing 100 containers around 10 percent faster.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
After https://github.com/containers/netavark/pull/452, `netavark` is
in charge of deciding `custom_dns_servers`, if any, so let's honor that;
libpod should not set these manually.
This also ensures docker parity.
Podman populates the container's `/etc/resolv.conf` with custom DNS servers (specified via `--dns` or `dns_server` in containers.conf)
even when the container is connected to a network where `dns_enabled` is `true`.
Current behavior does not match docker's, hence the following commit ensures that podman only populates custom DNS servers when the container is not connected to any network where DNS is enabled; for the cases where `dns_enabled` is `true`,
the resolution for custom DNS servers will happen via `aardvark-dns` or `dnsname`.
Reference: https://docs.docker.com/config/containers/container-networking/#dns-services
Closes: containers#16172
Signed-off-by: Aditya R <arajan@redhat.com>
Aardvark-dns and netavark now accept custom DNS servers for containers
via the new config field `dns_servers`. The new field allows containers to use
custom resolvers instead of the host's default resolvers.
The following commit instruments libpod to pass these custom DNS servers set
via `--dns` or central config to the network stack.
Depends-on:
* Common: containers/common#1189
* Netavark: containers/netavark#452
* Aardvark-dns: containers/aardvark-dns#240
Signed-off-by: Aditya R <arajan@redhat.com>
Kill is a fast syscall, so we can reduce the sleep time from 100ms to
10ms in hope to speed things up a bit.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Commit 067442b570 improved stopping/killing a container by detecting
whether the cleanup process has already fired and changed the state of
the container. Further improve on that by returning early instead of
trying to wait for the PID to finish. At that point we know that the
container has exited but the previous PID may have been recycled
already by the kernel.
[NO NEW TESTS NEEDED] - the absence of the two flaking tests recorded
in #17142 will tell.
Fixes: #17142
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add a comment when SIGKILL is being used. It may help future readers
better comprehend what's going on and why.
[NO NEW TESTS NEEDED] - cannot test a comment :^)
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Move the stopSignal decl into the branch where it's actually used.
[NO NEW TESTS NEEDED] as it's just a small refactor.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The code can be simplified by using a timer directly.
[NO NEW TESTS NEEDED] - should not change behavior.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Reserved annotations are used internally by Podman and would have no
effect when run with Kubernetes, so we should not be generating these
annotations.
Fixes: https://github.com/containers/podman/issues/17105
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The container lock is released before stopping/killing which implies
certain race conditions with, for instance, the cleanup process changing
the container state to stopped, exited or other states.
The (remaining) flakes seen in #16142 and #15367 strongly indicate a
race between stopping/killing a container and the cleanup
process. To fix the flake, make sure to ignore invalid-state errors.
An alternative fix would be to change `KillContainer` to not return such
errors at all, but commit c77691f06f indicates an explicit desire to
have these errors reported in the sig proxy.
[NO NEW TESTS NEEDED] as it's a race already covered by the system
tests.
Fixes: #16142
Fixes: #15367
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Every time I look at a container-removal issue I wonder why the
container isn't locked directly here, so let's add a comment here.
I am not sure whether it would be better if callers took care of
locking, but for now the comment will save the future me and probably
other readers some time.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
This is a cleaner solution and guarantees the variables
will not be used before they are initialized.
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The StoppedByUser variable indicates that the container was
requested to stop by a user. It's used to prevent restart policy
from firing (so that a restart=always container won't restart if
the user does a `podman stop`). The problem is we were setting it
*very* late in the stop() function. Originally, this was fine,
but after the changes to add the new Stopping state, the logic
that triggered restart policy was firing before StoppedByUser was
even set - so the container would still restart.
Setting it earlier shouldn't hurt anything and guarantees that
checks will see that the container was stopped manually.
Fixes #17069
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
While manually playing with --service-container, I encountered a number
of too verbose logs. For instance, there's no need to error-log when
the service-container has already been stopped.
For testing, add a new kube test with a multi-pod YAML which will
implicitly show that #17024 is now working.
Fixes: #17024
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Every podman command is paying the price for this compile even when it
doesn't use the regex; this will speed up the start of podman a little.
[NO NEW TESTS NEEDED] Existing tests should catch issues.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
follow-up to 6886e80b45
when "podman -rm -f" is used on a container in "stopping" state, also
make sure it is terminated before removing it from the local storage.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
check that the container has a valid pid before attempting to use
kill($PID, 0) on it. If the PID==0, it means the container is already
stopped.
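A small Go sketch of that check (not the libpod code): signal 0 only probes whether the process exists, and a zero PID has to be rejected up front because it means the container is already stopped (and kill(0, sig) would target the caller's own process group).

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// processAlive reports whether pid looks like a live process. Sketch only.
func processAlive(pid int) bool {
	if pid == 0 {
		// No valid PID recorded: the container is already stopped.
		return false
	}
	// Signal 0 performs error checking only; no signal is actually sent.
	return unix.Kill(pid, 0) == nil
}

func main() {
	fmt.Println(processAlive(0))             // false
	fmt.Println(processAlive(unix.Getpid())) // true
}
```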
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Do not allow for removing the service container unless all associated
pods have been removed. Previously, the service container could be
removed when all pods have exited which can lead to a number of issues.
Now, the service container is treated like an infra container and can
only be removed along with the pods.
Also make sure that a pod is unlinked from the service container once
it's being removed.
Fixes: #16964
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
if the container has no pid namespace, the processes running inside it are
not killed when the container process ends. In this case, attempt to kill them in the
same way.
The problem was noticed with toolbox where the exec'ed sessions are
not terminated when the container is stopped, blocking the system
shutdown.
[NO NEW TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
in several top-level API functions. These are the first line of
the function that contains them, which makes sense; we want to
capture any error returned by the function. However, making this
the first defer means that it is the last thing to run after the
function returns - meaning that the container's
`defer c.lock.Unlock()` has already fired, leading to a chance we
modify the container without holding its lock.
We could move the function around so it's no longer the first
defer, but then we'd have to call it twice (immediately after
`defer c.lock.Unlock()` if the container is not batched, and a
second time in a new `else` block right after the lock/sync call
to make sure we handle batched containers). Seems simpler to just
leave it like this.
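A standalone illustration of the ordering problem (plain Go semantics, not the libpod code): deferred calls run last-in, first-out, so a defer registered before the unlock-defer runs after the lock has already been released.

```go
package main

import (
	"fmt"
	"sync"
)

var mu sync.Mutex

func doWork() (err error) {
	// Registered FIRST, so it runs LAST - after the unlock below has already
	// fired. Anything it does to shared state happens without the lock held.
	defer func() {
		fmt.Println("error handler runs (lock already released)")
	}()

	mu.Lock()
	defer mu.Unlock() // registered second, runs first on return

	fmt.Println("working with the lock held")
	return nil
}

func main() {
	_ = doWork()
}
```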
[NO NEW TESTS NEEDED] Can't really test for DB corruption easily.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When you use podman logs with --until and --follow it should exit after
the requested until time and not keep hanging forever.
This fixes the behavior for the k8s-file backend.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When you use podman logs with --until and --follow it should exit after
the requested until time and not keep hanging forever.
To make this work I reworked the code to use the better journald event
reading code for logs as well. This correctly uses the sd_journal API
without having to compare the cursors to find the EOF.
The same problems exists for the k8s-file driver, I will fix this in the
next commit.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Instead of reading the full journal which can be expensive we can seek
based on the time.
If you have a journald with many podman events just compare the time
`time podman events --since 1s --stream=false` with and without this
patch.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The `containerCouldBeLogging` bool should not be false by default; when
--since is used we seek in the journal and can miss the start event, so
that bool would stay false forever. This means that a running container
is not followed even when it should.
To fix this we can just set the `containerCouldBeLogging` bool based on
the current container state.
Fixes #16950
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
do not allow removing containers that are in the stopping state,
otherwise it can lead to a race condition where a "podman rm" removes
the container from the storage while another process is stopping the
same container.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2155828
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This change aims to store an error message to the ContainerState struct
with the last known error from the Start, StartAndAttach, and Stop OCI
Runtime functions.
The goal was to act in accordance with Docker's behavior.
Fixes: #13729
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
This means we store things like config.json and the secret files
also on tmpfs, lowering wear on disk and leaving less stuff on disk
on an unclean shutdown.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This allows us to use STDOUT directly without having to call open
again, also this makes the export API endpoint much more performant
since it no longer needs to copy to a temp file.
I noticed that there was no export API test so I added one.
And lastly, opening /dev/stdout will not work on Windows.
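A minimal sketch of the pattern (not the actual export code): accept an io.Writer and hand in os.Stdout directly, instead of opening /dev/stdout by path.

```go
package main

import (
	"io"
	"log"
	"os"
	"strings"
)

// export writes the (here: fake) tarball to any io.Writer, so callers can pass
// os.Stdout, an HTTP response writer, or a file without a temp copy. Sketch only.
func export(w io.Writer) error {
	_, err := io.Copy(w, strings.NewReader("tar bytes would go here\n"))
	return err
}

func main() {
	// Works on Windows too, since no /dev/stdout path is involved.
	if err := export(os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```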
Fixes #16870
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
always create a user namespace when running with euid != 0 since the
user is not owning the current mount namespace.
This issue happened on a Kubernetes cluster, where the pod was running
privileged but the UID was not 0, as it was configured in the image
itself.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This should simplify the db logic. We no longer need an extra db bucket
for the netns; it is still supported in read-only mode for backwards
compat. The old version required us to always open the netns before we
could attach it to the container state struct, which caused problems in
some cases where the netns was no longer valid.
Now we use the netns as a string throughout the code; this allows us to
only open it when needed, reducing possible errors.
[NO NEW TESTS NEEDED] Existing tests should cover it and it is only a
flake so hard to reproduce the error.
Fixes #16140
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We should have done this much earlier; most of the time CNI networks
just mean networks, so I changed this and also fixed some function
names. This should make it more clear what actually refers to CNI and
what is just general network backend stuff.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When we read logs there can be full or partial lines, when it is full we
need to append a newline, thus the message length must be incremented by
one.
Fixes #16856
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Also update vendor of containers/storage and image.
Clean up display of added/dropped capabilities as well.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>