`CRRuntimeSupportsPodCheckpointRestore()` is used to check if the current
container runtime (e.g., runc or crun) can restore a container into an
existing Pod. It does this by parsing the runtime's error output to check
whether the `--lsm-mount-context` option is supported. This option was
recently added to crun [1]; however, crun and runc produce slightly
different output messages:
```
$ crun restore --lsm-mount-context
restore: option '--lsm-mount-context' requires an argument
Try `restore --help' or `restore --usage' for more information.
```
```
$ runc restore --lsm-mount-context
ERRO[0000] flag needs an argument: -lsm-mount-context
```
This patch updates the function to support both runtimes.
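Roughly, the probe amounts to something like the following sketch; the helper
name and matched substrings are illustrative only, not the actual Podman code:
```go
package crutil

import (
	"os/exec"
	"strings"
)

// supportsLSMMountContext is a hypothetical sketch: run
// "<runtime> restore --lsm-mount-context" and treat a "missing argument"
// complaint as proof the flag is recognized; an unknown-flag error means
// the runtime is too old.
func supportsLSMMountContext(runtimePath string) bool {
	out, _ := exec.Command(runtimePath, "restore", "--lsm-mount-context").CombinedOutput()
	// crun: "restore: option '--lsm-mount-context' requires an argument"
	// runc: "flag needs an argument: -lsm-mount-context"
	return strings.Contains(string(out), "requires an argument") ||
		strings.Contains(string(out), "needs an argument")
}
```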
[1] https://github.com/containers/crun/pull/1578
Signed-off-by: Radostin Stoyanov <rstoyanov@fedoraproject.org>
There is no good reason for the special case: kube and pod units
definitely need it. Volume and network units maybe not, but for
consistency we add it there as well. This makes the docs much easier to
write and to understand for users, as the behavior will not differ.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As documented in the issue there is no way to wait for system units from
the user session [1]. This causes problems for rootless quadlet units as
they might be started before the network is fully up. While this was
always the case and thus was never really noticed, the main thing that
triggered a bunch of errors was the switch to pasta.
Pasta requires the network to be fully up in order to correctly select
the right "template" interface based on the routes. If it cannot find a
suitable interface it just fails and we cannot start the container,
understandably leading to a lot of frustration from users.
As there is no sign of any movement on the systemd issue, we work around
it here by using our own user unit that checks whether the system
session's network-online.target is ready.
Now for testing it is a bit complicated. While we do now correctly test
the root and rootless generators since commit ada75c0bb8, the resulting
Wants/After= lines differ between them and there is no logic in the
test files themselves to say whether the root or rootless specifics
should be matched. One idea was to use `assert-key-is-rootless/root`,
but that seemed like more duplication for little reason, so use a regex
that allows both so the check always passes. To still have some test
coverage, add a check in the system test that asks systemd whether we do
indeed have the right dependencies, where we can check for an exact
root/rootless name match.
[1] https://github.com/systemd/systemd/issues/3312
Fixes #22197
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fixes #24267
This commit replaces a potentially unsafe slice-assignment with a call to `slices.Clone`.
This prevents a potential bug where `saveCommand` and `loadCommand` could end up sharing an underlying array if `parentFlags` has a cap greater than its len.
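The aliasing issue is easy to reproduce; a minimal standalone illustration
(variable names mirror the commit, the flag values are made up):
```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	// parentFlags has len 2 but cap 4, so appends can write into the spare capacity.
	parentFlags := make([]string, 0, 4)
	parentFlags = append(parentFlags, "--log-level", "debug")

	// Unsafe: both results share parentFlags' backing array.
	saveCommand := append(parentFlags, "save")
	loadCommand := append(parentFlags, "load") // overwrites "save" in the shared array

	fmt.Println(saveCommand) // [--log-level debug load] -- corrupted

	// Safe: slices.Clone copies the elements into a fresh array first.
	saveCommand = append(slices.Clone(parentFlags), "save")
	loadCommand = append(slices.Clone(parentFlags), "load")
	fmt.Println(saveCommand, loadCommand)
}
```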
Signed-off-by: Zachary Hanham <z.hanham00@gmail.com>
The special handling to return the exit code after the container has
been removed should only be done if there are no special conditions
requested. If a user asked for running or any other state, returning the
exit code immediately with a success response is just wrong. We only
want to allow that so the remote client can fetch the exit code without
races.
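In pseudocode terms the guard is simply the following (a sketch with
hypothetical names, not the real handler or its exit-code store):
```go
package waitsketch

import "fmt"

// exitCodes stands in for Podman's separate exit-code store that outlives
// removed containers.
var exitCodes = map[string]int{}

// waitRemovedContainer only falls back to the stored exit code when the
// caller did not request a specific condition such as "running".
func waitRemovedContainer(ctrID string, conditions []string) (int, error) {
	if len(conditions) > 0 {
		// A removed container can never reach the requested state, so
		// report the real error instead of faking success.
		return -1, fmt.Errorf("no container with ID %s found", ctrID)
	}
	code, ok := exitCodes[ctrID]
	if !ok {
		return -1, fmt.Errorf("no stored exit code for %s", ctrID)
	}
	return code, nil
}
```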
Fixes b3829a2932 ("libpod API: make wait endpoint better against rm races")
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Prior to this commit, many scp functions existed without option structs, which would make extending functionality (adding new options) impossible without breaking changes, or without adding redundant wrapper functions.
This commit adds in new option types for various scp related functions, and changes those functions' signatures to use the new options.
This commit also modifies the `ImageEngine.Scp()` function's interface to use the new opts.
The commit also renames the existing `ImageScpOptions` entity type to `ScpTransferImageOptions`. This is because the previous `ImageScpOptions` was inaccurate, as it is not the actual options for `ImageEngine.Scp()`. `ImageEngine.Scp()` should instead receive `ImageScpOptions`.
This commit should not change any behavior; however, it does break the existing functions' signatures.
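The pattern is the usual options-struct one; a minimal sketch with
hypothetical type and field names (not the actual new types):
```go
package scpsketch

// ScpExecuteOptions illustrates the options-struct pattern: new fields can be
// added later without changing the function signature.
type ScpExecuteOptions struct {
	Quiet       bool
	SSHMode     string
	ParentFlags []string
}

// Execute takes an options struct instead of a growing list of positional
// parameters; callers that want defaults pass the zero value.
func Execute(src, dst string, opts ScpExecuteOptions) error {
	// ... perform the transfer using opts ...
	return nil
}
```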
Signed-off-by: Zachary Hanham <z.hanham00@gmail.com>
mapMutex is initialized in the ContainerRm function and cannot be released from outside,
thus unlock the mutex before returning from the function.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
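Schematically the fixed pattern looks like this (simplified sketch, not the
actual ContainerRm code):
```go
package rmsketch

import "sync"

var mapMutex sync.Mutex

// removeEntry shows the fix: every return path releases the lock,
// since nothing outside this function can unlock it.
func removeEntry(id string, entries map[string]struct{}) {
	mapMutex.Lock()
	if _, ok := entries[id]; !ok {
		mapMutex.Unlock() // previously an early return like this leaked the lock
		return
	}
	delete(entries, id)
	mapMutex.Unlock()
}
```
Using `defer mapMutex.Unlock()` right after `Lock()` would be an equivalent,
harder-to-miss alternative.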
Signed-off-by: Егор Макрушин <emakrushin@astralinux.ru>
Recently I was trying to start podman machine with krunkit and got:
Error: krunkit exited unexpectedly with exit code 1
which isn't very descriptive. Although this doesn't solve the
issue, it increases the debuggability of this error.
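One way to do that (a sketch, not the exact change) is to capture the child's
stderr and attach it to the returned error:
```go
package vmsketch

import (
	"bytes"
	"fmt"
	"os/exec"
)

// startProvider captures the child's stderr so an unexpected exit can report
// more than just the exit code.
func startProvider(binary string, args ...string) error {
	var stderr bytes.Buffer
	cmd := exec.Command(binary, args...)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("%s exited unexpectedly: %w: %s", binary, err, stderr.String())
	}
	return nil
}
```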
Signed-off-by: Eric Curtin <ecurtin@redhat.com>
Quadlet inserts network-online.target Wants/After dependencies to ensure pulling works.
Those systemd statements cannot be subsequently reset.
In the cases where those dependencies are not wanted, we add a new
configuration item called `DefaultDependencies=` in a new section called
[Quadlet]. This section is shared between different unit types.
Fixes #24193
Signed-off-by: Farya L. Maerten <me@ltow.me>
In the common scenario of podman-remote run --rm the API is required to
attach + start + wait to get the exit code. This has the problem that the
wait call races against the container removal from the cleanup process,
so it may not get the exit code back. However, we keep the exit code
around for longer than the container, so we can just look it up in the
endpoint. Of course this only works when we get a full ID as the
parameter, but podman-remote will do that.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Call the wait endpoint right away when a container is started, not
only when attach is done; this allows wait to work when the
container has been removed in the meantime (i.e. podman-remote run --rm).
In that case it was possible that wait failed and we then fell back to
reading events. However, based on some reports there seems to be a
chance that the event reading is not working for them either and returns
a bad error, "Cannot get exit code: <nil>", which does not help anybody.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When we are activated by systemd the code assumed that we had a valid
URL, which was not the case, so it failed to parse the URL, which caused
the info call to fail all the time.
This fixes two problems: first, add the scheme to the systemd-activated
listener URL so it can be parsed correctly; second, simply do not parse
it as a URL at all, as all we care about in the info call is whether it
is unix and the file path exists.
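The simplified check amounts to something like this (sketch with made-up
names, not the actual info endpoint code):
```go
package infosketch

import (
	"os"
	"strings"
)

// socketExists only cares whether the URI refers to a unix socket path that
// exists; no full URL parsing is needed.
func socketExists(uri string) bool {
	path, ok := strings.CutPrefix(uri, "unix://")
	if !ok {
		return false
	}
	_, err := os.Stat(path)
	return err == nil
}
```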
Fixes #24152
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Similar to github.com/containers/buildah/pull/5761 but not
security critical as Podman does not have an expectation that
mounts are scoped (the ability to write a --mount option is
already the ability to mount arbitrary content into the container
so sneaking arbitrary options into the mount doesn't have
security implications). Still, bad practice to let users inject
anything into the mount command line so let's not do that.
Signed-off-by: Matt Heon <mheon@redhat.com>
This commit was automatically cherry-picked
by buildah-vendor-treadmill v0.3
from the buildah vendor treadmill PR, #13808
* Fix conflict caused by Ed's local-registry PR in buildah
* Wire in "new" --retry and --retry-delay, these existed for longer
but where non functional.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
A field we missed versus Docker. Matches the format of our
existing Ports list in the NetworkConfig, but only includes
exposed ports (and maps these to struct{}, as they never go to
real ports on the host).
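Roughly the shape of the new field (illustrative only; the actual type and
field names in Podman's inspect structs may differ):
```go
package inspectsketch

// NetworkSettings sketches the inspect output shape.
type NetworkSettings struct {
	// Ports maps "port/protocol" to its host bindings, as before.
	Ports map[string][]HostPortBinding `json:"Ports"`
	// ExposedPorts lists exposed-but-unpublished ports; the empty struct value
	// mirrors Docker, since no host port is ever associated with them.
	ExposedPorts map[string]struct{} `json:"ExposedPorts"`
}

type HostPortBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}
```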
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
There is no reason to validate the args here. First, podman may change
the syntax, so this is just duplication that may hurt us long term. It
also added special handling of some options that just does not make
sense, e.g. removing 0.0.0.0; podman should really be the only parser
here. And more importantly, this validation prevents variables from
being used.
Fixes #24081
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Previously, we didn't bother including exposed ports in the
container config when creating a container with --net=host. Per
Docker this isn't really correct; host-net containers are still
considered to have exposed ports, even though that specific
container can be guaranteed to never use them.
We could just fix this for host containers, but we might as well
make it generic. This patch unconditionally adds exposed ports to
the container config - it was previously conditional on a network
namespace being configured. The behavior of `podman inspect` with
exposed ports when using `--net=container:` has also been
corrected. Previously, we used exposed ports from the container
sharing its network namespace, which was not correct. Now, we use
regular port bindings from the namespace container, but exposed
ports from our own container.
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
As shown in #23671, these functions can return the raw error without any
useful context to the user, which makes it hard to understand where
things went wrong. Simply add some context to some error paths here.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When we check for a storage container mount we normally expect
ErrContainerUnknown when it does not exist. However, while checking
whether it is actually mounted we can also get ErrLayerUnknown when the
container was removed between the Container and Mount checks, as they do
not happen under the same lock.
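So the check needs to accept either error; schematically (the helper name is
made up, the error values are the containers/storage ones):
```go
package storagesketch

import (
	"errors"

	"github.com/containers/storage"
)

// isStorageCtrGone reports whether an error from the mount check just means
// the storage container disappeared between the two unlocked lookups.
func isStorageCtrGone(err error) bool {
	// ErrLayerUnknown can show up when the container is removed between the
	// Container and Mount calls, so treat it the same as ErrContainerUnknown.
	return errors.Is(err, storage.ErrContainerUnknown) ||
		errors.Is(err, storage.ErrLayerUnknown)
}
```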
Fixes #23671
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Currently podman run -d can exit 0 if we send SIGTERM during startup
even though the container was never started. That just doesn't make any
sense and is horribly confusing for an external job manager like systemd.
The original motivation was to exit 0 for the podman.service in commit
ca7376bb11. That does make sense, but it should only do so for the
service and only if the server did indeed shut down gracefully.
So we rework how the exit logic works: do not let the handler perform
the exit. Instead the shutdown package does the exit after all handlers
have run, which solves the ordering issue. Then we default to exit code
1 like we did before and allow the service exit handler to overwrite the
exit code with 0 in case of a graceful shutdown.
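Schematically the new flow looks like this (simplified sketch, not the actual
shutdown package API):
```go
package shutdownsketch

import "os"

var (
	handlers []func()
	exitCode = 1 // default: a signal during startup is a failure
)

// SetExitCode lets a handler (e.g. the service's graceful-shutdown handler)
// override the default before the process exits.
func SetExitCode(code int) { exitCode = code }

// runShutdown runs every registered handler and only then exits; handlers no
// longer call os.Exit themselves, so ordering is no longer an issue.
func runShutdown() {
	for _, h := range handlers {
		h()
	}
	os.Exit(exitCode)
}
```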
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Modifies the "Remove machine" test to verify the system connections are
handled properly on removal.
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
These flags can affect the output of the HealthCheck log. Currently, when a container is configured with HealthCheck, the output from the HealthCheck command is only logged to the container status file, which is accessible via `podman inspect`.
It is also limited to the last five executions and the first 500 characters per execution.
This makes debugging past problems very difficult, since the only information available about the failure of the HealthCheck command is the generic `healthcheck service failed` record.
- The `--health-log-destination` flag sets the destination of the HealthCheck log.
- `none`: (default behavior) `HealthCheckResults` are stored in overlay containers. (For example: `$runroot/healthcheck.log`)
- `directory`: creates a log file named `<container-ID>-healthcheck.log` with JSON `HealthCheckResults` in the specified directory.
- `events_logger`: The log will be written using the logging mechanism set by `events_logger`. It also saves the log to a default directory, for performance on systems with a large number of logs.
- The `--health-max-log-count` flag sets the maximum number of attempts in the HealthCheck log file.
- A value of `0` indicates an infinite number of attempts in the log file.
- The default value is `5` attempts in the log file.
- The `--health-max-log-size` flag sets the maximum length of the log stored.
- A value of `0` indicates an infinite log length.
- The default value is `500` log characters.
Add --health-max-log-count flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-max-log-size flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-log-destination flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Modify `RemoveConnections` to verify the new default system connection's
rootful state matches the rootful-ness of the podman machine it is associated
with.
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
Takes the code inside the closure in the function `RemoveConnections`
and makes it a separate function to increase readability.
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
Moves the `DefaultMachineName` constant out of `pkg/machine` and into
`pkg/machine/define`.
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
There is no reason to disallow exposed sctp ports at all. As root we can
publish them fine, and as rootless it should error later anyway.
And for the case mentioned in the issue it doesn't make sense, as the
port is not even published; thus it is just part of the metadata, which
is totally fine in all cases.
Fixes #23911
Signed-off-by: Paul Holzinger <pholzing@redhat.com>