This primarily served to protect us against shutting down the
Libpod runtime while operations (like creating a container) were
happening. However, it was very inconsistently implemented (a lot
of our longer-lived functions, like pulling images, just didn't
implement it at all...) and I'm not sure how much we really care
about this very-specific error case?
Removing it also removes a lot of potential deadlocks, which is
nice.
[NO NEW TESTS NEEDED]
Signed-off-by: Matthew Heon <mheon@redhat.com>
When removing a container created with --volumes-from pointing at a container
created with a built-in volume, we complain if the original container
still exists. Since this is an expected state, we should not complain
about it.
Fixes: https://github.com/containers/podman/issues/12808
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
podman container clone takes the ID of an existing container and creates a specgen from the given container's config,
recreating all proper namespaces and overriding spec options like resource limits and the container name if given in the CLI options.
This command utilizes the common function DefineCreateFlags, meaning that we can funnel as many create options as we want
into clone over time, allowing the user to clone with as much or as little of the original config as they want.
container clone takes a second argument which is a new name and a third argument which is an image name to use instead of the original container's.
The currently supported flags are (an example invocation follows the list):
--destroy (remove the original container)
--name (new ctr name)
--cpus (sets cpu period and quota)
--cpuset-cpus
--cpu-period
--cpu-rt-period
--cpu-rt-runtime
--cpu-shares
--cpuset-mems
--memory
--run
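A minimal example invocation, using an illustrative container ID, names, and values:

```console
$ podman container clone --name myclone --cpus 2 6b2e4a1c9d0f
$ podman container clone --destroy --run 6b2e4a1c9d0f cloned-ctr alpine
```

The second form uses the positional new-name and image arguments and removes the original container.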
resolves#10875
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
The CONTAINERS_CONF environment variable can be used to override the
configuration file, which is useful for testing. However, at the moment
this variable is not propagated to conmon. That means, in particular, that
conmon can't propagate it back to podman when invoking its --exit-command.
The mismatch in configuration between the starting and cleaning up podman
instances can cause a variety of errors.
This patch also adds two related test cases. One checks explicitly that
the correct CONTAINERS_CONF value appears in conmon's environment. The
other checks for a possible specific impact of this bug: if we use a
nonstandard name for the runtime (even if its path is just a regular crun),
then the podman container cleanup invoked at container exit will fail.
As a result, a container started with -d --rm won't
be correctly removed once it completes.
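A sketch of the failure scenario this guards against, assuming an illustrative containers.conf that names the runtime something nonstandard while pointing at a regular crun binary:

```console
$ cat /tmp/test-containers.conf
[engine]
runtime = "testruntime"

[engine.runtimes]
testruntime = ["/usr/bin/crun"]

$ CONTAINERS_CONF=/tmp/test-containers.conf podman run -d --rm alpine true
```

Without propagating CONTAINERS_CONF to conmon, the cleanup podman invoked via --exit-command would not know about "testruntime", and the -d --rm container would not be removed.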
Fixes#12917
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Checkpoint/restore pod tests are not run with an older runc, and now
that runc 1.1.0 appears in the repositories it was detected that the
tests were failing. This was not detected in CI because CI was not yet
using runc 1.1.0.
Signed-off-by: Adrian Reber <areber@redhat.com>
The `podman network connect` and `podman network disconnect`
commands give containers access to different networks than the
ones they were created with; these networks can also have DNS
servers associated with them. Until now, however, we did not
modify resolv.conf as network membership changed.
With this PR, `podman network connect` will add any new
nameservers supported by the new network to the container's
/etc/resolv.conf, and `podman network disconnect` command will do
the opposite, removing the network's nameservers from
`/etc/resolv.conf`.
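For example (network and container names are illustrative), assuming a DNS-enabled network:

```console
$ podman network create newnet
$ podman network connect newnet myctr      # newnet's nameserver is added to the container's /etc/resolv.conf
$ podman network disconnect newnet myctr   # and removed again
```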
Fixes#9603
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When running podman inside systemd user units, it is possible that
systemd kills the rootless netns slirp4netns process because it was
started in the default unit cgroup. When the unit is stopped all
processes in that cgroup are killed. Since the slirp4netns process is
run once for all containers it should not be killed. To make sure
systemd will not kill the process we move it to the user.slice.
Fixes#13153
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
It seems we are ignoring output from the healthcheck session.
Open a valid pipe to the healthcheck session in order to read its output.
Use a common pipe for both `stdout/stderr` since that was the previous
behaviour as well.
Signed-off-by: Aditya R <arajan@redhat.com>
Append the podman dns search domain to the host search domains when we
use the dnsname/aardvark server. Previously it would only use the podman
search domains and discard the host domains.
Fixes#13103
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Separated cgroupNS sharing from setting the pod as the cgroup parent,
and added a new flag --share-parent which sets the pod as the cgroup parent for all
containers entering the pod.
Removed cgroup from the default kernel namespaces since we want the same default behavior as before, which is just the cgroup parent.
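A quick sketch of the new flag (pod name is illustrative); leaving --share-parent at its default keeps the old behavior of using the pod as the cgroup parent:

```console
$ podman pod create --name mypod --share-parent=false
$ podman run --rm --pod mypod alpine cat /proc/self/cgroup
```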
resolves#12765
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
podman system reset should also remove all networks. When we want
users to migrate to the new network stack we recommend running podman
system reset. However this did not remove networks, and if there were
still networks around we would continue to use cni since this was
considered an old system.
There is one exception for the default network. It should not be removed
since this could cause other issues when it no longer exists. The
network backend detection logic ignores the default network so this is
fine.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
these mount flags are already used for the /dev/shm mount on the host,
but they are not set for the bind mount itself.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
by default slirp4netns uses the tap0 device. When slirp4netns is
used, use that device by default instead of eth0.
Closes: https://github.com/containers/podman/issues/11695
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Often users want their overlay volumes to be `non-volatile` in nature,
meaning that the same `upper` dir can be re-used by one or more
containers, but the overall nature of the volume still has to be `overlay`,
so work done is still on an overlay and not on the actual volume.
The following PR adds support for more advanced options, i.e. a custom `workdir`
and `upperdir` for overlay volumes, so that users can re-use the `workdir`
and `upperdir` across new containers as well.
Usage
```console
$ podman run -it -v myvol:/data:O,upperdir=/path/persistent/upper,workdir=/path/persistent/work alpine sh
```
Signed-off-by: Aditya R <arajan@redhat.com>
when running on NFS, a RemoveAll could cause EBUSY because of some
unlinked files that are still kept open and "silly renamed" to
.nfs$ID.
This is only half of the fix, as conmon needs to be fixed too.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2040379
Related: https://github.com/containers/conmon/pull/319
[NO NEW TESTS NEEDED] as it requires NFS as the underlying storage.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We should not check if the network supports dns when we create a
container with network aliases. This could be the case for containers
created by docker-compose for example if the dnsname plugin is not
installed or the user uses a macvlan config where we do not support dns.
Fixes#12972
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
the config.json file for the OCI runtime is never closed, this is a
problem when running on NFS, since it leaves around stale files that
cannot be unlinked.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Waiting on an initialized sync.WaitGroup returns immediately.
Hence, move the goroutine to wait and close *after* reading
the logs.
Fixes: #12904
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Improve our compatibility with Docker by better handling the
state strings that we print in `podman ps`. Docker capitalizes
all states in `ps` (we do not) - fix this in our PS code. Also,
stop normalizing ContainerStateConfigured to the "Created" state,
and instead make it always be Created, with the existing Created
state becoming Initialized.
I didn't rename the actual states because I'm somewhat reticent
to make such a large change a day before we leave for break. It's
somewhat confusing that ContainerStateConfigured now returns
Created, but internally and externally we're still consistent.
[NO NEW TESTS NEEDED] existing tests should catch anything that
broke.
I also consider this a breaking change. I will flag appropriately
on Github.
Fixes RHBZ#2010432 and RHBZ#2032561
Signed-off-by: Matthew Heon <mheon@redhat.com>
This change updates the CDI API to commit 46367ec063fda9da931d050b308ccd768e824364
which addresses some inconsistencies in the previous implementation.
Signed-off-by: Evan Lezar <elezar@nvidia.com>
Support removing the entire pod when --depend is used on an infra
container. --all now implies --depend to properly support removing all
containers and not error out when hitting infra containers.
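For example (container ID is illustrative):

```console
$ podman rm --depend 3f6c0a1b2c3d   # removing an infra container now removes the whole pod
$ podman rm --all                   # --all implies --depend, so infra containers no longer cause errors
```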
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
move the check to after the cgroup manager is set, so as to correctly detect
--cgroup-manager=cgroupfs and not raise a warning about dbus not
being present.
Closes: https://github.com/containers/podman/issues/12802
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The libpod/network packages were moved to c/common so that buildah can
use them as well. To prevent duplication, use them in podman as well and
remove them from here.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This directory needs to be world searchable so users can access it from
different user namespaces.
Fixes: https://github.com/containers/podman/issues/12779
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This option causes Podman to not only remove the specified containers
but all of the containers that depend on the specified
containers.
Fixes: https://github.com/containers/podman/issues/10360
Also ran codespell on the code
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
For ip/macvlan networks we cannot use the gateway as the address for this
hostname. In this case the gateway is normally not on the host, so we
just try to use a local ip instead.
[NO NEW TESTS NEEDED] We cannot run macvlan networks in CI.
Fixes#11351
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Remove the hard-coded use of the DefaultInfraImage and rely on
getting this from containers.conf.
Fixes: https://github.com/containers/podman/issues/12771
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
I don't see where these With* functions are used, so I am removing them to
clean up the code.
The WithDefaultInfra* functions screwed me up and confused me.
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
It would be easier to diagnose OCI runtime errors if the error actually
had the name of the OCI runtime that produced the error.
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This commit fixes two bugs and adds regression tests.
When getting healthcheck values from an image, if the image does not
have a timeout defined, this resulted in a 0 value for timeout. The
default as described in the man pages is 30s.
When inspecting a container with a healthcheck command, a customer
observed that the &, <, and > characters were being converted into a
unicode escape value. It turns out json marshalling will by default
coerce string values to utf8.
Fixes: bz2028408
Signed-off-by: Brent Baude <bbaude@redhat.com>
Currently Docker copies up the first volume on a mountpoint with
data.
Fixes: https://github.com/containers/podman/issues/12714
Also added NeedsCopyUP, NeedsChown and MountCount to the podman volume
inspect code.
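A sketch of where the new fields surface (field names as spelled in this commit; grep is used case-insensitively since the exact capitalization in the inspect output may differ, and the volume name is illustrative):

```console
$ podman volume create myvol
$ podman volume inspect myvol | grep -iE 'needscopyup|needschown|mountcount'
```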
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fix handling of "bind" and "tmpfs" volumes to actually work.
Allow bind and tmpfs local volumes to work in rootless mode.
Also removed the string "error" from all error messages that begin with it.
All Podman commands are printed with Error:, so this causes an ugly
stutter.
Fixes: https://github.com/containers/podman/issues/12013
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Added support for pod security options. These are applied to infra and passed down to the
containers as added (unless overridden).
Modified the inheritance process from infra, creating a new function Inherit() which reads the config and marshals the compatible options into an intermediate struct `InfraInherit`.
This is then unmarshaled into a container config, and all of this is added to the CtrCreateOptions. This removes the need (mostly) for special additions which complicate the Container_create
code and pod creation.
resolves#12173
Signed-off-by: cdoern <cdoern@redhat.com>
Prodding bz #2024229 a little more, it turns out the service file is NOT
deleted when it is in a failed state (i.e. the health check has failed
for some reason). The state must be reset before the service is stopped
on container removal, and then the files will be removed properly.
BZ#:2024229
[NO NEW TESTS NEEDED]
Signed-off-by: Brent Baude <bbaude@redhat.com>
Some containers require certain user account(s) to exist within the
container when they are run. This option will allow callers to add a
bunch of passwd entries from the host to the container even if the
entries are not in the local /etc/passwd file on the host.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1935831
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Do not apply reserved annotations from the image to the container.
Reserved annotations are applied during container creation to retrieve
certain information (e.g., custom seccomp profile or autoremoval)
once a container has been created.
Context: #12671
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When Podman is running a container in private IPC mode (default), it
creates a bind mount for /dev/shm that is then attached to a tmpfs
folder on the host file system. However, checkpointing a container has
the side effect of stopping that container and unmounting the tmpfs used
for /dev/shm. As a result, after checkpointing, all files stored in the
container's /dev/shm would be lost and the container might fail to
restore from checkpoint.
To address this problem, this patch creates a tar file with the
content of /dev/shm that is included in the container checkpoint and
used to restore the container.
Signed-off-by: Radostin Stoyanov <rstoyanov@fedoraproject.org>
This ensures that existing containers will still manage
`/etc/passwd` by default, as they have been doing until now. New
containers that explicitly set `false` will still have passwd
management disabled, but otherwise the code will run.
[NO NEW TESTS NEEDED] This will only be caught on upgrade and I
don't really know how to write update tests - and Ed is on PTO.
Signed-off-by: Matthew Heon <mheon@redhat.com>
It has been deprecated and is no longer supported. Fully remove it and
only print a warning if a user uses it.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2011695
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Added support for a new flag --passwd which, when false, prohibits podman from creating entries in
/etc/passwd and /etc/group, allowing users to modify those files in the container entrypoint.
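A minimal sketch (image and UID are illustrative); with --passwd=false Podman will not synthesize an /etc/passwd entry for the otherwise-unknown user:

```console
$ podman run --rm --user 1234 --passwd=false alpine cat /etc/passwd
```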
resolves#11805
Signed-off-by: cdoern <cdoern@redhat.com>
Add the first non-localhost ipv4 of all host interfaces as the destination
for host.containers.internal for rootless containers.
Fixes: https://github.com/containers/podman/issues/12000
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The logic is: if the process env var key exists in the podman defaults or in the image-defined env vars, and the value is equal, skip the env var key.
The typo made it compare to itself -_-
So, here comes the simple fixup.
Signed-off-by: 荒野無燈 <ttys3.rust@gmail.com>
Force removal of images will also remove associated containers.
Historically, infra containers have been excluded resulting in
rather annoying errors, for instance, when running `rmi -af`.
Since there is no reason to exclude infra containers, allow for
removing the entire pod when an infra image is force removed.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
This adds the following information to the output of 'podman inspect':
* CheckpointedAt - time the container was checkpointed
  Only set if the container has been checkpointed
* RestoredAt - time the container was restored
  Only set if the container has been restored
* CheckpointLog - path to the checkpoint log file (CRIU's dump.log)
  Only set if the log file exists (--keep)
* RestoreLog - path to the restore log file (CRIU's restore.log)
  Only set if the log file exists (--keep)
* CheckpointPath - path to the actual (CRIU) checkpoint files
  Only set if the checkpoint files exist (--keep)
* Restored - set to true if the container has been restored
  Only set if the container has been restored
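A sketch of reading the new fields, assuming they appear under the State block of the inspect output (container name is illustrative):

```console
$ podman container checkpoint --keep myctr
$ podman inspect myctr --format '{{.State.CheckpointedAt}} {{.State.CheckpointLog}}'
```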
Signed-off-by: Adrian Reber <areber@redhat.com>
when a container with healthchecks exits due to stopping or failure, we
need the cleanup process to remove both the timer file and the service
file.
Bz#:2024229
Signed-off-by: Brent Baude <bbaude@redhat.com>
It is important that we store the current networks from the db in the
config. Also make sure to properly handle aliases and ignore static ip/mac
addresses.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add the new networks format to specgen. For api users cni_networks is
still supported to make migration easier; however, the static ip and mac
fields are removed.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Network connect now supports setting a static ipv4, ipv6 and mac address
for the container network. The options are added to the cli and api.
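A sketch of the CLI side; the flag names below (--ip, --mac-address) are assumptions based on the matching container-create options, and the addresses and names are illustrative:

```console
$ podman network connect --ip 10.89.0.10 --mac-address 44:33:22:11:00:99 mynet myctr
```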
Fixes#9883
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Make sure we create new containers in the db with the correct structure.
Also remove some unneeded code for alias handling. We no longer need these
functions.
The specgen format has not been changed for now.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The new network db structure stores everything in the networks bucket.
Previously some network settings were not written to the network bucket
and were only stored in the container config.
Instead of the old format, which used the container ID as the value in the
networks bucket, we now use the PerNetworkOptions struct there.
To migrate existing users we use the state.GetNetworks() function. If it
fails to read the new format it will automatically migrate the old
config format to the new one. This allows a flawless migration path.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Issue #11825 suggests that *rootless* Podman can run into situations
where too many inotify fds are open. Indeed, rootless Podman has a
slightly higher usage of inotify watchers than the root counterpart
when using slirp4netns.
Make sure to not only close all watchers but to also remove the files
from being watched. Otherwise, the fds only get closed
when the files are removed.
[NO NEW TESTS NEEDED] since we don't have a way to test it.
Fixes: #11825
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
While trying to match the permissions of the target directory, podman adds
an extra `0111` which should not be needed if the target path does not have
execute permission.
Signed-off-by: Aditya Rajan <arajan@redhat.com>
We need to follow all symlinks in the /etc/resolv.conf path. Currently
we would only check the last file but it is possible that any directory
before that is also a link.
Unfortunately this code is very hard to maintain and not well tested. I
will try to come up with a unit test when I have more time. I think we
could utilize some form of chroot for this. For now we are stuck with
the default setup in the fedora/ubuntu test VMs.
[NO NEW TESTS NEEDED]
Fixes#12461
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
rootlessNetNS.Cleanup() has an issue with how it detects if cleanup
is needed: reading the container state is not good enough because
containers are first stopped and then cleanup will be called. So at one
time two containers could wait for cleanup, but the second one will fail
because the first one already triggered the cleanup, thus making the rootless
netns unavailable for the second container and resulting in a teardown
error. Instead of checking the container state we need to check the
netns state.
Secondly, podman unshare --rootless-netns should not do the cleanup.
This causes more issues than it is worth fixing. Users also might want
to use this to set up the namespace in a special way. If unshare also
cleans this up right away we cannot do this.
[NO NEW TESTS NEEDED]
Fixes#12459
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
If the /proc/$PID/cgroup file doesn't exist, then it is likely the
container was terminated in the meantime, so report ErrCtrStopped, which
is already handled, instead of ENOENT.
commit a66f40b4df introduced the regression.
Closes: https://github.com/containers/podman/issues/12457
[NO NEW TESTS NEEDED] it solves a race in the CI that is difficult to reproduce.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
... at least within a single service.
[NO NEW TESTS NEEDED]
because testing RNGs is problematic. (We _could_
probably inject a mock RNG implementation that always
returns the same value, or something like that.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Add an error return to it and affected callers.
Should not affect behavior, the function can't currently fail.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Use a private RNG with the desired seed, don't interfere
with the other uses.
Introducing the servicePortState type is rather overkill
for the single member, but we'll add another one immediately.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We should not modify the XDG_RUNTIME_DIR env value during the runtime of
libpod; this can cause hard-to-find bugs. Only set it for the OCI
runtime; this matches the other commands such as start, stop, kill...
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
improve the heuristic to detect the scope that was created for the container.
This is necessary with systemd running as PID 1, since it moves itself
to a different sub-cgroup, thus stats would not account for other
processes in the same container.
Closes: https://github.com/containers/podman/issues/12400
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
OCI runtimes may set the memory limits in different ways, e.g., crun
creates a sub-cgroup where the limits are applied, while runc applies
them directly on the created cgroup. Since there is no standardization
on the cgroup path to use, just use the limit specified in the spec
file.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
if the SELinux label could not be restored correctly, leave the OS
thread locked so that it is terminated once it returns to the thread
pool.
[NO NEW TESTS NEEDED] the failure is hard to reproduce
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This should fix the SELinux issue we are seeing with talking to
/run/systemd/private.
Fixes: https://github.com/containers/podman/issues/12362
Also unset the XDG_RUNTIME_DIR if set, since we don't know when running
as a service if this will cause issues.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
failed to send a signal to the container's PID1, but ignored the
results of that update. That's generally bad practice, since even
if we can't directly take action on an error, we should still
make an effort to report it for debugging purposes. I used Infof
instead of something more serious to avoid duplicate reporting to
the user if something has gone seriously wrong.
[NO NEW TESTS NEEDED] this is just adding additional error reporting.
Signed-off-by: Matthew Heon <mheon@redhat.com>
`crun status ctrid` outputs `No such file or directory` when the container
is not there, so podman must ack it.
[NO NEW TESTS NEEDED]
Signed-off-by: Aditya Rajan <arajan@redhat.com>
While trying to kill a container with a `signal` we can't do anything if the
container is already dead, so `exit` gracefully instead of trying to
delete the container again. Get the container status from the runtime.
[NO NEW TESTS NEEDED]
Signed-off-by: Aditya Rajan <arajan@redhat.com>
When generating kube of a container, the podname and container name in
the yaml are identical. This offends rules in podman where pods and
containers cannot have the same name. We now append _pod to the
podname to avoid that collision.
Signed-off-by: Brent Baude <bbaude@redhat.com>
The returned error was not returned by podman; instead a different error
was created. Also make sure to free assigned ips on an error to not leak
them.
Lastly, podman container cleanup uses the default network backend instead
of the provided one, so we need to add `--network-backend` to the exit
command.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Firewalld cannot be used because while it can connect to the dbus api, it
talks to firewalld in the host namespace. This will affect your host
badly and also causes tests to fail.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Create a custom writer which logs the netavark output to logrus. This
will log to the syslog when it is enabled.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There is a problem with creating and storing the exit command when the
container is created. It only contains the options the container was
created with but NOT the options the container is started with. One
example would be a CNI network config. If I start a container once, then
change the cni config dir with `--cni-config-dir` and start it a second
time, it will start successfully. However the exit command still contains
the wrong `--cni-config-dir` because it was not updated.
To fix this we do not want to store the exit command at all. Instead we
create it every time the conmon process for the container is started.
This guarantees us that the container cleanup process is started with
the correct settings.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
CRIU supports checkpoint/restore of file locks. This feature is
required to checkpoint/restore containers running applications
such as MySQL.
Signed-off-by: Radostin Stoyanov <radostin@redhat.com>
The netns cleanup code is checking if there are running containers; this
can fail if you run several libpod instances with different root/runroot.
To fix it we use one netns for each libpod instance. To prevent name
conflicts we use a hash of the static dir as part of the name.
Previously this worked because we would use the CNI files to check if
the netns was still in use, but this is no longer possible with netavark.
[NO NEW TESTS NEEDED]
Fixes#12306
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
structure.
Resolves a discrepancy between the types used in inspect for docker and podman.
This causes a panic when using the docker client against podman when the
secondary IP fields in the `NetworkSettings` inspect field are populated.
Fixes containers#12165
Signed-off-by: Federico Gimenez <fgimenez@redhat.com>
Some field names are confusing. Change them so that they make more sense
to the reader.
Since these fields are only in the main branch we can safely rename them
without worrying about backwards compatibility.
Note we have to change the field names in netavark too.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Podman adds a few environment variables by default, and
currently there is no way to get rid of them from your container.
This option will allow you to specify which defaults you don't
want.
--unsetenv-all will remove all default environment variables.
Default environment variables can come from podman builtin,
containers.conf or from the container image.
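A sketch (--unsetenv-all is named above; the per-variable form --unsetenv is an assumption here, and the variable and image are illustrative):

```console
$ podman run --rm --unsetenv TERM alpine env      # drop a single default variable
$ podman run --rm --unsetenv-all alpine env       # drop all default variables
```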
Fixes: https://github.com/containers/podman/issues/11836
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When reading logs from the journal, keep going after the container
exits, in case it gets restarted.
Events logged to the journal via the normal paths don't include
CONTAINER_ID_FULL, so don't bother adding it to the "history" event we
use to force at least one entry for the container to show up in the log.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Honor custom `target` if specified while running or creating containers
with secret `type=mount`.
Example:

```console
$ podman run -it --secret token,type=mount,target=TOKEN ubi8/ubi:latest bash
```
Signed-off-by: Aditya Rajan <arajan@redhat.com>
This commit adds port forwarding logic directly into podman. The
podman-machine cni plugin is no longer needed.
The following new features are supported:
- works with cni, netavark and slirp4netns
- ports can use the hostIP to bind instead of hard coding 0.0.0.0
- gvproxy no longer listens on 0.0.0.0:7777 (requires a new gvproxy
version)
- support the udp protocol
With this we no longer need podman-machine-cni and should remove it from
the packaging. There is also a change to make sure we are backwards
compatible with old configs which include this plugin.
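For example, the host-IP binding mentioned above could look like this against a machine VM (address, port, and image are illustrative):

```console
$ podman run -d -p 127.0.0.1:8080:80 nginx   # bound only on 127.0.0.1 of the machine host instead of 0.0.0.0
```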
Fixes #11528
Fixes #11728
[NO NEW TESTS NEEDED] We have no podman machine test at the moment.
Please test this manually on your system.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This adds the parameter '--print-stats' to 'podman container restore'.
With '--print-stats' Podman will measure how long Podman itself, the OCI
runtime and CRIU require to restore a checkpoint and print out this
information. CRIU already creates process restore statistics which are
simply read in addition to the added measurements. In contrast to just
printing out the ID of the restored container, Podman will now print
out JSON:
# podman container restore --latest --print-stats
{
    "podman_restore_duration": 305871,
    "container_statistics": [
        {
            "Id": "47b02e1d474b5d5fe917825e91ac653efa757c91e5a81a368d771a78f6b5ed20",
            "runtime_restore_duration": 140614,
            "criu_statistics": {
                "forking_time": 5,
                "restore_time": 67672,
                "pages_restored": 14
            }
        }
    ]
}
The output contains 'podman_restore_duration' which contains the
number of microseconds Podman required to restore the checkpoint. The
output also includes 'runtime_restore_duration' which is the time
the runtime needed to restore that specific container. Each container
also includes 'criu_statistics' which displays the timing information
collected by CRIU.
Signed-off-by: Adrian Reber <areber@redhat.com>
This adds the parameter '--print-stats' to 'podman container checkpoint'.
With '--print-stats' Podman will measure how long Podman itself, the OCI
runtime and CRIU require to create a checkpoint and print out this
information. CRIU already creates checkpointing statistics which are
simply read in addition to the added measurements. In contrast to just
printing out the ID of the checkpointed container, Podman will now print
out JSON:
# podman container checkpoint --latest --print-stats
{
    "podman_checkpoint_duration": 360749,
    "container_statistics": [
        {
            "Id": "25244244bf2efbef30fb6857ddea8cb2e5489f07eb6659e20dda117f0c466808",
            "runtime_checkpoint_duration": 177222,
            "criu_statistics": {
                "freezing_time": 100657,
                "frozen_time": 60700,
                "memdump_time": 8162,
                "memwrite_time": 4224,
                "pages_scanned": 20561,
                "pages_written": 2129
            }
        }
    ]
}
The output contains 'podman_checkpoint_duration' which contains the
number of microseconds Podman required to create the checkpoint. The
output also includes 'runtime_checkpoint_duration' which is the time
the runtime needed to checkpoint that specific container. Each container
also includes 'criu_statistics' which displays the timing information
collected by CRIU.
Signed-off-by: Adrian Reber <areber@redhat.com>
To make testing easier we can overwrite the network backend with the
global `--network-backend` option.
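For example (the backend values shown illustrate the cni/netavark choice; container name is illustrative):

```console
$ podman --network-backend netavark run -d --name web alpine top
$ podman --network-backend netavark ps
```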
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
make sure the /etc/mtab symlink is created inside the rootfs when /etc
is a symlink.
Closes: https://github.com/containers/podman/issues/12189
[NO NEW TESTS NEEDED] there is already a test case
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>