When running under systemd there is no need to create yet another
cgroup for the container.
With conmon-delegated, the current cgroup will be split into two
sub-cgroups:
- supervisor
- container
The supervisor cgroup will hold conmon and the podman process, while
the container cgroup is used by the OCI runtime (using the cgroupfs
backend).
Closes: https://github.com/containers/libpod/issues/6400
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This will allow containers that connect to the network namespace to
use the container name directly.
For example you can do something like
podman run -ti --name foobar fedora ping foobar
While we can do this with hostname now, this seems more natural.
Also, if another container connects to this container's network, it
can do
podman run --network container:foobar fedora ping foobar
and connect to the original container, without having to discover the name.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
fix the check for c.state.NetNS == nil. Its value is changed in the
first code block, so the condition is always true in the second one
and we end up running slirp4netns twice.
Closes: https://github.com/containers/libpod/issues/6538
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
do not set the hostname when joining an UTS namespace, as it could be
owned by a different userns.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
when running in a new userns, make sure the resolv.conf and hosts
files bind mounted from another container are accessible to root in
the userns.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
In FIPS mode we expect to work off of the Mountpath, not the Rundir path.
Using the Rundir path was causing FIPS mode checks to fail.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add more default options parsing
Switch to using --time as opposed to --timeout to better match Docker.
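For example, stopping a container with the Docker-compatible flag now looks like this (a usage sketch; the container name is illustrative):

```console
$ podman stop --time 10 mycontainer
```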
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We previously attempted to work within CNI to do this, without
success. So let's do it manually, instead. We know where the
files should live, so we can remove them ourselves. This
solves issues around sudden reboots where containers do not have
time to fully tear themselves down, and leave IP address
allocations which, for various reasons, are not stored in tmpfs
and persist through reboot.
Fixes #5433
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This corrects a regression from Podman 1.4.x where container exec
sessions inherited supplemental groups from the container, iff
the exec session did not specify a user.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Enables most of the network-related functionality from
`podman run` in `podman pod create`. Custom CNI networks can be
specified, host networking is supported, and DNS options can be
configured.
Also enables host networking in `podman play kube`.
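A usage sketch of the new pod options; the pod names, network name, and DNS server are illustrative:

```console
$ podman pod create --name web --network mynet --dns 10.0.0.2
$ podman pod create --name hostpod --network host
```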
Fixes #2808
Fixes #3837
Fixes #4432
Fixes #4718
Fixes #4770
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
`gocritic` is a powerful linter that helps in preventing certain kinds
of errors as well as enforcing a coding style.
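Assuming the lint target is wired through golangci-lint (an assumption, not stated in this commit), the new linter can be exercised locally with something like:

```console
$ golangci-lint run --enable=gocritic ./...
```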
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When doing a checkpoint with --export the root file-system diff was not
working as expected. Instead of getting the changes from the running
container down to the highest storage layer, it got the changes from the
highest layer down to that layer's parent. For a one-layer container this
could mean that the complete root file-system is part of the checkpoint.
With this commit the code now uses the same functionality as 'podman
diff', which correctly diffs the root file-system, including tracking
deleted files.
This also removes the non-working helper functions from libpod/diff.go.
Signed-off-by: Adrian Reber <areber@redhat.com>
The code currently assumes that the container we delegate network
namespace to will never further delegate to another container, so
when looking up things like /etc/hosts and /etc/resolv.conf we
won't pull the correct files from the chained dependency. The
changes to resolve this are relatively simple - just need to keep
looking until we find a container without NetNsCtr set.
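A sketch of the chained case this fixes; the container names and the host entry are illustrative:

```console
$ podman run -d --name a --add-host example.test:10.88.0.10 fedora sleep 1000
$ podman run -d --name b --network container:a fedora sleep 1000
$ podman run --rm --network container:b fedora cat /etc/hosts
```

The last container delegates to b, which itself delegates to a, so its /etc/hosts should now come from a, the root of the chain.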
Fixes #4626
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Trying to checkpoint a container started with --rm works, but it makes
no sense as the container, including the checkpoint, will be deleted
after writing the checkpoint. This commit inhibits checkpointing
containers started with '--rm' unless '--export' is used. If the
checkpoint is exported it can easily be restored from the exported
checkpoint, even if '--rm' is used. To restore a container from a
checkpoint it is even necessary to manually run 'podman rm' if the
container is not started with '--rm'.
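A usage sketch of the allowed combination; the names and paths are illustrative:

```console
$ podman run -d --rm --name web fedora sleep 1000
$ podman container checkpoint --export=/tmp/web.tar.gz web
$ podman container restore --import=/tmp/web.tar.gz
```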
Signed-off-by: Adrian Reber <areber@redhat.com>
When Libpod removes a container, there is the possibility that
removal will not fully succeed. The most notable problems are
storage issues, where the container cannot be removed from
c/storage.
When this occurs, we were faced with a choice. We can keep the
container in the state, appearing in `podman ps` and available for
other API operations, but likely unable to do any of them as it's
been partially removed. Or we can remove it very early and clean
up after it's already gone. We have, until now, used the second
approach.
The problem that arises is intermittent failures to remove
storage. We end up removing a container, failing to remove its
storage, and ending up with a container permanently stuck in
c/storage that we can't remove with the normal Podman CLI, can't
use the name of, and generally can't interact with. A notable
cause is when Podman is hit by a SIGKILL midway through removal,
which can consistently cause `podman rm` to fail to remove
storage.
We now add a new state for containers that are in the process of
being removed, ContainerStateRemoving. We set this at the
beginning of the removal process. It notifies Podman that the
container cannot be used anymore, but preserves it in the DB
until it is fully removed. This will allow Remove to be run on
these containers again, which should successfully remove storage
if the first attempt failed.
Fixes #3906
Signed-off-by: Matthew Heon <mheon@redhat.com>
When restoring a container with user namespace, the user namespace is
created by the OCI runtime, and the network namespace is created after
the user namespace to ensure correct ownership.
In this case PostConfigureNetNS will be set and the value of
c.state.NetNS would be nil. Hence, the following error occurs:
$ sudo podman run --name cr \
--uidmap 0:1000:500 \
-d docker.io/library/alpine \
/bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
$ sudo podman container checkpoint cr
$ sudo podman container restore cr
...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x13a5e3c]
Signed-off-by: Radostin Stoyanov <rstoyanov1@gmail.com>
Pull in changes to pkg/secrets/secrets.go that add the
logic to disable FIPS mode if a pod/container has a
label set.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Refactor the `RuntimeConfig` along with related code from libpod into
libpod/config. Note that this is a first step of consolidating code
into more coherent packages to make the code more maintainable and less
prone to regressions in the long run.
Some libpod definitions were moved to `libpod/define` to resolve
circular dependencies.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
when running in systemd mode on cgroups v1, make sure the
/sys/fs/cgroup/systemd/release_agent is masked, otherwise the container
is able to modify it and execute scripts on the host.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Previously, `podman container restore` with exported containers,
when told to create a new container based on the exported
checkpoint, would create a new container, with a new container
ID, but would not reset the cgroup path - which still contained
the ID of the original container.
If this was done multiple times, the result was two containers
with the same cgroup paths. Operations on these containers would
thus have a chance of crossing over to affect the other one; the
most notable was `podman rm` once it was changed to use the --all
flag when stopping the container; all processes in the cgroup,
including the ones in the other container, would be stopped.
Reset cgroups on restore to ensure that the path matches the ID
of the container actually being run.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
For future work, we need multiple implementations of the OCI
runtime, not just a Conmon-wrapped runtime matching the runc CLI.
As part of this, do some refactoring on the interface for exec
(move to a struct, not a massive list of arguments). Also, add
'all' support to Kill and Stop (supported by runc and used a bit
internally for removing containers).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
CNI expects that a DELETE be run before re-creating container
networks. If a reboot occurs quickly enough that containers can't
stop and clean up, that DELETE never happens, and Podman
currently wipes the old network info and thinks the state has
been entirely cleared. Unfortunately, that may not be the case on
the CNI side. Some things - like IP address reservations - may
not have been cleared.
To solve this, manually re-run CNI Delete on refresh. If the
container has already been deleted this seems harmless. If not,
it should clear lingering state.
Fixes: #3759
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
In order to run Podman with VM-based runtimes unprivileged, the
network must be set up prior to the container creation. Therefore
this commit modifies Podman to run rootless containers by:
1. create a network namespace
2. pass the netns persistent mount path to slirp4netns
   to create the tap interface
3. pass the netns path to the OCI spec, so the runtime can
   enter the netns
Closes #2897
Signed-off-by: Gabi Beyer <gabrielle.n.beyer@intel.com>
If the HOME environment variable is not set, make sure it is set to
the configuration found in the container /etc/passwd file.
It was previously depending on a runc behavior that always set HOME
when it was not set. The OCI runtime specifications do not require
HOME to be set, so move the logic to libpod.
Closes: https://github.com/debarshiray/toolbox/issues/266
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When --cgroupns=private is used we need to mount a new cgroup file
system so that it points to the correct namespace.
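A usage sketch (requires an OCI runtime with cgroup namespace support, see the link below):

```console
$ podman run --cgroupns=private -ti fedora cat /proc/self/cgroup
```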
Needs: https://github.com/containers/crun/pull/88
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This is mostly used with Systemd, which really wants to manage
CGroups itself when managing containers via unit file.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This change adds the following annotation to every container created by
podman:
```json
"Annotations": {
"io.containers.manager": "libpod"
}
```
The target of this annotation is to indicate which project in the containers
ecosystem is the major manager of a container when applications share
the same storage paths. This way projects can decide if they want to
manipulate the container or not. For example, since CRI-O and podman are
not using the same container library (libpod), CRI-O can skip podman
containers and provide the end user with more useful information.
A corresponding end-to-end test has been adapted as well.
Relates to: https://github.com/cri-o/cri-o/pull/2761
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
Previously, we only did this for volumes created at the same time
as the container. However, this is not correct behavior - Docker
does so for all named volumes, even those made with
'podman volume create' and mounted into a container later.
Fixes #3945
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When we fail to remove a container's SHM, that's an error, and we
need to report it as such. This may be part of our lingering
storage woes.
Also, remove MNT_DETACH. It may be another cause of the storage
removal failures.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When volume options and the local volume driver are specified,
the volume is intended to be mounted using the 'mount' command.
Supported options will be used to mount the volume before the
first container using it starts, and unmount the volume after the
last container using it dies.
This should work for any local filesystem, though at present I've
only tested with tmpfs and btrfs.
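A usage sketch with a tmpfs-backed volume; the volume name and option values are illustrative:

```console
$ podman volume create --opt type=tmpfs --opt device=tmpfs --opt o=size=100m myvol
$ podman run -ti -v myvol:/data fedora sh
```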
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
when CNI returns a list of DNS servers, we should add them under the
right conditions. The defined conditions are as follows:
- if the user provides DNS servers, they and only they are added.
- otherwise, if CNI returns a name server, it is added and a
  forwarding DNS instance is created for what was in resolv.conf.
- if neither of the above applies, the entries from the host's resolv.conf are used.
Signed-off-by: baude <bbaude@redhat.com>
commit 223fe64dc0 introduced the
regression.
When running on cgroups v1, bind mount only /sys/fs/cgroup/systemd as
rw, as the code did earlier.
Also, simplify the rootless code as it doesn't require any special
handling when using --systemd.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1737554
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If a container is restored multiple times from an exported checkpoint
with the help of '--import --name', the restore will fail if during
'podman run' a static container IP was set with '--ip'. The user can
tell the restore process to ignore the static IP with
'--ignore-static-ip'.
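A usage sketch; the names, IP address, and paths are illustrative:

```console
$ podman run -d --name web --ip 10.88.0.10 fedora sleep 1000
$ podman container checkpoint --export=/tmp/web.tar.gz web
$ podman container restore --import=/tmp/web.tar.gz --name web2 --ignore-static-ip
```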
Signed-off-by: Adrian Reber <areber@redhat.com>
when running on a cgroups v2 system, do not bind mount
the named hierarchy /sys/fs/cgroup/systemd as it doesn't exist
anymore. Instead bind mount the entire /sys/fs/cgroup.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This includes:
* Implement exec -i and fix some typos in description of -i docs
* pass failed runtime status to caller
* Add resize handling for a terminal connection
* Customize exec systemd-cgroup slice
* fix healthcheck
* fix top
* add --detach-keys
* Implement podman-remote exec (jhonce)
* Cleanup some orphaned code (jhonce)
* adapt remote exec for conmon exec (pehunt)
* Fix healthcheck and exec to match docs
* Introduce two new OCIRuntime errors to more comprehensively describe
  situations in which the runtime can error
* Use these different errors in branching for exit code in healthcheck and exec
* Set conmon to use new api version
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Signed-off-by: Peter Hunt <pehunt@redhat.com>
The newly added functionality to include the container's root
file-system changes into the checkpoint archive can now be explicitly
disabled, either during checkpoint or during restore.
If a container changes a lot of files during its runtime it might be
more effective to migrate the root file-system changes in some other
way and to not needlessly increase the size of the checkpoint archive.
If a checkpoint archive does not contain the root file-system changes,
applying them is automatically skipped. If the root file-system
changes are part of the checkpoint archive it is also possible to tell
Podman to ignore these changes.
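A usage sketch, assuming the option is named --ignore-rootfs (the commit text does not name the flag); the names and paths are illustrative:

```console
$ podman container checkpoint --export=/tmp/cp.tar.gz --ignore-rootfs mycontainer
$ podman container restore --import=/tmp/cp.tar.gz --ignore-rootfs
```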
Signed-off-by: Adrian Reber <areber@redhat.com>
One of the last limitations when migrating a container using Podman's
'podman container checkpoint --export=/path/to/archive.tar.gz' was
that it was necessary to manually handle changes to the container's root
file-system. The recommendation was to mount everything as --tmpfs where
the root file-system was changed.
This extends the checkpoint export functionality to also include all
changes to the root file-system in the checkpoint archive. The
checkpoint archive now includes a tarstream of the result from 'podman
diff'. This tarstream will be applied to the container's root file-system
before the container is restored.
With this any container can now be migrated, even if there are changes
to the root file-system.
There was some discussion before implementing this to base the root
file-system migration on 'podman commit', but it seemed wrong to do
a 'podman commit' before the migration as that would change the parent
layer the restored container is referencing. Probably not really a
problem, but it would have meant that a migrated container would always
reference a different storage top layer than it referenced during
initial creation.
Signed-off-by: Adrian Reber <areber@redhat.com>
During 'podman container checkpoint' the finished time was not set. This
resulted in a strange container status after checkpointing:
Exited (0) 292 years ago
During checkpointing, FinishedTime is now set to time.Now().
Signed-off-by: Adrian Reber <areber@redhat.com>
the compilation demands of having libpod in main are a burden for the
remote client compilations. to combat this, we should move the use of
libpod structs, vars, constants, and functions into the adapter code
where it will only be compiled by the local client.
this should result in cleaner code organization and smaller binaries. it
should also help if we ever need to compile the remote client on
non-Linux operating systems natively (not cross-compiled).
Signed-off-by: baude <bbaude@redhat.com>
Allow Podman containers to request to use a specific OCI runtime
if multiple runtimes are configured. This is the first step to
properly supporting containers in a multi-runtime environment.
The biggest changes are that all OCI runtimes are now initialized
when Podman creates its runtime, and containers now use the
runtime requested in their configuration (instead of always the
default runtime).
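A usage sketch, assuming both runc and crun are configured as runtimes in the Podman configuration; the container names are illustrative. Each container keeps using the runtime it was created with:

```console
$ podman --runtime runc run -d --name a fedora sleep 100
$ podman --runtime crun run -d --name b fedora sleep 100
```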
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When using slirp4netns, be sure the built-in DNS server is the first
one to be used.
Closes: https://github.com/containers/libpod/issues/3277
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The option to restore a container from an external checkpoint archive
(podman container restore -i /tmp/checkpoint.tar.gz) restores a
container with the same name and same ID as it had before checkpointing.
This commit adds the option '--name,-n' to 'podman container restore'.
With this option the restored container gets the name specified after
'--name,-n' and a new ID. This way it is possible to restore one
container multiple times.
If a container is restored with a new name Podman will not try to
request the same IP address for the container as it had during
checkpointing. This implicitly assumes that if a container is restored
from a checkpoint archive with a different name, it will be
restored multiple times, and restoring a container multiple times with
the same IP address will fail as each IP address can only be used once.
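A usage sketch restoring the same exported checkpoint twice under different names:

```console
$ podman container restore -i /tmp/checkpoint.tar.gz -n copy1
$ podman container restore -i /tmp/checkpoint.tar.gz -n copy2
```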
Signed-off-by: Adrian Reber <areber@redhat.com>
This commit adds an option to the checkpoint command to export a
checkpoint into a tar.gz file as well as importing a checkpoint tar.gz
file during restore. With all checkpoint artifacts in one file it is
possible to easily transfer a checkpoint and thus enable container
migration in Podman. With the following steps it is possible to migrate
a running container from one system (source) to another (destination).
Source system:
* podman container checkpoint -l -e /tmp/checkpoint.tar.gz
* scp /tmp/checkpoint.tar.gz destination:/tmp
Destination system:
* podman pull 'container-image-as-on-source-system'
* podman container restore -i /tmp/checkpoint.tar.gz
The exported tar.gz file contains the checkpoint image as created by
CRIU and a few additional JSON files describing the state of the
checkpointed container.
Now the container is running on the destination system with the same
state it had during checkpointing. If the container is kept running
on the source system with the checkpoint flag '-R', the result will be
that the same container is running on two different hosts.
Signed-off-by: Adrian Reber <areber@redhat.com>
This adds a couple of functions and structure members needed in the next
commit to make container migration actually work. This just splits off
the functions which do not modify existing code.
Signed-off-by: Adrian Reber <areber@redhat.com>
Commit 27f9e23a0b already prevents setting the profile when creating
the spec but we also need to avoid loading and setting the profile when
creating the container.
Fixes: #3112
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The --read-only-tmpfs option causes podman to mount tmpfs on /run, /tmp, and /var/tmp
if the container is running in read-only mode.
The default is true, so you would need to execute a command like
--read-only --read-only-tmpfs=false to turn off this behaviour.
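A usage sketch of the two modes; the image and path are illustrative. The first command succeeds because /tmp is a tmpfs, the second fails because the whole file-system is read-only:

```console
$ podman run --rm --read-only fedora touch /tmp/scratch
$ podman run --rm --read-only --read-only-tmpfs=false fedora touch /tmp/scratch
```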
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The SELinux label for the CRIU dump.log was explicitly set in Podman.
The label for the restore.log, however, was not. This just moves the code
to label the log file into a function and calls that function during
checkpoint and restore.
Signed-off-by: Adrian Reber <areber@redhat.com>
* refactor command output to use one function
* Add new worker pool parallel operations
* Implement podman-remote umount
* Refactored podman wait to use printCmdOutput()
Signed-off-by: Jhon Honce <jhonce@redhat.com>
This swaps the previous handling (parse all volume mounts on the
container and look for ones that might refer to named volumes)
for the new, explicit named volume lists stored per-container.
It also deprecates force-removing volumes that are in use. I
don't know how we want to handle this yet, but leaving containers
that depend on a volume that no longer exists is definitely not
correct.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We have an issue in the current implementation where the cleanup
process is not able to umount the storage as it is running in a
separate namespace.
Simplify the implementation for user namespaces by not using an
intermediate mount namespace. To do this, we need to relax the
permissions on the parent directories and allow browsing
them. Containers that are running without a user namespace will still
maintain mode 0700 on their directory.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Specifically, we want to be able to specify whether resolv.conf
and /etc/hosts will be created and bind-mounted into the
container.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
when --uidmap is used, the user won't be able to access
/var/lib/containers/storage/volumes. Use the intermediate mount
namespace, that is accessible to root in the container, for mounting
the volumes inside the container.
Closes: https://github.com/containers/libpod/issues/2713
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When creating a new image volume to be mounted into a container, we need to
make sure the new volume matches the ownership and permissions of the path
that it will be mounted on.
For example, if a volume inside of a container image is owned by the database
UID, we want the volume that is mounted onto that path to be owned by the
database UID.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When mounting a tmpfs, runc attempts to make the directory it
will be mounted at. Unfortunately, Golang's os.MkdirAll deals
very poorly with symlinks being part of the path. I looked into
fixing this in runc, but it's honestly much easier to just ensure
we don't trigger the issue on our end.
Fixes BZ #1686610
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The logic of deleting and recreating /etc/hosts and
/etc/resolv.conf only makes sense when we're the one that creates
the files - when we don't, it just removes them, and there's
nothing left to use.
Fixes #2602
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Trying to remove circular dependencies between libpod and buildah.
First step to move pkg content from libpod to buildah.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Before, any container with a netNS dependency simply used its dependency container's hosts file, and didn't abide by its own configuration (mainly --add-host). Fix this by always appending to the dependency container's hosts file, creating one if necessary.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
To be able to use OCI runtimes which do not implement checkpoint/restore
this adds a check to the checkpoint code path and the checkpoint/restore
tests to see if the runtime knows about the checkpoint subcommand. If the
OCI runtime in use does not implement checkpoint/restore, the tests are
skipped and the actual 'podman container checkpoint' returns an error.
Signed-off-by: Adrian Reber <areber@redhat.com>
CRIU creates a log file during checkpointing in .../userdata/dump.log.
The problem with this file is that CRIU injects parasite code into
the container processes, and this parasite code also writes to the same
log file. At this point a process from the inside of the container is
trying to access the log file on the outside of the container and
SELinux prohibits this. To enable writing to the log file from the
injected parasite code, this commit creates an empty log file and labels
the log file with c.MountLabel(). CRIU uses existing files when writing
its logs, so the log file label persists and now, with the correct label,
SELinux no longer blocks access to the log file.
Signed-off-by: Adrian Reber <areber@redhat.com>
We should just bind mount the original container's /etc/resolv.conf and /etc/hosts
into the new container. Changes in the resolv.conf and hosts should be seen
by all containers. This matches Docker behaviour.
In order to make this work, these files need to have a shared
SELinux label.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If the user specifies a network namespace and /etc/netns/XXX/resolv.conf
exists, we should use this rather than /etc/resolv.conf.
Also fail more cleanly if the user specifies an invalid network namespace.
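A usage sketch with a named network namespace created via ip netns; the namespace name and name server are illustrative:

```console
$ ip netns add demo
$ mkdir -p /etc/netns/demo
$ echo "nameserver 10.0.0.1" > /etc/netns/demo/resolv.conf
$ podman run --rm --net ns:/run/netns/demo fedora cat /etc/resolv.conf
```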
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fix builtin volumes to work with podman volume
Currently builtin volumes are not recorded in podman volumes when
they are created automatically. This patch fixes this.
Remove container volumes when requested
Currently the --volume option on podman remove does nothing.
This will implement the changes needed to remove the volumes
if the user requests it.
When removing a volume make sure that no container uses the volume.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The json-iterator package will panic on attempting to use
MarshalIndent with a non-space indentation. This is sort of silly
but swapping from tabs to spaces is not a big issue for us, so
let's work around the silly panic.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Apply the default AppArmor profile at container initialization to cover
all possible code paths (i.e., podman-{start,run}) before executing the
runtime. This allows moving most of the logic into pkg/apparmor.
Also make the loading and application of the default AppArmor profile
version-independent by checking for the `libpod-default-` prefix and
overwriting the profile in the run-time spec if needed.
The initial run-time spec of the container differs a bit from the
one applied once the container has started, which results in
displaying a potentially outdated AppArmor profile when inspecting
a container. To fix that, load the container config from the file
system if present and use it to display the data.
Fixes: #2107
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>