Commit Graph

296 Commits

Author SHA1 Message Date
Daniel J Walsh 123b8c627d
if container is not in a pid namespace, stop all processes
When a container is in a PID namespace, it is enough to send
the stop signal to PID 1 of the namespace; only send signals
to all processes in the container when the container is not in
a PID namespace.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-12-19 13:33:17 -05:00
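
A rough sketch, in Go, of the branching this commit describes; the parameters stand in for libpod's container lookups, and the real code goes through the OCI runtime rather than signalling directly:

    package sketch

    import "syscall"

    // stopProcesses illustrates the rule above: with a private PID namespace,
    // signalling PID 1 of the namespace is enough; without one, every process
    // belonging to the container must be signalled individually.
    func stopProcesses(hasPIDNamespace bool, initPID int, allPIDs []int, sig syscall.Signal) error {
        if hasPIDNamespace {
            return syscall.Kill(initPID, sig)
        }
        for _, pid := range allPIDs {
            if err := syscall.Kill(pid, sig); err != nil && err != syscall.ESRCH {
                return err // ESRCH just means the process is already gone
            }
        }
        return nil
    }
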
Matthew Heon bd44fd5c81 Reap exec sessions on cleanup and removal
We currently rely on exec sessions being removed from the state
by the Exec() API itself, on detecting the session stopping. This
is not a reliable method, though. The Podman frontend for exec
could be killed before the session ended, or another Podman
process could be holding the lock and prevent the update (most
notably in `run --rm`, when a container with an active exec
session is stopped).

To resolve this, add a function to reap active exec sessions from
the state, and use it on cleanup (to clear sessions after the
container stops) and remove (to do the same when --rm is passed).
This is a bit more complicated than it ought to be because Kata
and company exist, and we can't guarantee the exec session has a
PID on the host, so we have to plumb this through to the OCI
runtime.

Fixes #4666

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-12-12 16:35:37 -05:00
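
Roughly what reaping could look like when the runtime does report a host PID; signal 0 is the classic liveness probe. This is a simplified stand-in for the plumbing through the OCI runtime mentioned above:

    package sketch

    import "syscall"

    // ExecSession is a stand-in for libpod's stored exec-session record.
    type ExecSession struct {
        ID  string
        PID int // host PID of the exec process; 0 if the runtime has none
    }

    // reapExecSessions drops sessions whose process no longer exists. Sending
    // signal 0 performs no action but still reports whether the PID is alive.
    func reapExecSessions(sessions map[string]*ExecSession) {
        for id, s := range sessions {
            if s.PID == 0 {
                continue // VM-based runtimes may not expose a host PID
            }
            if err := syscall.Kill(s.PID, 0); err == syscall.ESRCH {
                delete(sessions, id)
            }
        }
    }
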
Matthew Heon 25cc43c376 Add ContainerStateRemoving
When Libpod removes a container, there is the possibility that
removal will not fully succeed. The most notable problems are
storage issues, where the container cannot be removed from
c/storage.

When this occurs, we were faced with a choice. We can keep the
container in the state, appearing in `podman ps` and available for
other API operations, but likely unable to do any of them as it's
been partially removed. Or we can remove it very early and clean
up after it's already gone. We have, until now, used the second
approach.

The problem that arises is intermittent problems removing
storage. We end up removing a container, failing to remove its
storage, and ending up with a container permanently stuck in
c/storage that we can't remove with the normal Podman CLI, can't
use the name of, and generally can't interact with. A notable
cause is when Podman is hit by a SIGKILL midway through removal,
which can consistently cause `podman rm` to fail to remove
storage.

We now add a new state for containers that are in the process of
being removed, ContainerStateRemoving. We set this at the
beginning of the removal process. It notifies Podman that the
container cannot be used anymore, but preserves it in the DB
until it is fully removed. This allows Remove to be run on
these containers again, so that storage which failed to remove
the first time can be cleaned up.

Fixes #3906

Signed-off-by: Matthew Heon <mheon@redhat.com>
2019-11-19 15:38:03 -05:00
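
A compressed illustration of the idea, with the state set (and saved) before storage removal is attempted; the names echo the commit but the flow is simplified:

    package sketch

    import "fmt"

    // ContainerStatus stands in for libpod's container state enum.
    type ContainerStatus int

    const (
        ContainerStateConfigured ContainerStatus = iota
        ContainerStateExited
        // ContainerStateRemoving marks a container that is partway through
        // removal: unusable, but kept in the DB until removal fully succeeds.
        ContainerStateRemoving
    )

    // remove records the Removing state first, so a failed storage removal
    // leaves a visible, re-removable container instead of an orphaned one.
    func remove(setState func(ContainerStatus) error, removeStorage func() error) error {
        if err := setState(ContainerStateRemoving); err != nil {
            return err
        }
        if err := removeStorage(); err != nil {
            return fmt.Errorf("removing storage (container stays in Removing state): %v", err)
        }
        return nil
    }
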
Peter Hunt fa415f07a1 Also delete winsz fifo
In conmon 2.0.3, we add another fifo to handle window resizing. This needs to be cleaned up for commands like restore, where the same path is used.

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-11-15 12:44:15 -05:00
Matthew Heon 5f8bf3d07d Add ensureState helper for checking container state
We have a lot of checks for container state scattered throughout
libpod. Many of these need to ensure the container is in one of a
given set of states so an operation may safely proceed.
Previously there was no set way of doing this, so we'd use unique
boolean logic for each one. Introduce a helper to standardize
state checks.

Note that this is only intended to replace checks for multiple
states. A simple check for one state (ContainerStateRunning, for
example) should remain a straight equality, and not use this new
helper.

Signed-off-by: Matthew Heon <mheon@redhat.com>
2019-10-28 13:09:01 -04:00
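
An approximation of the helper (the real one is a Container method returning libpod's typed state error), just to show the shape of the check:

    package sketch

    import "fmt"

    // ContainerStatus stands in for libpod's container state type.
    type ContainerStatus string

    // ensureState returns nil only if current is one of the allowed states,
    // replacing ad-hoc boolean chains scattered through libpod.
    func ensureState(current ContainerStatus, allowed ...ContainerStatus) error {
        for _, s := range allowed {
            if current == s {
                return nil
            }
        }
        return fmt.Errorf("container is in invalid state %q", current)
    }

    // e.g. ensureState(state, "created", "exited") before an operation that
    // needs a non-running container.
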
Matthew Heon cab7bfbb21 Add a MissingRuntime implementation
When a container is created with a given OCI runtime, but then it
is uninstalled or removed from the configuration file, Libpod
presently reacts very poorly. The EvictContainer code can
potentially remove these containers, but we still can't see them
in `podman ps` (aside from the massive logrus.Errorf messages
they create).

Providing a minimal OCI runtime implementation for missing
runtimes allows us to behave better. We'll be able to retrieve
containers from the database, though we still pop up an error for
each missing runtime. For containers which are stopped, we can
remove them as normal.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-10-15 15:59:20 -04:00
Nalin Dahyabhai 17a7596af4 Unwrap errors before comparing them
Unwrap errors before directly comparing them with errors defined by the
storage and image libraries.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2019-10-14 13:49:06 -04:00
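
libpod wrapped errors with github.com/pkg/errors at the time, so a direct comparison against a library sentinel fails; the fix is to compare the unwrapped cause. The sentinel below is a placeholder, not the real c/storage or c/image error:

    package sketch

    import (
        stderrors "errors"

        "github.com/pkg/errors"
    )

    // errNoSuchImage is a placeholder for a sentinel defined by the storage or
    // image libraries.
    var errNoSuchImage = stderrors.New("no such image")

    func lookupImage(name string) error {
        // Callers usually add context before returning the error.
        return errors.Wrapf(errNoSuchImage, "error locating image %q", name)
    }

    func imageExists(name string) bool {
        err := lookupImage(name)
        // err == errNoSuchImage would be false because of the wrapping;
        // unwrap to the root cause before comparing.
        return errors.Cause(err) != errNoSuchImage
    }
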
Matthew Heon 6f630bc09b Move OCI runtime implementation behind an interface
For future work, we need multiple implementations of the OCI
runtime, not just a Conmon-wrapped runtime matching the runc CLI.

As part of this, do some refactoring on the interface for exec
(move to a struct, not a massive list of arguments). Also, add
'all' support to Kill and Stop (supported by runc and used a bit
internally for removing containers).

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-10-10 10:19:32 -04:00
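
A hypothetical outline of the refactor: a runtime interface that a Conmon-wrapped runc implementation (or anything else) can satisfy, plus an options struct replacing exec's long argument list. Method and field names are illustrative, not libpod's exact API:

    package sketch

    import "syscall"

    // ExecOptions gathers what used to be a long list of exec arguments.
    type ExecOptions struct {
        Cmd        []string
        Env        map[string]string
        Terminal   bool
        User       string
        WorkDir    string
        DetachKeys string
    }

    // OCIRuntime abstracts the runtime so Conmon+runc is one implementation
    // among several.
    type OCIRuntime interface {
        CreateContainer(ctrID string) error
        StartContainer(ctrID string) error
        // `all` mirrors `runc kill --all`: signal every process in the
        // container, not just its init.
        KillContainer(ctrID string, sig syscall.Signal, all bool) error
        StopContainer(ctrID string, timeout uint, all bool) error
        ExecContainer(ctrID string, opts ExecOptions) (pid int, err error)
        DeleteContainer(ctrID string) error
    }
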
baude c35d71e3da catch runc v2 error
when runc returns an error about not being v2 compliant, catch the error
and log an actionable message for users via logrus.

Signed-off-by: baude <bbaude@redhat.com>
2019-10-09 09:15:18 -05:00
Giuseppe Scrivano ec940b08c6
rootless: do not attempt a CNI refresh
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-10-01 14:10:04 +02:00
OpenShift Merge Robot e87012de12
Merge pull request #4065 from mheon/unconditional_conmon_rm
Unconditionally remove conmon files before starting
2019-09-27 15:08:14 -07:00
Matthew Heon b57d2f4cc7 Force a CNI Delete on refreshing containers
CNI expects that a DELETE be run before re-creating container
networks. If a reboot occurs quickly enough that containers can't
stop and clean up, that DELETE never happens, and Podman
currently wipes the old network info and thinks the state has
been entirely cleared. Unfortunately, that may not be the case on
the CNI side. Some things - like IP address reservations - may
not have been cleared.

To solve this, manually re-run CNI Delete on refresh. If the
container has already been deleted this seems harmless. If not,
it should clear lingering state.

Fixes: #3759

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-09-24 09:52:11 -04:00
Matthew Heon 407fba4942 Unconditionally remove conmon files before starting
We've been seeing a lot of issues (ref: #4061, but there are
others) where Podman hiccups on trying to start a container,
because some temporary files have been retained and Conmon will
not overwrite them.

If we're calling start() we can safely assume that we really want
those files gone so the container starts without error, so invoke
the cleanup routine. It's relatively cheap (four file removes) so
it shouldn't hurt us that much.

Also contains a small simplification to the removeConmonFiles
logic - we don't need to stat-then-remove when ignoring ENOENT is
fine.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-09-20 09:30:15 -04:00
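
The simplification mentioned at the end reduces to the standard remove-and-ignore-ENOENT pattern; the file names here are illustrative:

    package sketch

    import (
        "os"
        "path/filepath"
    )

    // removeConmonFiles deletes conmon's per-container temporary files before
    // a start, treating "already gone" as success, with no stat beforehand.
    func removeConmonFiles(bundleDir string) error {
        for _, name := range []string{"exit", "oom", "ctl", "winsz"} {
            p := filepath.Join(bundleDir, name)
            if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
                return err
            }
        }
        return nil
    }
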
Matthew Heon cabe1345f8 Unmounting a container that is already unmounted is OK
We should not be throwing errors because the operation we wanted
to perform is already done. Now, it is definitely strange that a
container is actually unmounted, but shows as mounted in the DB -
if this reoccurs in a way where we can investigate, it's worth
tearing into.

Fixes #4033

Signed-off-by: Matthew Heon <mheon@redhat.com>
2019-09-16 09:22:26 -04:00
Daniel J Walsh 88ebc33840
Report errors when trying to pause rootless containers
If you are running a rootless container on cgroups v1,
you cannot pause the container.  We need to report the proper error
if this happens.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-09-13 08:29:46 -04:00
baude 7b68cd0b3d clean up after healthcheck execs
when executing a healthcheck, we were not cleaning up after exec's use
of a socket.  we now remove the socket file and ignore the error if, for
some reason, it does not exist.

Fixes: #3962

Signed-off-by: baude <bbaude@redhat.com>
2019-09-12 14:30:46 -05:00
OpenShift Merge Robot 7ac6ed3b4b
Merge pull request #3581 from mheon/no_cgroups
Support running containers without CGroups
2019-09-11 00:58:46 +02:00
Matthew Heon c2284962c7 Add support for launching containers without CGroups
This is mostly used with Systemd, which really wants to manage
CGroups itself when managing containers via unit file.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-09-10 10:52:37 -04:00
Matthew Heon b6106341fb When first mounting any named volume, copy up
Previously, we only did this for volumes created at the same time
as the container. However, this is not correct behavior - Docker
does so for all named volumes, even those made with
'podman volume create' and mounted into a container later.

Fixes #3945

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-09-09 17:17:39 -04:00
Matthew Heon de9a394fcf Correctly report errors on unmounting SHM
When we fail to remove a container's SHM, that's an error, and we
need to report it as such. This may be part of our lingering
storage woes.

Also, remove MNT_DETACH. It may be another cause of the storage
removal failures.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-09-05 17:12:27 -04:00
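
A sketch of the stricter unmount, under the assumption that a plain umount2 with no flags is used: without MNT_DETACH a busy mount fails immediately instead of lingering, and only "not a mount point" (EINVAL) is tolerated:

    package sketch

    import (
        "fmt"
        "syscall"
    )

    // unmountSHM unmounts the container's /dev/shm synchronously and reports
    // every failure except "this was not a mount point".
    func unmountSHM(shmDir string) error {
        if err := syscall.Unmount(shmDir, 0); err != nil && err != syscall.EINVAL {
            return fmt.Errorf("unmounting container SHM %s: %v", shmDir, err)
        }
        return nil
    }
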
Matthew Heon a760e325f3 Add ability for volumes with options to mount/umount
When volume options and the local volume driver are specified,
the volume is intended to be mounted using the 'mount' command.
Supported options will be used to mount the volume before the
first container using it starts, and unmount the volume after the
last container using it dies.

This should work for any local filesystem, though at present I've
only tested with tmpfs and btrfs.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-09-05 17:12:27 -04:00
Valentin Rothberg 56a65cffac generate systemd: support pods and generating files
Support generating systemd unit files for a pod.  Podman generates one
unit file for the pod including the PID file for the infra container's
conmon process and one unit file for each container (excluding the infra
container).

Note that this change implies refactorings in the `pkg/systemdgen` API.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-08-21 17:28:30 +02:00
Valentin Rothberg 909ab59419 container stop: kill conmon
Old versions of conmon have a bug where they create the exit file before
closing open file descriptors causing a race condition when restarting
containers with open ports since we cannot bind the ports as they're not
yet closed by conmon.

Killing the old conmon PID is ~okay since it forces the FDs of old
conmons to be closed, while it's a NOP for newer versions which should
have exited already.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-08-05 09:16:18 +02:00
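
The kill itself is small; ESRCH just means the old conmon already exited, which is the normal case for fixed versions:

    package sketch

    import "syscall"

    // killConmon force-kills the recorded conmon PID on container stop so its
    // leaked file descriptors (and bound ports) are released.
    func killConmon(conmonPID int) error {
        if conmonPID == 0 {
            return nil
        }
        if err := syscall.Kill(conmonPID, syscall.SIGKILL); err != nil && err != syscall.ESRCH {
            return err
        }
        return nil
    }
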
Matthew Heon 9dcd76e369 Ensure we generate a 'stopped' event on force-remove
When forcibly removing a container, we are initiating an explicit
stop of the container, which is not reflected in 'podman events'.
Swap to using our standard 'stop()' function instead of a custom
one for force-remove, and move the event into the internal stop
function (so internal calls also register it).

This does add one more database save() to `podman remove`. This
should not be a terribly serious performance hit, and does have
the desirable side effect of making things generally safer.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-31 17:29:14 -04:00
Matthew Heon ebacfbd091 podman: fix memleak caused by renaming and not deleting the exit file

If the container exit code needs to be retained, it cannot be retained
in tmpfs, because libpod runs in a memcg itself so it can't leave
traces with a daemon-less design.

This wasn't a memleak detectable by kmemleak for example. The kernel
never lost track of the memory and there was no erroneous refcounting
either. The reference count dependencies however are not easy to track
because when a refcount is increased, there's no way to tell who's
still holding the reference. In this case it was a single page of
tmpfs pagecache holding a refcount that kept pinned a whole hierarchy
of dying memcg, slab kmem, cgroups, unreachable kernfs nodes and the
respective dentries and inodes. Such a problem wouldn't happen if the
exit file was stored in a regular filesystem because the pagecache
could be reclaimed in such case under memory pressure. The tmpfs page
can be swapped out, but that's not enough to release the memcg with
CONFIG_MEMCG_SWAP_ENABLED=y.

No amount of more aggressive kernel slab shrinking could have solved
this. Not even assigning slab kmem of dying cgroups to alive cgroup
would fully solve this. The only way to free the memory of a dying
cgroup when a struct page still references it, would be to loop over
all "struct page" in the kernel to find which one is associated with
the dying cgroup which is a O(N) operation (where N is the number of
pages and can reach billions). Linking all the tmpfs pages to the
memcg would cost less during memcg offlining, but it would waste lots
of memory and CPU globally. So this can't be optimized in the kernel.

A cronjob running this command can act as a workaround and will allow
all slab cache to be released, not just the single tmpfs pages.

    rm -f /run/libpod/exits/*

This patch solved the memleak with a reproducer, booting with
cgroup.memory=nokmem and with selinux disabled. The reason memcg kmem
and selinux were disabled for testing of this fix, is because kmem
greatly decreases the kernel effectiveness in reusing partial slab
objects. cgroup.memory=nokmem is strongly recommended at least for
workstation usage. selinux needs to be further analyzed because it
causes further slab allocations.

The upstream podman commit used for testing is
1fe2965e4f (v1.4.4).

The upstream kernel commit used for testing is
f16fea666898dbdd7812ce94068c76da3e3fcf1e (v5.2-rc6).

Reported-by: Michele Baldessari <michele@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>

<Applied with small tweaks to comments>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-31 17:28:42 -04:00
OpenShift Merge Robot 6665269ab8
Merge pull request #3233 from wking/fatal-requested-hook-directory-does-not-exist
libpod/container_internal: Make all errors loading explicitly configured hook dirs fatal
2019-07-29 16:39:08 +02:00
Peter Hunt a1a79c08b7 Implement conmon exec
This includes:
	Implement exec -i and fix some typos in description of -i docs
	pass failed runtime status to caller
	Add resize handling for a terminal connection
	Customize exec systemd-cgroup slice
	fix healthcheck
	fix top
	add --detach-keys
	Implement podman-remote exec (jhonce)
	* Cleanup some orphaned code (jhonce)
	adapt remote exec for conmon exec (pehunt)
	Fix healthcheck and exec to match docs
		Introduce two new OCIRuntime errors to more comprehensively describe situations in which the runtime can error
		Use these different errors in branching for exit code in healthcheck and exec
	Set conmon to use new api version

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-07-22 15:57:23 -04:00
baude db826d5d75 golangci-lint round #3
this is the third round of preparing to use the golangci-lint on our
code base.

Signed-off-by: baude <bbaude@redhat.com>
2019-07-21 14:22:39 -05:00
baude a78c885397 golangci-lint pass number 2
clean up and prepare to migrate to the golangci-linter

Signed-off-by: baude <bbaude@redhat.com>
2019-07-11 09:13:06 -05:00
OpenShift Merge Robot edc7f52c95
Merge pull request #3425 from adrianreber/restore-mount-label
Set correct SELinux label on restored containers
2019-07-08 20:31:59 +02:00
baude 1d36501f96 code cleanup
clean up code identified as problematic by GoLand's inspection

Signed-off-by: baude <bbaude@redhat.com>
2019-07-08 09:18:11 -05:00
OpenShift Merge Robot f7407f2eb5
Merge pull request #3472 from haircommander/generate-volumes
generate kube with volumes
2019-07-04 22:22:07 +02:00
baude fec1de6ef4 trivial cleanups from golang
the results of a code cleanup performed by the GoLand IDE.

Signed-off-by: baude <bbaude@redhat.com>
2019-07-03 15:41:33 -05:00
Matthew Heon 38c6199b80 Wipe PID and ConmonPID in state after container stops
Matches the behavior of Docker.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-02 19:10:51 -04:00
Matthew Heon a1bb1987cc Store Conmon's PID in our state and display in inspect
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-02 18:52:55 -04:00
Peter Hunt aeabc45cce Improve parsing of mounts
Specifically, we were needlessly doing a double lookup to find which config mounts were user volumes. Improve this by refactoring a bit of code from inspect

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-07-02 15:18:44 -04:00
baude 8561b99644 libpod removal from main (phase 2)
this is phase 2 for the removal of libpod from main.

Signed-off-by: baude <bbaude@redhat.com>
2019-06-27 07:56:24 -05:00
Giuseppe Scrivano 72cf0c81e8
libpod: use pkg/cgroups instead of containerd/cgroups
use the new implementation for dealing with cgroups.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-06-26 13:17:02 +02:00
baude dd81a44ccf remove libpod from main
the compilation demands of having libpod in main are a burden for the
remote client compilations.  to combat this, we should move the use of
libpod structs, vars, constants, and functions into the adapter code
where it will only be compiled by the local client.

this should result in cleaner code organization and smaller binaries. it
should also help if we ever need to compile the remote client on
non-Linux operating systems natively (not cross-compiled).

Signed-off-by: baude <bbaude@redhat.com>
2019-06-25 13:51:24 -05:00
Adrian Reber 220e169cc1
Provide correct SELinux mount-label for restored container
Restoring a container from a checkpoint archive creates a complete
new root file-system. This file-system needs to have the correct SELinux
label or most things in that restored container will fail. Running
processes are not as problematic as newly exec()'d processes (internally
or via 'podman exec').

This patch tells the storage setup which label should be used to mount
the container's root file-system.

Signed-off-by: Adrian Reber <areber@redhat.com>
2019-06-25 14:55:11 +02:00
Matthew Heon c233a12772 Add additional debugging when refreshing locks
Signed-off-by: Matthew Heon <mheon@redhat.com>
2019-06-21 16:00:39 -04:00
Matthew Heon 92bae8d308 Begin adding support for multiple OCI runtimes
Allow Podman containers to request to use a specific OCI runtime
if multiple runtimes are configured. This is the first step to
properly supporting containers in a multi-runtime environment.

The biggest changes are that all OCI runtimes are now initialized
when Podman creates its runtime, and containers now use the
runtime requested in their configuration (instead of always the
default runtime).

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-06-19 17:08:43 -04:00
Daniel J Walsh 629017bb19
When you change the storage driver we ignore the storage-options
The storage driver and the storage options in storage.conf should
match, but if you change the storage driver via the command line
then we need to nil out the default storage options from storage.conf.

If the user wants to change the storage driver and use storage options,
they need to specify them on the command line.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-06-08 06:20:31 -04:00
Adrian Reber 0028578b43
Added support to migrate containers
This commit adds an option to the checkpoint command to export a
checkpoint into a tar.gz file as well as importing a checkpoint tar.gz
file during restore. With all checkpoint artifacts in one file it is
possible to easily transfer a checkpoint and thus enabling container
migration in Podman. With the following steps it is possible to migrate
a running container from one system (source) to another (destination).

 Source system:
  * podman container checkpoint -l -e /tmp/checkpoint.tar.gz
  * scp /tmp/checkpoint.tar.gz destination:/tmp

 Destination system:
  * podman pull 'container-image-as-on-source-system'
  * podman container restore -i /tmp/checkpoint.tar.gz

The exported tar.gz file contains the checkpoint image as created by
CRIU and a few additional JSON files describing the state of the
checkpointed container.

Now the container is running on the destination system with the same
state just as during checkpointing. If the container is kept running
on the source system with the checkpoint flag '-R', the result will be
that the same container is running on two different hosts.

Signed-off-by: Adrian Reber <areber@redhat.com>
2019-06-03 22:05:12 +02:00
Adrian Reber a05cfd24bb
Added helper functions for container migration
This adds a couple of functions and structure members needed in the next
commit to make container migration actually work. This just splits off
the functions which do not modify existing code.

Signed-off-by: Adrian Reber <areber@redhat.com>
2019-06-03 22:05:12 +02:00
W. Trevor King 317a5c72c6 libpod/container_internal: Make all errors loading explicitly configured hook dirs fatal
Remove this IsNotExist out which was added along with the rest of this
block in f6a2b6bf2b (hooks: Add pre-create hooks for runtime-config
manipulation, 2018-11-19, #1830).  Besides the obvious "hook directory
does not exist", it was swallowing the less-obvious "hook command does
not exist".  And either way, folks are likely going to want non-zero
podman exits when we fail to load a hook directory they explicitly
pointed us towards.

Signed-off-by: W. Trevor King <wking@tremily.us>
2019-05-29 20:19:41 -07:00
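
In effect the change removes a special case from the directory-loading loop; a simplified version of the resulting behaviour (the real loader lives in pkg/hooks):

    package sketch

    import (
        "io/ioutil"
        "os"
    )

    // loadHookDirs reads each explicitly configured hooks directory. A missing
    // directory used to be skipped via os.IsNotExist; now any failure to load
    // a directory the user pointed us at is returned, so podman exits non-zero.
    func loadHookDirs(dirs []string) ([]os.FileInfo, error) {
        var all []os.FileInfo
        for _, dir := range dirs {
            entries, err := ioutil.ReadDir(dir)
            if err != nil {
                // previously: if os.IsNotExist(err) { continue }
                return nil, err
            }
            all = append(all, entries...)
        }
        return all, nil
    }
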
Giuseppe Scrivano 3788da9344
libpod: prefer WaitForFile to polling
replace two usages of kwait.ExponentialBackoff with WaitForFile,
which uses inotify when possible.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-05-21 10:07:31 +02:00
Matthew Heon 5cbb3e7e9d Use standard remove functions for removing pod ctrs
Instead of rewriting the logic, reuse the standard logic we use
for removing containers, which is much better tested.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-10 14:14:29 -04:00
Matthew Heon faae3a7065 When refreshing after a reboot, force lock allocation
After a reboot, when we refresh Podman's state, we retrieved the
lock from the fresh SHM instance, but we did not mark it as
allocated to prevent it being handed out to other containers and
pods.

Provide a method for marking locks as in-use, and use it when we
refresh Podman state after a reboot.

Fixes #2900

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-06 14:17:54 -04:00
Matthew Heon 5c4fefa533 Small code fix
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 11:42:34 -04:00
Matthew Heon d7c367aa61 Address review comments on restart policy
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
Matthew Heon cafb68e301 Add a restart event, and make one during restart policy
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
Matthew Heon 56356d7027 Restart policy should not run if a container is running
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
Matthew Heon 7ba1b609aa Move to using constants for valid restart policy types
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
Matthew Heon f4db6d5cf6 Add support for retry count with --restart flag
The on-failure restart option supports restarting only a given
number of times. To do this, we need one additional field in the
DB to track restart count (which conveniently fills a field in
Inspect we weren't populating), plus some plumbing logic.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
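
A simplified decision function for the policy; the field names and some details (for example treating a zero retry limit as unlimited) are assumptions, not libpod's verbatim logic:

    package sketch

    // shouldRestart sketches the on-failure policy: restart only if the
    // container actually failed, was not stopped by the user, and has not used
    // up its retry budget (maxRetries == 0 is taken to mean "unlimited" here).
    func shouldRestart(policy string, exitCode int32, stoppedByUser bool, restartCount, maxRetries uint) bool {
        switch policy {
        case "always":
            return !stoppedByUser
        case "on-failure":
            if stoppedByUser || exitCode == 0 {
                return false
            }
            return maxRetries == 0 || restartCount < maxRetries
        default: // "no" or unset
            return false
        }
    }
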
Matthew Heon 0d73ee40b2 Add container restart policy to Libpod & Podman
This initial version does not support restart count, but it works
as advertised otherwise.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
Matthew Heon 3fb52f4fbb Add a StoppedByUser field to the DB
This field indicates that a container was explicitly stopped by
an API call, and did not exit naturally. It's used when
implementing restart policy for containers.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
OpenShift Merge Robot ccf28a89bd
Merge pull request #3039 from mheon/podman_init
Add podman init command
2019-05-02 20:45:44 +02:00
Giuseppe Scrivano cc9ef4e61b
container: drop rootless check
we don't need to treat the rootless case differently now that we use a
single user namespace.

Signed-off-by: Giuseppe Scrivano <giuseppe@scrivano.org>
2019-05-01 18:49:08 +02:00
Matthew Heon 0b2c9c2acc Add basic structure of podman init command
As part of this, rework the number of workers used by various
Podman tasks to match original behavior - need an explicit
fallthrough in the switch statement for that block to work as
expected.

Also, trivial change to Podman cleanup to work on initialized
containers - we need to reset to a different state after cleaning
up the OCI runtime.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-01 11:12:24 -04:00
OpenShift Merge Robot f929b9e4d5
Merge pull request #2501 from mtrmac/fixed-hook-order
RFC: Make hooks sort order locale-independent
2019-04-14 03:09:41 -07:00
OpenShift Merge Robot 61fa40b256
Merge pull request #2913 from mheon/get_instead_of_lookup
Use GetContainer instead of LookupContainer for full ID
2019-04-12 09:38:48 -07:00
Matthew Heon f7951c8776 Use GetContainer instead of LookupContainer for full ID
All IDs in libpod are stored as a full container ID. We can get a
container by full ID faster with GetContainer (which directly
retrieves) than LookupContainer (which finds a match, then
retrieves). No reason to use Lookup when we have full IDs present
and available.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-04-12 10:59:00 -04:00
Matthew Heon 27d56c7f15 Expand debugging for container cleanup errors
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-04-11 11:05:00 -04:00
Miloslav Trmač 97c9115c02 Potentially breaking: Make hooks sort order locale-independent
Don't sort OCI hooks using the locale collation order; it does not
make sense for the same system-wide directory to be interpreted differently
depending on the user's LC_COLLATE setting, and the language-specific
collation order can even change over time.

Besides, the current collation order determination code has never worked
with the most common LC_COLLATE values like en_US.UTF-8.

Ideally, we would like to just order based on Unicode code points
to be reliably stable, but the existing implementation is case-insensitive,
so we are forced to rely on the unicode case mapping tables at least.

(This gives up on canonicalization and width-insensitivity, potentially
breaking users who rely on these previously documented properties.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-04-09 21:08:44 +02:00
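
The resulting ordering can be pictured as a plain case-insensitive sort; strings.ToLower is used here as a stand-in for the Unicode case mapping the commit relies on:

    package sketch

    import (
        "sort"
        "strings"
    )

    // sortHookNames orders hook file names case-insensitively without
    // consulting LC_COLLATE, so a system-wide hooks directory is read the
    // same way for every user and locale.
    func sortHookNames(names []string) {
        sort.Slice(names, func(i, j int) bool {
            return strings.ToLower(names[i]) < strings.ToLower(names[j])
        })
    }
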
Jhon Honce 09ff62429a Implement podman-remote rm
* refactor command output to use one function
* Add new worker pool parallel operations
* Implement podman-remote umount
* Refactored podman wait to use printCmdOutput()

Signed-off-by: Jhon Honce <jhonce@redhat.com>
2019-04-09 11:55:26 -07:00
Matthew Heon d245c6df29 Switch Libpod over to new explicit named volumes
This swaps the previous handling (parse all volume mounts on the
container and look for ones that might refer to named volumes)
for the new, explicit named volume lists stored per-container.

It also deprecates force-removing volumes that are in use. I
don't know how we want to handle this yet, but leaving containers
that depend on a volume that no longer exists is definitely not
correct.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-04-04 12:26:29 -04:00
Giuseppe Scrivano 849548ffb8
userns: do not use an intermediate mount namespace
We have an issue in the current implementation where the cleanup
process is not able to umount the storage as it is running in a
separate namespace.

Simplify the implementation for user namespaces by not using an
intermediate mount namespace.  To do this, we need to relax the
permissions on the parent directories and allow browsing
them. Containers that are running without a user namespace, will still
maintain mode 0700 on their directory.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-03-29 14:04:44 +01:00
baude bb69004b8c podman health check phase3
podman will now start a transient service and timer for healthchecks.
this handles the tracking of the timing for health checks.

added the 'started' status which represents the time that a container is
in its start-period.

the systemd timing can be disabled with an env variable of
DISABLE_HC_SYSTEMD="true".

added filter for ps where --filter health=[starting, healthy, unhealthy]
can now be used.

Signed-off-by: baude <bbaude@redhat.com>
2019-03-22 14:58:44 -05:00
Giuseppe Scrivano 4ac08d3aa1
ps: fix segfault if the store is not initialized
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-03-19 15:01:54 +01:00
Daniel J Walsh 9d81be9614
Make sure builtin volumes have the same ownership and permissions as image
When creating a new image volume to be mounted into a container, we need to
make sure the new volume matches the ownership and permissions of the path
that it will be mounted on.

For example, if a volume inside of a container image is owned by the database
UID, we want the volume that is mounted into the container to be owned by the
database UID.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-15 10:44:44 -04:00
Giuseppe Scrivano 508e08410b
container: check containerInfo.Config before accessing it
check that containerInfo.Config is not nil before trying to access
it.

Closes: https://github.com/containers/libpod/issues/2654

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-03-15 10:39:33 +01:00
Matthew Heon 3b5805d521 Add event on container death
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-03-13 10:18:51 -04:00
baude ca1e76ff63 Add event logging to libpod, even display to podman
In libpod, we now log major events that occur.  These events
can be displayed using the `podman events` command. Each
event contains:

* Type (container, image, volume, pod...)
* Status (create, rm, stop, kill, ....)
* Timestamp in RFC3339Nano format
* Name (if applicable)
* Image (if applicable)

The format of the event and the varlink endpoint are not to
be considered stable until cockpit has done its enablement.

Signed-off-by: baude <bbaude@redhat.com>
2019-03-11 15:08:59 -05:00
Giuseppe Scrivano e22fc79f39
errors: fix error cause comparison
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-03-11 10:08:38 +01:00
W. Trevor King 69cb8639b4 libpod/container_internal: Split locale at the first dot, etc.
We're going to feed this into Go's BCP 47 language parser.  Language
tags have the form [1]:

  language
  ["-" script]
  ["-" region]
  *("-" variant)
  *("-" extension)
  ["-" privateuse]

and locales have the form [2]:

  [language[_territory][.codeset][@modifier]]

The modifier is useful for collation, but Go's language-based API
[3] does not provide a way for us to supply it.  This code converts
our locale to a BCP 47 language by stripping the dot and everything
after it and replacing the first underscore, if any, with a hyphen.  This will
avoid errors like [4]:

  WARN[0000] failed to parse language "en_US.UTF-8": language: tag is not well-formed

when feeding the result to language.Parse(...).

[1]: https://tools.ietf.org/html/bcp47#section-2.1
[2]: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html#tag_08_02
[3]: https://github.com/golang/go/issues/25340
[4]: https://github.com/containers/libpod/issues/2494

Signed-off-by: W. Trevor King <wking@tremily.us>
2019-03-05 22:02:50 -08:00
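
The conversion itself is a couple of string operations before handing the result to golang.org/x/text/language; this sketch mirrors the description above (split at the first dot, first underscore becomes a hyphen):

    package sketch

    import (
        "strings"

        "golang.org/x/text/language"
    )

    // localeToTag turns a POSIX locale such as "en_US.UTF-8" into a BCP 47
    // tag ("en-US"): drop the codeset after the first dot, then replace the
    // first underscore with a hyphen before parsing.
    func localeToTag(locale string) (language.Tag, error) {
        if i := strings.IndexByte(locale, '.'); i >= 0 {
            locale = locale[:i]
        }
        return language.Parse(strings.Replace(locale, "_", "-", 1))
    }
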
Peter Hunt 6c8f2072aa Append hosts to dependency container's /etc/hosts file
Before, any container with a netNS dependency simply used its dependency container's hosts file, and didn't honor its own configuration (mainly --add-host). Fix this by always appending to the dependency container's hosts file, creating one if necessary.

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-03-05 13:15:25 -05:00
Peter Hunt a784071902 Don't start running dependencies
Before, a container being run or started in a pod always restarted the infra container. This was because we didn't take running dependencies into account. Fix this by filtering for dependencies in the running state.

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-02-19 09:28:58 -05:00
Sebastian Jug 7141f97270 OpenTracing support added to start, stop, run, create, pull, and ps
Drop context.Context field from cli.Context

Signed-off-by: Sebastian Jug <sejug@redhat.com>
2019-02-18 09:57:08 -05:00
Peter Hunt 81804fc464 pod infra container is started before a container in a pod is run, started, or attached.
Prior, a pod would have to be started immediately when created, leading to confusion about what a pod state should be immediately after creation. The problem was podman run --pod ... would error out if the infra container wasn't started (as it is a dependency). Fix this by allowing for recursive start, where each of the container's dependencies are started prior to the new container. This is only applied to the case where a new container is attached to a pod.

Also rework container_api Start, StartAndAttach, and Init functions, as there was some duplicated code, which made addressing the problem easier to fix.

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-02-15 16:39:24 -05:00
Daniel J Walsh 52df1fa7e0
Fix volume handling in podman
Fix builtin volumes to work with podman volume

Currently builtin volumes are not recorded in podman volumes when
they are created automatically. This patch fixes this.

Remove container volumes when requested

Currently the --volume option on podman remove does nothing.
This will implement the changes needed to remove the volumes
if the user requests it.

When removing a volume make sure that no container uses the volume.

Signed-off-by: Daniel J Walsh dwalsh@redhat.com
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-02-14 13:21:52 -05:00
Matthew Heon 19a03976f7 Retain a copy of container exit file on cleanup
When cleaning up containers, we presently remove the exit file
created by Conmon, to ensure that if we restart the container, we
won't have conflicts when Conmon tries writing a new exit file.

Unfortunately, we need to retain that exit file (at least until
we get a workable events system), so we can read it in cases
where the container has been removed before 'podman run' can read
its exit code.

So instead of removing it, rename it, so there's no conflict with
Conmon, and we can still read it later.

Fixes: #1640

Signed-off-by: Matthew Heon <mheon@redhat.com>
2019-02-12 12:57:11 -05:00
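
The swap from remove to rename is small; the layout of the exits directory and the "-old" suffix here are illustrative:

    package sketch

    import (
        "os"
        "path/filepath"
    )

    // retainExitFile renames conmon's exit file during cleanup instead of
    // deleting it, so the exit code can still be read after the container is
    // gone, while leaving the original path free for the next conmon.
    func retainExitFile(exitDir, ctrID string) error {
        src := filepath.Join(exitDir, ctrID)
        dst := filepath.Join(exitDir, ctrID+"-old")
        if err := os.Rename(src, dst); err != nil && !os.IsNotExist(err) {
            return err
        }
        return nil
    }
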
Matthew Heon 3c52accbc9 Preserve exited state across reboot
Instead of unconditionally resetting to ContainerStateConfigured
after a reboot, allow containers in the Exited state to remain
there, preserving their exit code in podman ps after a reboot.

This does not affect the ability to use and restart containers
after a reboot, as the Exited state can be used (mostly)
interchangeably with Configured for starting and managing
containers.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-02-05 15:37:56 -05:00
baude eadaa5fb42 podman-remote inspect
base enablement of the inspect command.

Signed-off-by: baude <bbaude@redhat.com>
2019-01-18 15:43:11 -06:00
Matthew Heon 33889c642d Ensure that wait exits on state transition
When waiting for a container, there is a long interval between
status checks - plenty long enough for the container in question
to start, then subsequently be cleaned up and returned to Created
state to be restarted. As such, we can't wait on container state
to go to Stopped or Exited - anything that is not Running or
Paused indicates the container is dead.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-01-16 10:33:01 -05:00
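
The resulting wait loop amounts to "anything that is not Running or Paused means done"; a stripped-down sketch with a callback in place of a real state lookup:

    package sketch

    import "time"

    // waitForExit polls the container state and returns as soon as the
    // container is neither running nor paused; by then it has exited, even if
    // it was already cleaned up back to a created state between two polls.
    func waitForExit(getState func() string, interval time.Duration) {
        for {
            switch getState() {
            case "running", "paused":
                time.Sleep(interval)
            default:
                return
            }
        }
    }
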
Matthew Heon 167d50a9fa Move all libpod/ JSON references over to jsoniter
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-01-10 15:48:09 -05:00
W. Trevor King f6a2b6bf2b hooks: Add pre-create hooks for runtime-config manipulation
There's been a lot of discussion over in [1] about how to support the
NVIDIA folks and others who want to be able to create devices
(possibly after having loaded kernel modules) and bind userspace
libraries into the container.  Currently that's happening in the
middle of runc's create-time mount handling before the container
pivots to its new root directory with runc's incorrectly-timed
prestart hook trigger [2].  With this commit, we extend hooks with a
'precreate' stage to allow trusted parties to manipulate the config
JSON before calling the runtime's 'create'.

I'm recycling the existing Hook schema from pkg/hooks for this,
because we'll want Timeout for reliability and When to avoid the
expense of fork/exec when a given hook does not need to make config
changes [3].

[1]: https://github.com/opencontainers/runc/pull/1811
[2]: https://github.com/opencontainers/runc/issues/1710
[3]: https://github.com/containers/libpod/issues/1828#issuecomment-439888059

Signed-off-by: W. Trevor King <wking@tremily.us>
2019-01-08 21:06:17 -08:00
Matthew Heon d4b2f11601 Convert pods to SHM locks
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
2019-01-04 09:51:09 -05:00
Matthew Heon 3de560053f Convert containers to SHM locking
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
2019-01-04 09:51:09 -05:00
Matthew Heon 945d0e8700 Log container command before starting the container
Runc does not produce helpful error messages when the container's
command is not found, so print the command ourselves.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-01-02 12:11:50 -05:00
Daniel J Walsh df99522c67
Fixes to handle /dev/shm correctly.
We had two problems with /dev/shm. First, if you mounted the
container read-only, then /dev/shm was also mounted read-only.
This is a bug; a tmpfs directory should be read/write within
a read-only container.

The second problem is that we were ignoring a user-mounted /dev/shm
from the host.

If the user specified

podman run -d -v /dev/shm:/dev/shm ...

We were dropping this mount and still using the internal mount.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-12-24 09:03:53 -05:00
Daniel J Walsh c657dc4fdb
Switch all references to image.ContainerConfig to image.Config
This will more closely match what Docker is doing.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-12-21 15:59:34 -05:00
Matthew Heon bc57ecec42 Prevent a second lookup of user for image volumes
Instead of forcing another user lookup when mounting image
volumes, just use the information we looked up when we started
generating the spec.

This may resolve #1817

Signed-off-by: Matthew Heon <mheon@redhat.com>
2018-12-11 13:36:50 -05:00
Matthew Heon 176f76d794 Fix errors where OCI hooks directory does not exist
Signed-off-by: Matthew Heon <mheon@redhat.com>
2018-12-07 11:35:43 -05:00
baude 39a036e24d bind mount /etc/resolv.conf|hosts in pods
containers inside pods need to make sure they get /etc/resolv.conf
and /etc/hosts bind mounted when network is expected

Signed-off-by: baude <bbaude@redhat.com>
2018-12-06 13:56:57 -06:00
W. Trevor King a4b483c848 libpod/container_internal: Deprecate implicit hook directories
Part of the motivation for 800eb863 (Hooks supports two directories,
process default and override, 2018-09-17, #1487) was [1]:

> We only use this for override. The reason this was caught is people
> are trying to get hooks to work with CoreOS. You are not allowed to
> write to /usr/share... on CoreOS, so they wanted podman to also look
> at /etc, where users and third parties can write.

But we'd also been disabling hooks completely for rootless users.  And
even for root users, the override logic was tricky when folks actually
had content in both directories.  For example, if you wanted to
disable a hook from the default directory, you'd have to add a no-op
hook to the override directory.

Also, the previous implementation failed to handle the case where
there were hooks defined in the override directory but the default
directory did not exist:

  $ podman version
  Version:       0.11.2-dev
  Go Version:    go1.10.3
  Git Commit:    "6df7409cb5a41c710164c42ed35e33b28f3f7214"
  Built:         Sun Dec  2 21:30:06 2018
  OS/Arch:       linux/amd64
  $ ls -l /etc/containers/oci/hooks.d/test.json
  -rw-r--r--. 1 root root 184 Dec  2 16:27 /etc/containers/oci/hooks.d/test.json
  $ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
  time="2018-12-02T21:31:19-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
  time="2018-12-02T21:31:19-08:00" level=warning msg="failed to load hooks: {}%!(EXTRA *os.PathError=open /usr/share/containers/oci/hooks.d: no such file or directory)"

With this commit:

  $ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
  time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
  time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /etc/containers/oci/hooks.d"
  time="2018-12-02T21:33:07-08:00" level=debug msg="added hook /etc/containers/oci/hooks.d/test.json"
  time="2018-12-02T21:33:07-08:00" level=debug msg="hook test.json matched; adding to stages [prestart]"
  time="2018-12-02T21:33:07-08:00" level=warning msg="implicit hook directories are deprecated; set --hooks-dir="/etc/containers/oci/hooks.d" explicitly to continue to load hooks from this directory"
  time="2018-12-02T21:33:07-08:00" level=error msg="container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"process_linux.go:382: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: oh, noes!\\\\n\\\"\""

(I'd set up the hook to error out).  You can see that it's silently
ignoring the ENOENT for /usr/share/containers/oci/hooks.d and
continuing on to load hooks from /etc/containers/oci/hooks.d.

When it loads the hook, it also logs a warning-level message
suggesting that callers explicitly configure their hook directories.
That will help consumers migrate, so we can drop the implicit hook
directories in some future release.  When folks *do* explicitly
configure hook directories (via the newly-public --hooks-dir and
hooks_dir options), we error out if they're missing:

  $ podman --hooks-dir /does/not/exist run --rm docker.io/library/alpine echo 'successful container'
  error setting up OCI Hooks: open /does/not/exist: no such file or directory

I've dropped the trailing "path" from the old, hidden --hooks-dir-path
and hooks_dir_path because I think "dir(ectory)" is already enough
context for "we expect a path argument".  I consider this name change
non-breaking because the old forms were undocumented.

Coming back to rootless users, I've enabled hooks now.  I expect they
were previously disabled because users had no way to avoid
/usr/share/containers/oci/hooks.d which might contain hooks that
required root permissions.  But now rootless users will have to
explicitly configure hook directories, and since their default config
is from ~/.config/containers/libpod.conf, it's a misconfiguration if
it contains hooks_dir entries which point at directories with hooks
that require root access.  We error out so they can fix their
libpod.conf.

[1]: https://github.com/containers/libpod/pull/1487#discussion_r218149355

Signed-off-by: W. Trevor King <wking@tremily.us>
2018-12-03 12:54:30 -08:00
OpenShift Merge Robot b504623a11
Merge pull request #1317 from rhatdan/privileged
Disable mount options when running --privileged
2018-11-30 11:09:51 -08:00
Daniel J Walsh a5be3ffa4d
/dev/shm should be mounted even in rootless mode.
Currently we are mounting /dev/shm from disk; it should be from a tmpfs.
User namespaces support tmpfs mounts for non-root users, so this section of
code should work fine in both root and rootless mode.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-11-28 15:48:25 -05:00
baude 61d4db4806 Fix golang formatting issues
When running unit tests on newer golang versions, we observe failures with some
formatting types when not declared correctly.

Signed-off-by: baude <bbaude@redhat.com>
2018-11-28 09:26:24 -06:00
OpenShift Merge Robot effd63d6d5
Merge pull request #1848 from adrianreber/master
Add tcp-established to checkpoint/restore
2018-11-28 07:00:24 -08:00