We may want to ship configurations that include more than one
OCI runtime - for example, crun, runc, and kata, all
configured. However, we don't want to make these extra runtimes
hard requirements, so let's not fatally error when we can't find
their executables.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Allow Podman containers to request to use a specific OCI runtime
if multiple runtimes are configured. This is the first step to
properly supporting containers in a multi-runtime environment.
The biggest changes are that all OCI runtimes are now initialized
when Podman creates its runtime, and containers now use the
runtime requested in their configuration (instead of always the
default runtime).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
For docker scripting compatibility, allow json-file logging when creating args for conmon. That way, when json-file is supported, this special case can be easily removed.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
When available, using the on-disk spec will show full mount
options in use when the container is running, which can differ
from mount options provided in the original spec - on generating
the final spec, for example, we ensure that some form of root
propagation is set.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
While we're at it, rewrite how we populate it. There were several
potential segfaults in the optional spec.Process block, and a few
fields not being populated correctly versus 'docker inspect'.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Extend kill's error message to include the container's ID and state.
This addresses cases where error messages caused by other containers
may confuse users.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Currently we report the default cgroup manager as systemd, even if the user modified
libpod.conf. Also, the cgroup manager does not work in rootless mode. This
PR correctly identifies the default cgroup manager or reports that it is not supported.
Also add homeDir to correctly get the home directory if $HOME is not set. It will
attempt to get the home directory from /etc/passwd.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
- PREFIX is now saved in the binary at build time so that default
paths match installation paths.
- ETCDIR is also overridable in a similar way.
- DESTDIR is now applied on top of PREFIX for install/uninstall steps.
Previously, a DESTDIR=/foo PREFIX=/bar make would install into /bar,
rather than /foo/bar.
Signed-off-by: Lawrence Chan <element103@gmail.com>
This flag switches to removing containers directly from c/storage
and is mostly used to remove orphan containers.
It's a superior solution to our former one, which attempted
removal from storage under certain circumstances and could, under
some conditions, not trigger.
Also contains the beginning of support for storage in `ps` but
wiring that in is going to be a much bigger pain.
Fixes #3329.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We're no longer using either of these JSON libraries, dropped
them in favor of jsoniter. We can't completely remove ffjson as
c/storage uses it and can't easily migrate, but we can make sure
that libpod itself isn't doing anything with them anymore.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
add a new configuration option `runtime_supports_json` to list which OCI
runtimes support the --log-format=json option. If the runtime is not
listed here, libpod will redirect stdout/stderr from the runtime
process.
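For illustration, a libpod.conf entry along these lines (the runtime names here are only examples) marks the runtimes whose JSON log output libpod will consume directly:
    runtime_supports_json = ["crun", "runc"]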
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We were formerly dumping spec.Mount structs, with no care as to
whether it was user-generated or not - a relic of the very early
days when we didn't know whether a user made a mount or not.
Now that we do, match our output to Docker's dedicated mount
struct.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This way a tool can determine that the container exists but is in the
wrong state.
Since 126 is documented as:
**_126_** if the **_contained command_** cannot be invoked
It makes sense that the container would exit with this code.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When using slirp4netns, be sure the built-in DNS server is the first
one to be used.
Closes: https://github.com/containers/libpod/issues/3277
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The storage driver and the storage options in storage.conf should
match, but if you change the storage driver via the command line
then we need to nil out the default storage options from storage.conf.
If the user wants to change the storage driver and use storage options,
they need to specify them on the command line.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The option to restore a container from an external checkpoint archive
(podman container restore -i /tmp/checkpoint.tar.gz) restores a
container with the same name and same ID as it had before checkpointing.
This commit adds the option '--name,-n' to 'podman container restore'.
With this option the restored container gets the name specified after
'--name,-n' and a new ID. This way it is possible to restore one
container multiple times.
If a container is restored with a new name Podman will not try to
request the same IP address for the container as it had during
checkpointing. This implicitly assumes that if a container is restored
from a checkpoint archive with a different name, that it will be
restored multiple times and restoring a container multiple times with
the same IP address will fail as each IP address can only be used once.
Signed-off-by: Adrian Reber <areber@redhat.com>
This commit adds an option to the checkpoint command to export a
checkpoint into a tar.gz file as well as importing a checkpoint tar.gz
file during restore. With all checkpoint artifacts in one file it is
possible to easily transfer a checkpoint and thus enable container
migration in Podman. With the following steps it is possible to migrate
a running container from one system (source) to another (destination).
Source system:
* podman container checkpoint -l -e /tmp/checkpoint.tar.gz
* scp /tmp/checkpoint.tar.gz destination:/tmp
Destination system:
* podman pull 'container-image-as-on-source-system'
* podman container restore -i /tmp/checkpoint.tar.gz
The exported tar.gz file contains the checkpoint image as created by
CRIU and a few additional JSON files describing the state of the
checkpointed container.
Now the container is running on the destination system with the same
state just as during checkpointing. If the container is kept running
on the source system with the checkpoint flag '-R', the result will be
that the same container is running on two different hosts.
Signed-off-by: Adrian Reber <areber@redhat.com>
This adds a couple of functions and structure members needed in the next
commit to make container migration actually work. This just splits off
the functions which do not modify existing code.
Signed-off-by: Adrian Reber <areber@redhat.com>
Let's put inspect structs where they're actually being used. We
originally made pkg/inspect to solve circular import issues.
There are no more circular import issues.
Image structs remain for now; I'm focusing on container inspect.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Remove this IsNotExist check, which was added along with the rest of this
block in f6a2b6bf2b (hooks: Add pre-create hooks for runtime-config
manipulation, 2018-11-19, #1830). Besides the obvious "hook directory
does not exist", it was swallowing the less-obvious "hook command does
not exist". And either way, folks are likely going to want non-zero
podman exits when we fail to load a hook directory they explicitly
pointed us towards.
Signed-off-by: W. Trevor King <wking@tremily.us>
Add a journald reader that translates the journald entry to a k8s-file formatted line, to be added as a log line
Note: --follow with journald hasn't been implemented. It's going to be a larger undertaking that can wait.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
since we now enter the user namespace prior to reading the conmon.pid, we
can write the conmon.pid file again to the runtime dir.
This reverts commit 6c6a865436.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
when running in rootless mode, be sure psgo is honoring the user
namespace settings for huser and hgroup.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Commit 27f9e23a0b already prevents setting the profile when creating
the spec but we also need to avoid loading and setting the profile when
creating the container.
Fixes: #3112
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
replace two usages of kwait.ExponentialBackoff in favor of WaitForFile,
which uses inotify when possible.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
enable polling also when using inotify. It is generally useful to
have it, as under high load inotify can lose notifications. It also
solves a race condition where the file is created while the watcher
is being configured, which would otherwise wait until the timeout and then fail.
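A minimal sketch of the combined approach, assuming the fsnotify package and illustrative names; the real WaitForFile may differ in its details:
    import (
        "fmt"
        "os"
        "path/filepath"
        "time"

        "github.com/fsnotify/fsnotify"
    )

    // waitForFile waits until path exists, combining an inotify watch with
    // periodic polling so that lost notifications, or a file created before
    // the watch is in place, do not cause a spurious timeout.
    func waitForFile(path string, timeout time.Duration) error {
        watcher, err := fsnotify.NewWatcher()
        if err == nil {
            defer watcher.Close()
            if watcher.Add(filepath.Dir(path)) != nil {
                watcher = nil // fall back to polling only
            }
        }
        // Check after the watch is set up: the file may already be there.
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        ticker := time.NewTicker(25 * time.Millisecond)
        defer ticker.Stop()
        deadline := time.After(timeout)
        for {
            var events <-chan fsnotify.Event
            if watcher != nil {
                events = watcher.Events
            }
            select {
            case <-events:
            case <-ticker.C:
            case <-deadline:
                return fmt.Errorf("timed out waiting for %s", path)
            }
            if _, err := os.Stat(path); err == nil {
                return nil
            }
        }
    }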
Closes: https://github.com/containers/libpod/issues/2942
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Mainly add support for podman build using --overlay mounts.
Updates containers/image, which also adds better support for the new
registries.conf file.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
`strings.Split()` splits into a slice of size greater than 2,
which may result in loss of environment variable values
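A minimal sketch of the fix described above (the function name is illustrative):
    import "strings"

    // splitEnv splits a KEY=VALUE entry while preserving any '=' characters
    // inside the value, which a plain strings.Split would scatter across
    // extra slice elements.
    func splitEnv(entry string) (key, value string) {
        parts := strings.SplitN(entry, "=", 2)
        if len(parts) < 2 {
            return parts[0], ""
        }
        return parts[0], parts[1]
    }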
fixes #3132
Signed-off-by: Divyansh Kamboj <kambojdivyansh2000@gmail.com>
use a pause process to keep the user and mount namespace alive.
The pause process is created immediately on reload, and all successive
Podman processes will refer to it for joining the user & mount
namespaces.
This solves all the race conditions we had on joining the correct
namespaces using the conmon processes.
As a fallback if the join fails for any reason (e.g. the pause process
was killed), then we try to join the running containers as we were
doing before.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
once the default event logger was removed from libpod.conf, we need to
set the default based on whether the systemd build tag is used or not.
Signed-off-by: baude <bbaude@redhat.com>
StartAndAttach() runs start() in a goroutine, which can allow it
to fire after the caller returns - and thus, after the defer to
unlock the container lock has fired.
The start() call _must_ occur while the container is locked, or
else state inconsistencies may occur.
Fixes #3114
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
print a clearer error message when an unprivileged user attempts to
create a network using CNI.
Closes: https://github.com/containers/libpod/issues/3118
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If the systemd development files are not present on the system which
builds podman, then `podman events` will error on runtime creation.
Besides this, a warning will be printed when compiling podman.
This commit mainly exists because projects which depend on libpod
may not need podman event support and therefore should not have to
rely on the systemd headers.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
When using CGroupfs, we see races during pod removal between
removing the CGroup and the cleanup process starting (in the
CGroup, thus preventing removal).
The simplest way to avoid this is to prevent the forking of the
cleanup process. Conveniently, we can do this via the CGroup that
we already created for Conmon - we just need to update the PID
limit to 0, which completely inhibits new forks.
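A minimal sketch of the idea, assuming the cgroupfs pids controller path; the actual commit goes through Podman's cgroup handling rather than a raw write:
    import (
        "os"
        "path/filepath"
    )

    // inhibitForks sets the pids limit of conmon's cgroup to 0 so that no
    // new processes (such as the cleanup process) can be forked into it.
    func inhibitForks(conmonCgroup string) error {
        pidsMax := filepath.Join("/sys/fs/cgroup/pids", conmonCgroup, "pids.max")
        return os.WriteFile(pidsMax, []byte("0"), 0o644)
    }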
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Instead of rewriting the logic, reuse the standard logic we use
for removing containers, which is much better tested.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Ensure that, if an error occurs somewhere along the way when we
remove a pod, it's preserved until the end and returned, even as
we continue to remove the pod.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Removing a pod must first remove all containers in the pod.
Libpod requires the state to remain consistent at all times, so
references to a deleted pod must all be cleansed first.
Pods can have many containers in them. We presently iterate
through all of them, and if an error occurs trying to clean up
and remove any single container, we abort the entire operation
(but cannot recover anything already removed - pod removal is not
an atomic operation).
Because of this, if a removal error occurs partway through, we
can end up with a pod in an inconsistent state that is no longer
usable. What's worse, if the error is in the infra container, and
it's persistent, we get zombie pods - completely unable to be
removed.
When we saw some of these same issues with containers not in
pods, we modified the removal code there to aggressively purge
containers from the database, then try to clean up afterwards.
Take the same approach here, and make cleanup errors nonfatal.
Once we've gone ahead and removed containers, we need to see
pod deletion through to the end - we'll log errors but keep
going.
Also, fix some other small things (most notably, we didn't emit
events for the removed containers).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
After a reboot, when we refresh Podman's state, we retrieved the
lock from the fresh SHM instance, but we did not mark it as
allocated to prevent it being handed out to other containers and
pods.
Provide a method for marking locks as in-use, and use it when we
refresh Podman state after a reboot.
Fixes #2900
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We want to start supporting the registries.conf format.
Also start showing blocked registries in podman info
Fix sorting so all registries are listed together in podman info.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The on-failure restart option supports restarting only a given
number of times. To do this, we need one additional field in the
DB to track restart count (which conveniently fills a field in
Inspect we weren't populating), plus some plumbing logic.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This field indicates that a container was explicitly stopped by
an API call, and did not exit naturally. It's used when
implementing restart policy for containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Fallback to executing ps(1) in case we hit an unknown psgo descriptor.
This ensures backwards compatibility with docker-top, which was purely
ps(1) driven.
Also support comma-separated descriptors as input.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
We merged #2950 with some nits still remaining, as Giuseppe was
going on PTO. This addresses those small requested changes.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
As part of this, rework the number of workers used by various
Podman tasks to match original behavior - need an explicit
fallthrough in the switch statement for that block to work as
expected.
Also, trivial change to Podman cleanup to work on initialized
containers - we need to reset to a different state after cleaning
up the OCI runtime.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
build a podman-remote binary for windows that allows users to use the
remote client on windows and interact with podman on a linux system.
Signed-off-by: baude <bbaude@redhat.com>
it is useful to migrate existing containers to a new version of
podman. Currently, it is needed to migrate rootless containers that
were created with podman <= 1.2 to a newer version which requires all
containers to be running in the same user namespace.
Closes: https://github.com/containers/libpod/issues/2935
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The --read-only-tmpfs option causes podman to mount tmpfs on /run, /tmp, /var/tmp
if the container is running in read-only mode.
The default is true, so you would need to execute a command like
--read-only --read-only-tmpfs=false to turn off this behaviour.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Rework our expectations of how images that are derived from each other
look, so that we don't assume that an image that's derived from a base
image always adds layers relative to that base image.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add a context.Context parameter to Image.GetParent(), Image.IsParent(),
Image.GetChildren(), Image.Remove(), and Runtime.PruneImages().
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
add the ability for podman to read and write events to journald instead
of just a logfile. This can be controlled in libpod.conf with the
`events_logger` attribute, set to `journald` or `file`. The default will be
set to `journald`.
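For illustration, the corresponding libpod.conf setting would look something like this (exact syntax may differ):
    events_logger = "journald"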
Signed-off-by: baude <bbaude@redhat.com>
We refer to the pause_image and pause_container in the libpod.conf
description, but internally we had infra_image and infra_container.
This means that if the user made changes to the conf, they would not affect the
actual tool using libpod.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Currently in Docker if you commit with --change 'CMD a b c'
The command that gets added is
[/bin/sh -c "a b c"]
If you commit --change 'CMD ["a","b","c"]'
You get
[a b c]
This patch set makes podman match this behaviour.
Similar change required for Entrypoint.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If you pass in an invalid CHANGE ENV or LABEL option without the "=" character,
podman crashes.
I see that there were other problems with commit --change handling.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
if the mount was already umounted as part of the cleanup (i.e. being a
submount), the umount would fail.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
All IDs in libpod are stored as a full container ID. We can get a
container by full ID faster with GetContainer (which directly
retrieves) than LookupContainer (which finds a match, then
retrieves). No reason to use Lookup when we have full IDs present
and available.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The SELinux label for the CRIU dump.log was explicitly set in Podman.
The label for the restore.log, however, was not. This just moves the code
to label the log file into a function and calls that function during
checkpoint and restore.
Signed-off-by: Adrian Reber <areber@redhat.com>
'docker commit' will never include a container's volumes when
committing, without an explicit request through '--change'.
Podman, however, defaulted to including user volumes as image
volumes.
Make this behavior depend on a new flag, '--include-volumes',
and make the default behavior match Docker.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
when we run in a user namespace, there are cases where we do not have
enough privileges to mount a fresh sysfs on /sys. To circumvent this
limitation, we rbind /sys from the host. This also carries into the
container some mounts we probably don't want. We are also
limited by the kernel to use rbind instead of bind, as allowing a bind
would uncover paths that were not previously visible.
This is a slimmed down version of the intermediate mount namespace
logic we had before, where we only set /sys to slave, so the umounts
done to the storage by the cleanup process are propagated back to the
host. We also don't set up any new directory, so there is no
additional cleanup to do.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
create a namespace immediately if we need a refresh. This is
necessary to access the rootless storage.
Closes: https://github.com/containers/libpod/issues/2894
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Don't sort OCI hooks using the locale collation order; it does not
make sense for the same system-wide directory to be interpreted differently
depending on the user's LC_COLLATE setting, and the language-specific
collation order can even change over time.
Besides, the current collation order determination code has never worked
with the most common LC_COLLATE values like en_US.UTF-8.
Ideally, we would like to just order based on Unicode code points
to be reliably stable, but the existing implementation is case-insensitive,
so we are forced to rely on the unicode case mapping tables at least.
(This gives up on canonicalization and width-insensitivity, potentially
breaking users who rely on these previously documented properties.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
* refactor command output to use one function
* Add new worker pool parallel operations
* Implement podman-remote umount
* Refactored podman wait to use printCmdOutput()
Signed-off-by: Jhon Honce <jhonce@redhat.com>
when deleting a committed image, the path for deletion has an early exit
and the image remove event was not being triggered.
Signed-off-by: baude <bbaude@redhat.com>
The Commit test is blatantly wrong and testing buggy behavior. We
should be committing the destination, if anything - and more
likely nothing at all.
When force-removing volumes, don't remove the volumes of
containers we need to remove. This can lead to a chicken and the
egg problem where the container removes the volume before we can.
When we re-add volume locks this could lead to deadlocks. I don't
really want to deal with this, and this doesn't seem a
particularly harmful quirk, so we'll let this slide until we get
a bug report.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We were never using it. It's actually a potentially quite sizable
field (very expensive to decode an array of structs!). Removing
it should do no harm.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The flag should be substantially more durable, and no longer
relies on the create artifact.
This should allow it to properly handle our new named volume
implementation.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Now that named volumes must be explicitly enumerated rather than
passed in with all other volumes, we need to split normal and
named volumes up before passing them into libpod. This PR does
this.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This swaps the previous handling (parse all volume mounts on the
container and look for ones that might refer to named volumes)
for the new, explicit named volume lists stored per-container.
It also deprecates force-removing volumes that are in use. I
don't know how we want to handle this yet, but leaving containers
that depend on a volume that no longer exists is definitely not
correct.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When generating headers for search, we unconditionally
access element 0 of an array, and I saw this segfault in our CI.
There's no reason we have to do this, we're just going through it
to get field names with reflect, so just make a new copy of the
struct in question.
Also, move this code, which is only for CLI display, into
cmd/podman from libpod/image.
Signed-off-by: Matthew Heon <mheon@redhat.com>
simplify the rootless implementation to use a single user namespace
for all the running containers.
This makes the rootless implementation behave more like root Podman,
where each container is created in the host environment.
There are multiple advantages to it:
1) much simpler implementation, as there is only one namespace to join.
2) we can join namespaces owned by different containers.
3) commands like ps won't be limited in what containers they can
access, as previously we either had access to the storage from a new
namespace or access to /proc when running from the host.
4) rootless varlink works.
5) there are only two ways to enter a namespace: either create a new
one if no containers are running, or join the existing one from any
container.
Containers created by older Podman versions must be restarted.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
in the few places where we care about skipping the storage
initialization, we can simply use the process effective UID, instead
of relying on a global boolean flag.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Docker's upstream name validation regex has two major differences
from ours that we pick up in this PR.
The first requires that the first character of a name is a letter
or number, not a special character.
The second allows periods in names.
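A minimal sketch of a validation along those lines; the exact pattern merged may differ slightly:
    import (
        "fmt"
        "regexp"
    )

    // nameRegex requires the first character to be a letter or number and
    // allows '_', '.' and '-' afterwards, matching Docker's upstream rules.
    var nameRegex = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_.-]*$`)

    func validateName(name string) error {
        if !nameRegex.MatchString(name) {
            return fmt.Errorf("name %q must start with a letter or number and may contain only letters, numbers, '_', '.' and '-'", name)
        }
        return nil
    }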
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We have an issue in the current implementation where the cleanup
process is not able to umount the storage as it is running in a
separate namespace.
Simplify the implementation for user namespaces by not using an
intermediate mount namespace. To do this, we need to relax the
permissions on the parent directories and allow browsing
them. Containers that are running without a user namespace will still
maintain mode 0700 on their directory.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Currently in rootless containers, we end up not using the blob cache.
We also don't store the blob cache based on the user's specified graph
storage. This change will cause the cache directory to be stored with
the rest of the container images.
While doing this patch, I found that we had duplicated GetSystemContext in
two places in libpod. I cleaned this up.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We accidentally patched this out trying to enable ns:/path/to/ns
This should restore the ability to configure nondefault CNI
networks with Podman, by ensuring that they request creation of a
network namespace.
Completely remove the WithNetNS() call when we do use an explicit
namespace from a path. We use that call to indicate that a netns
is going to be created - there should not be any question about
whether it actually does.
Fixes #2795
Signed-off-by: Matthew Heon <mheon@redhat.com>
from _LIBPOD to _CONTAINERS. The same change was done in buildah
unshare.
This is necessary for podman to detect we are running in a rootless
environment and work properly from a "buildah unshare" session.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Before, there were SELinux denials when a volume was bind-mounted by podman play kube.
Partially fix this by setting the default private label for mounts created by play kube (with DirectoryOrCreate).
For volumes mounted as Directory, the user will have to set their own SELinux permissions on the mount point
also remove left over debugging print statement
Signed-off-by: Peter Hunt <pehunt@redhat.com>
We have a very high performance JSON library that doesn't need to
perform code generation. Let's use it instead of our questionably
performant, reflection-dependent deep copy library.
Most changes are because some functions can now return errors.
Also converts cmd/podman to use jsoniter, instead of pkg/json,
for increased performance.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Specifically, we want to be able to specify whether resolv.conf
and /etc/hosts will be created and bind-mounted into the
container.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We have a new container event, 'Exited', which has been renamed to
'died'.
also removed the stream bool from the varlink endpoint for events
because it can be determined by the varlink more value.
Signed-off-by: baude <bbaude@redhat.com>
podman will now start a transient service and timer for healthchecks.
this handles the tracking of the timing for health checks.
added the 'started' status which represents the time that a container is
in its start-period.
the systemd timing can be disabled with an env variable of
DISABLE_HC_SYSTEMD="true".
added filter for ps where --filter health=[starting, healthy, unhealthy]
can now be used.
Signed-off-by: baude <bbaude@redhat.com>
when --uidmap is used, the user won't be able to access
/var/lib/containers/storage/volumes. Use the intermediate mount
namespace, that is accessible to root in the container, for mounting
the volumes inside the container.
Closes: https://github.com/containers/libpod/issues/2713
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This can help scripts provide a more meaningful message when coming
across issues [1] which require the container to be re-created.
[1] eg., https://github.com/containers/libpod/issues/2673
Signed-off-by: Debarshi Ray <rishi@fedoraproject.org>
when running podman logs on a created container (which has no logs),
podman should return gracefully (like docker) with a 0 return code. if
multiple containers are provided and one is only in the created state
(and no follow is used), we still display the logs for the other ids.
fixes issue #2677
Signed-off-by: baude <bbaude@redhat.com>
split the generation of the default storage.conf from writing it (when
it does not exist) for a rootless user.
This is necessary because during the startup we might be overriding
the default configuration through --storage-driver and --storage-opt,
that would not be written down to the storage.conf file we generated.
Closes: https://github.com/containers/libpod/issues/2659
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We cannot use the RunDir for writing the conmon.pid file as we might
not be able to read it before we join a namespace, since it is owned
by the root in the container which can be a different uid when using
uidmap. To avoid completely the issue, we will just write it to the
static dir which is always readable by the unprivileged user.
Closes: https://github.com/containers/libpod/issues/2673
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
add the ability for users to specify more than one container at a time
while using podman logs. If more than one container is being displayed,
podman will also prepend a shortened container id of the container on
the log line.
also, enabled the podman-remote logs command during the refactoring of
the above ability.
fixes issue #2219
Signed-off-by: baude <bbaude@redhat.com>
SCTP is already present and enabled in the CNI plugins, so all we
need to do to add support is not error on attempting to bind
ports to reserve them.
I investigated adding this binding for SCTP, but support for SCTP
in Go is honestly a mess - there's no widely-supported library
for doing it that will do what we need.
For now, warn that port reservation for SCTP is not supported and
forward the ports.
Signed-off-by: Matthew Heon <mheon@redhat.com>
When creating a new image volume to be mounted into a container, we need to
make sure the new volume matches the ownership and permissions of the path
that it will be mounted on.
For example, if a volume inside of a container image is owned by the database
UID, we want the volume that will be mounted onto that path to be owned by the
database UID as well.
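A minimal sketch of matching the new volume to its mount point, using only the standard library; the function name is illustrative:
    import (
        "fmt"
        "os"
        "syscall"
    )

    // copyOwnership gives a freshly created volume directory the same owner
    // and permissions as the path it will be mounted over in the image.
    func copyOwnership(imagePath, volumePath string) error {
        st, err := os.Stat(imagePath)
        if err != nil {
            return err
        }
        sys, ok := st.Sys().(*syscall.Stat_t)
        if !ok {
            return fmt.Errorf("unable to read ownership of %s", imagePath)
        }
        if err := os.Chown(volumePath, int(sys.Uid), int(sys.Gid)); err != nil {
            return err
        }
        return os.Chmod(volumePath, st.Mode().Perm())
    }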
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
if the settings are available in the user config file, do not override
them with the global configuration.
Closes: https://github.com/containers/libpod/issues/2614
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
we had two functions, NewRuntimeFromConfig and NewRuntime, that differed
only in the config file they use.
Move common logic to newRuntimeFromConfig and let it look up the
configuration file to use when one is not specified.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Older versions of slirp4netns do not have the --disable-host-loopback
flag.
Remove the check once we are sure the updated version is available
everywhere.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
to protect against regressions, we need to add a few gating tasks:
* build with varlink
* build podman-remote
* build podman-remote-darwin
we already have a gating task for building without varlink
Signed-off-by: baude <bbaude@redhat.com>
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Vendors in fsouza/docker-client, docker/docker and
a few more related. Of particular note, changes to the TweakCapabilities()
function from docker/docker along with the parse.IDMappingOptions() function
from Buildah. Please pay particular attention to the related changes in
the call from libpod to those functions during the review.
Passes baseline tests.
integration of healthcheck into create and run as well as inspect.
healthcheck enhancements are as follows:
* add the following options to create|run so that non-docker images can
define healthchecks at the container level.
* --healthcheck-command
* --healthcheck-retries
* --healthcheck-interval
* --healthcheck-start-period
* podman create|run --healthcheck-command=none disables healthcheck as
described by an image.
* the healthcheck itself and the healthcheck "history" can now be
observed in podman inspect
* added the wiring for healthcheck history which logs the health history
of the container, the current failed streak attempts, and log entries
for the last five attempts which themselves have start and stop times,
result, and a 500 character truncated (if needed) log of stderr/stdout.
The timings themselves are not implemented in this PR but will be in
future enablement (i.e. next).
Signed-off-by: baude <bbaude@redhat.com>
In libpod, we now log major events that occur. These events
can be displayed using the `podman events` command. Each
event contains (sketched below):
* Type (container, image, volume, pod...)
* Status (create, rm, stop, kill, ....)
* Timestamp in RFC3339Nano format
* Name (if applicable)
* Image (if applicable)
The format of the event and the varlink endpoint are to not
be considered stable until cockpit has done its enablement.
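A rough sketch of the event shape; the field names and types here are illustrative, not the exact libpod definitions:
    import "time"

    type Event struct {
        Type   string    // container, image, volume, pod, ...
        Status string    // create, rm, stop, kill, ...
        Time   time.Time // serialized as RFC3339Nano
        Name   string    // if applicable
        Image  string    // if applicable
    }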
Signed-off-by: baude <bbaude@redhat.com>
When mounting a tmpfs, runc attempts to make the directory it
will be mounted at. Unfortunately, Golang's os.MkdirAll deals
very poorly with symlinks being part of the path. I looked into
fixing this in runc, but it's honestly much easier to just ensure
we don't trigger the issue on our end.
Fixes BZ #1686610
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When importing an image from a file somewhere, we already know how to
download data from a URL to a file, so do the same for stdin, in case
it's unexpectedly large.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
allow to configure the path to the network-cmd binary, either via an
option flag --network-cmd-path or through the libpod.conf
configuration file.
This is currently used to customize the path to the slirp4netns
binary.
Closes: https://github.com/containers/libpod/issues/2506
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When sourcing from an image, we need to grab its entrypoint first
and then add command on to mimic the behavior of Docker.
The default Kube pause image just sets ENTRYPOINT, and not CMD,
so nothing changes there, but this ought to fix other images
(for example, nginx would try to run the pause command instead of
an nginx process without this patch)
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The logic of deleting and recreating /etc/hosts and
/etc/resolv.conf only makes sense when we're the one that creates
the files - when we don't, it just removes them, and there's
nothing left to use.
Fixes #2602
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
I was seeing some segfaults where image config was being passed
as nil, causing a nil dereference segfault. Fix the apparent
cause and add some safety fencing to try and ensure it doesn't
happen again.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Trying to remove circular dependencies between libpod and buildah.
First step to move pkg content from libpod to buildah.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If the pod infra container is overridden, we want to run the entry point of the image, instead of the default infra command. This allows users to override the infra-image with greater ease.
Also use process environment variables from image
Signed-off-by: Peter Hunt <pehunt@redhat.com>
we use "podman info" to reconfigure the runtime after a reboot, but we
don't propagate the error message back if something goes wrong.
Closes: https://github.com/containers/libpod/issues/2584
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
In the previous version I forgot to add the fds to preserve into
AdditionalFiles. It doesn't make a difference as the files were still
preserved, but this seems to be the correct way of making it
explicit.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When looking up a container or pod from user input, we handle
collisions between names and IDs differently than Docker at
present.
In Docker, when there is a container with an ID starting with
"c1" and a container named "c1", commands on "c1" will always act
on the container named "c1". For the same scenario in podman, we
throw an error about name collision.
Change Podman to follow Docker, by returning the named container
or pod instead of erroring.
This should also have a positive effect on performance in the
lookup-by-full-name case, which no longer needs to fully traverse
the list of all pods or containers.
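A minimal, self-contained sketch of the lookup order described above; the types and helpers are illustrative, not libpod's actual API:
    import (
        "fmt"
        "strings"
    )

    type container struct{ id, name string }

    func lookup(byName, byID map[string]*container, idOrName string) (*container, error) {
        // An exact name match always wins, matching Docker's behaviour.
        if ctr, ok := byName[idOrName]; ok {
            return ctr, nil
        }
        // Otherwise treat the input as an ID prefix.
        var match *container
        for id, ctr := range byID {
            if strings.HasPrefix(id, idOrName) {
                if match != nil {
                    return nil, fmt.Errorf("more than one container matches %q", idOrName)
                }
                match = ctr
            }
        }
        if match == nil {
            return nil, fmt.Errorf("no container with name or ID %q", idOrName)
        }
        return match, nil
    }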
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Before, a pod create would fail if it was set to share no namespaces, but had an infra container. While inefficient (you add a container for no reason), it shouldn't be a fatal failure. Fix this by only failing if the pod was set to share namespaces, but had no infra container, and writing a warning if vice versa.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
We're going to feed this into Go's BCP 47 language parser. Language
tags have the form [1]:
language
["-" script]
["-" region]
*("-" variant)
*("-" extension)
["-" privateuse]
and locales have the form [2]:
[language[_territory][.codeset][@modifier]]
The modifier is useful for collation, but Go's language-based API
[3] does not provide a way for us to supply it. This code converts
our locale to a BCP 47 language by stripping the dot and later and
replacing the first underscore, if any, with a hyphen. This will
avoid errors like [4]:
WARN[0000] failed to parse language "en_US.UTF-8": language: tag is not well-formed
when feeding language.Parse(...).
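A minimal sketch of that conversion (the function name is illustrative):
    import "strings"

    // localeToLanguageTag turns a POSIX locale like "en_US.UTF-8" into a
    // BCP 47-style tag like "en-US" by stripping the codeset/modifier and
    // replacing the first underscore with a hyphen.
    func localeToLanguageTag(locale string) string {
        if i := strings.IndexByte(locale, '.'); i != -1 {
            locale = locale[:i]
        }
        return strings.Replace(locale, "_", "-", 1)
    }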
[1]: https://tools.ietf.org/html/bcp47#section-2.1
[2]: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html#tag_08_02
[3]: https://github.com/golang/go/issues/25340
[4]: https://github.com/containers/libpod/issues/2494
Signed-off-by: W. Trevor King <wking@tremily.us>
Add the ability to manually run a container's healthcheck command.
This is only the first phase of implementing the healthcheck.
Subsequent pull requests will deal with the exposing the results and
history of healthchecks as well as the scheduling.
Signed-off-by: baude <bbaude@redhat.com>
Before, any container with a netNS dependency simply used its dependency container's hosts file, and didn't abide its configuration (mainly --add-host). Fix this by always appending to the dependency container's hosts file, creating one if necessary.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
When the configuration file is specified, be sure to fill rootless
compatible values in the default configuration.
Closes: https://github.com/containers/libpod/issues/2510
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
To be able to use OCI runtimes which do not implement checkpoint/restore
this adds a check to the checkpoint code path and the checkpoint/restore
tests to see if the runtime knows about the checkpoint subcommand. If the used
OCI runtime does not implement checkpoint/restore the tests are skipped
and the actual 'podman container checkpoint' returns an error.
Signed-off-by: Adrian Reber <areber@redhat.com>
Allow passing in of AttachStreams to libpod.Exec() for usage in podman healthcheck. An API caller can now specify different streams for stdout, stderr and stdin, or no streams at all.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
No reason to do it in util/ anymore. It's always going to be a
subdirectory of c/storage graph root by default, so we can just
set it after the return.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Instead of passing in defaults via WithStorageConfig after
computing them in cmd/podman/libpodruntime, do all defaults in
libpod itself.
This can alleviate ordering issues which caused settings in the
libpod config (most notably, volume path) to be ignored.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When removing volumes with rm --volumes we want to only remove
volumes that were created with the container. Volumes created
separately via 'podman volume create' should not be removed.
Also ensure that --rm implies volumes will be removed.
Fixes #2441
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
If this doesn't match, we end up not being able to access named
volumes mounted into containers, which is bad. Use the same
validation that we use for other critical paths to ensure this
one also matches.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We want named volumes to be created in a subdirectory of the
c/storage graph root, the same as the libpod root directory is
now. As such, we need to adjust its location when the graph root
changes location.
Also, make a change to how we set the default. There's no need to
explicitly set it every time we initialize via an option - that
might conflict with WithStorageConfig setting it based on graph
root changes. Instead, just initialize it in the default config
like our other settings.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
CRIU creates a log file during checkpointing in .../userdata/dump.log.
The problem with this file is that CRIU injects parasite code into
the container processes and this parasite code also writes to the same
log file. At this point a process from the inside of the container is
trying to access the log file on the outside of the container and
SELinux prohibits this. To enable writing to the log file from the
injected parasite code, this commit creates an empty log file and labels
the log file with c.MountLabel(). CRIU uses existing files when writing
its logs, so the log file label persists and now, with the correct label,
SELinux no longer blocks access to the log file.
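A minimal sketch of the approach, assuming the opencontainers/selinux label package; the exact helper in the commit may differ:
    import (
        "os"

        "github.com/opencontainers/selinux/go-selinux/label"
    )

    // labelLogFile pre-creates the CRIU log file and gives it the
    // container's mount label so the injected parasite code may write to it.
    func labelLogFile(logPath, mountLabel string) error {
        f, err := os.OpenFile(logPath, os.O_CREATE|os.O_WRONLY, 0o600)
        if err != nil {
            return err
        }
        f.Close()
        // CRIU reuses an existing file, so the label set here persists.
        return label.Relabel(logPath, mountLabel, false)
    }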
Signed-off-by: Adrian Reber <areber@redhat.com>
We should just bind mount the original container's /etc/resolv.conf and /etc/hosts
into the new container. Changes in resolv.conf and hosts should be seen
by all containers. This matches Docker behaviour.
In order to make this work, these files need to have a shared
SELinux label.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If user specifies network namespace and the /etc/netns/XXX/resolv.conf
exists, we should use this rather than /etc/resolv.conf
Also fail cleaner if the user specifies an invalid Network Namespace.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Vendors in Buildah 1.7 into Podman.
Also the latest imagebuilder and changes for
`build --target`
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
if --runtime is specified, then it has higher priority than the
runtime_path option, which was added for backward compatibility.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
enable the remote client to be able to inspect a pod. also, bonus of
enabling the podman pod exists command which returns a 0 or 1 depending
on whether the given pod exists.
Signed-off-by: baude <bbaude@redhat.com>
The original intent behind the requirement was to ensure that, if
two SHM lock structs were open at the same time, we should not
make such a runtime available to the user, and should clean it up
instead.
It turns out that we don't even need to open a second SHM lock
struct - if we get an error mapping the first one due to a lock
count mismatch, we can just delete it, and it cleans itself up
when it errors. So there's no reason not to return a valid
runtime.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When we're renumbering locks, we're destroying all existing
allocations anyways, so destroying the old lock struct is not a
particularly big deal. Existing long-lived libpod instances will
continue to use the old locks, but that will be solved in a
followon.
Also, solve an issue with returning error values in the C code.
There were a few places where we returned errno where it was not
set, so make them return actual error codes.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We can't do renumbering after init - we need to open a
potentially invalid locks file (too many/too few locks), and then
potentially delete the old locks and make new ones.
We need to be in init to bypass the checks that would otherwise
make this impossible.
This leaves us with two choices: make RenumberLocks a separate
entrypoint from NewRuntime, duplicating a lot of configuration
load code (we need to know where the locks live, how many there
are, etc) - or modify NewRuntime to allow renumbering during it.
Previous experience says the first is not really a viable option
and produces massive code bloat, so the second it is.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
I was looking into why we have locks in volumes, and I'm fairly
convinced they're unnecessary.
We don't have a state whose accesses we need to guard with locks
and syncs. The only real purpose for the lock was to prevent
concurrent removal of the same volume.
Looking at the code, concurrent removal ought to be fine with a
bit of reordering - one or the other might fail, but we will
successfully evict the volume from the state.
Also, remove the 'prune' bool from RemoveVolume. None of our
other API functions accept it, and it only served to toggle off
more verbose error messages.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Renumber is a way of renumbering container locks after the number
of locks available has changed.
For now, renumber only works with containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Refactor the image-search logic from cmd/podman/search.go to
libpod/image/search.go and update podman-search and the Varlink API to
use it.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Before, a container being run or started in a pod always restarted the infra container. This was because we didn't take running dependencies into account. Fix this by filtering for dependencies in the running state.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
Prior, a pod would have to be started immediately when created, leading to confusion about what a pod state should be immediately after creation. The problem was podman run --pod ... would error out if the infra container wasn't started (as it is a dependency). Fix this by allowing for recursive start, where each of the container's dependencies are started prior to the new container. This is only applied to the case where a new container is attached to a pod.
Also rework container_api Start, StartAndAttach, and Init functions, as there was some duplicated code, which made addressing the problem easier to fix.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
Add backward compatibility for `runtime_path` that was used by older
versions of Podman.
The issue was introduced with: 650cf122e1
If `runtime_path` is specified, it overrides any other configuration
and a warning is printed.
It should be considered deprecated and will be removed in the future.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Cockpit wants to be able to search images on systems without
tlsverify turned on.
tlsverify should be an optional parameter, if not set then we default
to the system defaults defined in /etc/containers/registries.conf.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fix builtin volumes to work with podman volume
Currently builtin volumes are not recorded in podman volumes when
they are created automatically. This patch fixes this.
Remove container volumes when requested
Currently the --volume option on podman remove does nothing.
This will implement the changes needed to remove the volumes
if the user requests it.
When removing a volume make sure that no container uses the volume.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
allow users to remotely prune volumes.
this is the last volume command for remote enablement. as such,
the volume commands are being folded back into main because they
are supported for both local and remote clients.
also, enable all volume tests that do not use containers
as containers are not enabled for the remote client yet.
Signed-off-by: baude <bbaude@redhat.com>
add the ability to build images using files local to the remote-client
but over a varlink interface to a "remote" server.
Signed-off-by: baude <bbaude@redhat.com>
in cases where a container is part of a network namespace, we should
show the network namespace's ports when dealing with ports. this
impacts ps, kube, and port.
fixes: #846
Signed-off-by: baude <bbaude@redhat.com>
When parsing a string name for repo and tag (for images output), we
should be using ParseNormalizedNamed and reference.Canonical to
get the proper output.
Resolves: #2175
Signed-off-by: baude <bbaude@redhat.com>
enable podman-remote push so that users can push images from a
remote client.
change in push API to deal with the need to see output over the
varlink connection.
Signed-off-by: baude <bbaude@redhat.com>
When cleaning up containers, we presently remove the exit file
created by Conmon, to ensure that if we restart the container, we
won't have conflicts when Conmon tries writing a new exit file.
Unfortunately, we need to retain that exit file (at least until
we get a workable events system), so we can read it in cases
where the container has been removed before 'podman run' can read
its exit code.
So instead of removing it, rename it, so there's no conflict with
Conmon, and we can still read it later.
Fixes: #1640
Signed-off-by: Matthew Heon <mheon@redhat.com>
At present, when manually detaching from an attached container
(using the detach hotkeys, default C-p C-q), Podman will still
wait for the container to exit to obtain its exit code (so we can
set Podman's exit code to match). This is correct in the case
where attach finished because the container exited, but very
wrong for the manual detach case.
As a result of this, we can no longer guarantee that the cleanup
and --rm functions will fire at the end of 'podman run' - we may
be exiting before we get that far. Cleanup is easy enough - we
swap to unconditionally using the cleanup processes we've used
for detached and rootless containers all along. To duplicate --rm
we need to also teach 'podman cleanup' to optionally remove
containers instead of cleaning them up.
(There is an argument for just using 'podman rm' instead of
'podman cleanup --rm', but cleanup does have different semantics
given that we only ever expect it to run when the container has
just exited. I think it might be useful to keep the two separate
for things like 'podman events'...)
Signed-off-by: Matthew Heon <mheon@redhat.com>
Image more clearly describes what the type represents.
Also, only include the image name in the `ImageNotFound` error returned
by `GetImage()`, not the full error message.
Signed-off-by: Lars Karlitski <lars@karlitski.net>
This is more consistent and easier to parse than the format that
golang's time.String() returns.
Fixes #2260
Signed-off-by: Lars Karlitski <lars@karlitski.net>
when checking for a container's mountpoint, you must lock and sync
the container or the result may be "".
Fixes: #2304
Signed-off-by: baude <bbaude@redhat.com>
Currently we can get into a state where a container exists in
storage but does not exist in libpod. If the user forces a
removal of this container, then we should remove it from storage
even if the container is owned by another tool.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We intend to migrate to the cobra cli from urfave/cli because the
project is better maintained. There are also some technical reasons
which extend into our remote client work.
Signed-off-by: baude <bbaude@redhat.com>
Instead of unconditionally resetting to ContainerStateConfigured
after a reboot, allow containers in the Exited state to remain
there, preserving their exit code in podman ps after a reboot.
This does not affect the ability to use and restart containers
after a reboot, as the Exited state can be used (mostly)
interchangeably with Configured for starting and managing
containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We are missing the equivalents of the docker system commands
This patch set adds `podman system prune`
and `podman system info`
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
addition of import and export for the podman-remote client. This includes
the ability to send and receive files between the remote-client and the
"podman" host using an upgraded varlink connection.
Signed-off-by: baude <bbaude@redhat.com>
if some paths are overridden in the global configuration file, be sure
that rootless podman honors them.
Closes: https://github.com/containers/libpod/issues/2174
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
To get the more-robust handling from 0f6535cf (libpod/image: Use
ParseNormalizedNamed in RepoDigests, 2019-01-08, #2106) here too.
Signed-off-by: W. Trevor King <wking@tremily.us>
The toolbox project would benefit from a few changes to more closely
resemble the original atomic cli project. Changes made are:
* only pull image for container runlabel if the label exists in the image
* if a container image does not have the desired label, exit with non-zero
Signed-off-by: baude <bbaude@redhat.com>
we now, by default, only prune dangling images. if --all is passed, we
prune dangling images AND images that do not have an associated container.
also went ahead and enabled the podman-remote image prune side of things.
Fixes: #2192
Signed-off-by: baude <bbaude@redhat.com>
initial enablement of podman-remote version. includes adding an APIVersion const
that will allow us to check compatibility between host/client when connections
are made.
also added client related information to podman info.
Signed-off-by: baude <bbaude@redhat.com>
add support for ports redirection from the host.
It needs slirp4netns v0.3.0-alpha.1.
Closes: https://github.com/containers/libpod/issues/2081
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This will now verify labels passed in by the user.
Will also prevent users from accidently relabeling their homedir.
podman run -ti -v ~/home/user:Z fedora sh
Is not a good idea.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We try to keep c.config immutable, but Go doesn't really agree
with me that things other than strings and ints can be immutable,
so occasionally things like this slip through.
When unmarshalling the OCI spec from disk, do it into a separate
struct, to ensure we don't make lasting modifications to the
spec in the Container struct (which could affect container
restart).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When waiting for a container, there is a long interval between
status checks - plenty long enough for the container in question
to start, then subsequently be cleaned up and returned to Created
state to be restarted. As such, we can't wait on container state
to go to Stopped or Exited - anything that is not Running or
Paused indicates the container is dead.
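A tiny self-contained sketch of the condition; the state names mirror libpod's, but the type here is illustrative:
    type ctrState int

    const (
        stateConfigured ctrState = iota
        stateCreated
        stateRunning
        statePaused
        stateStopped
        stateExited
    )

    // waitDone reports whether waiting can stop: anything that is not
    // Running or Paused means the container is no longer alive, even if it
    // was cleaned up back to Created.
    func waitDone(s ctrState) bool {
        return s != stateRunning && s != statePaused
    }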
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We clean up the code by eliminating stuttering references when we embed
the runtime struct into localRuntime. Makes for less change in the future
as well.
++ jhonce
Signed-off-by: baude <bbaude@redhat.com>
Add the ability to run the integration (ginkgo) suite using
the remote client.
Only the images_test.go file is run right now; all the rest are
isolated with a // +build !remotelinux. As more content is
developed for the remote client, we can unblock the files and
just block single tests as needed.
Signed-off-by: baude <bbaude@redhat.com>
we can define multiple OCI runtimes that can be chosen with
--runtime.
in libpod.conf it is possible to specify them with:
[runtimes]
foo = [
"/usr/bin/foo",
"/usr/sbin/foo",
]
bar = [
"/usr/bin/bar",
"/usr/sbin/bar",
]
If the argument to --runtime is an absolute path then it is used
directly without any lookup in the configuration.
Closes: https://github.com/containers/libpod/issues/1750
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This deprecates the libpod.conf variable of `runtime_path=`, and now has
`runtimes=`, like a map for naming the runtime, preparing for a
`--runtime` flag to `podman run` (i.e. runc, kata, etc.)
Reference: https://github.com/containers/libpod/issues/1750
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
Finally, these members no longer have any users.
Future users should usually call referenceWithRegistry / normalizedReference,
and work with the returned value, instead of reintroducing these variables.
Similarly, direct uses of unnormalizedRef should be rare (only for cases
where the registry and/or path truly does not matter).
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Image.MatchRepoTag and findImageInRepoTags do some kind of
heuristic search; the motivation and design of both, and how they
should deal with digests, is not obvious to me.
Instead of figuring that out now, just factor it out into a
scary-named method and leave the "tag" value (with its "latest"/"none"
value) alone.
Similarly, the .registry and .name fields should typically not be used;
users should use either hasRegistry or normalized reference types;
so, isolate the difficult-to-understand search code, and computation
of these values, into this new search-specific helper.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
imageruntime.getImage, through ParseStoreReference, already uses
reference.TagNameOnly on the input, so this extra lookup is completely
redundant to the lookup that has already happened.
Should not change behavior, apart from speeding up the code a bit.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead of returning a string, return a native value and convert it
into the string in the caller, to make it that small bit more
common to use reference types.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Again, rely on the official API, instead of the surprising "suspiciousTagValueForSearch"
value (set to :latest on untagged images, and :none on digested ones!)
CHANGES BEHAVIOR, but the previous output of normalization of digested values was
not even syntactically valid, so this can't really be worse.
Still, maybe we should refuse to tag with digested references in the first place.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is another step to using reference values instead of strings here.
CHANGES BEHAVIOR: docker.io/busybox is now normalized to docker.io/library/busybox.
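For illustration, the c/image docker/reference API behaves like this (results shown in comments):
// imports: fmt, "github.com/containers/image/docker/reference"
named, err := reference.ParseNormalizedNamed("docker.io/busybox")
if err != nil {
    panic(err)
}
fmt.Println(named.String())                        // docker.io/library/busybox
fmt.Println(reference.TagNameOnly(named).String()) // docker.io/library/busybox:latest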
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This will be used in normalizeTag to work with references instead of strings.
Not used anywhere yet, should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... instead of open-coding something similar. Eventually
we will use the reference type further in here.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This shortens the code a bit, but most importantly ensures that all pulls from
docker.Transport are processed exactly the same way, and there is only a single
store.ParseStoreReference in the pull code.
It's a bit wasteful to call decompose() in getPullRefPair just after
pullGoalFromPossiblyUnqualifiedName has qualified the name, but on balance
only having exactly one code path seems worth it. Alternatively we could
split getPullRefPairToQualifiedDestination from getPullRefPair.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
CHANGES BEHAVIOR.
This bypasses .assemble, and preserves the original
lack of tag / original digest instead of adding :latest/:none
(still subject to ParseStoreReference normalization).
Using the original digest seems clearly correct; dropping the :latest
suffix from .image strings, and adding /library to docker.io/shortname,
only affects user-visible input; later uses of the return value of
pullImageFrom... use ParseStoreReference, which calls reference.ParseNormalizedNamed
and reference.TagNameOnly, so the image name should be processed
the same way whether it contains a tag, or library/, or not.
This also allows us to drop the problematic hasShaInInputName heuristic/condition/helper.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
CHANGES BEHAVIOR.
This bypasses .assemble, and preserves the original
lack of tag / original digest instead of adding :latest/:none
(still subject to ParseStoreReference normalization).
Using the original digest seems clearly correct; dropping the :latest
suffix from .image strings only affects user-visible input; later
uses of the return value of pullImageFrom... use ParseStoreReference,
which calls reference.TagNameOnly, so the image name should be processed
the same way whether it contains a tag or not.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is the primary goal of decompose()+assemble(), to support
qualifying an image name.
Does not have any users yet, so does not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
CHANGES BEHAVIOR.
If the name is qualified, instead of decomposing it into components and
re-assembling, just use the input name unmodified:
- For name:tag values, .assemble() just recreates the input.
- For untagged values, .assemble() adds ":latest"; we keep
the input as is, but both docker.ParseReference and storage.Transport.ParseStoreReference
use reference.TagNameOnly() already.
- For digested references, .assemble() adds ":none", but
the code was already bypassing .assemble() on that path
for the source reference. For the destination,
this replaces a :none destination with the @digest reference,
as expected.
Note that while decompose() has already parsed the input,
it (intentionally) bypassed the docker.io/library normalization;
therefore we parse the input again (via docker.ParseReference) to ensure
that the reference is normalized.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Both imageParts and this function implicitly assume docker.Transport
throughout, so instead of pretending to be flexible about DefaultTransport,
just hard-code docker.ParseReference directly.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It is only ever set to DefaultTransport, and all of the code
is docker/reference-specific anyway, so there's no point in
making this a variable.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
After inlining assembleWithTransport, we have two branches with
the same prepending of decomposedImage.transport; move that out of
the branches.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
imageParts.transport is a constant, and the design of imageParts
is not transport-independent in any sense; we will want to eliminate
the transport member entirely.
As a first step, drop assembleWithTransport and inline an exact
equivalent into all callers.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We will eventually want to eliminate most members of imageParts
in favor of using the c/image/docker/reference API directly.
For now, just record the reference.Named value, and we will
replace uses of the other members before removing them.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Now that DecomposeString has no users, make the type private again.
Any new users of it should come with a rationale - and new users
of the "none"/"latest" handling of untagged/digested names that is
currently implemented should have an exceptionally unusual rationale.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
- It is used to fill Repository(misnamed)/Tag members which have no users;
so it's completely unclear why this is useful.
- Given the mishandling of tags by imageParts.tag, at the very least
all new code should primarily use reference.Named (even if
after a decompose() to internally deal with unqualified names first),
introducing new uses of original decompose() just reintroduces known
trouble - so without any provided rationale, reverting seems
a reasonable default action.
- This drags in all of libpod/image into the "remote client" build,
which seems undesirable.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Read the $PODMAN_VARLINK_BRIDGE environment variable
(normally looks like: "ssh user@host varlink bridge")
Also respect $PODMAN_VARLINK_ADDRESS as an override,
if using a different podman socket than the default.
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
The json-iterator package will panic on attempting to use
MarshalIndent with a non-space indentation. This is sort of silly
but swapping from tabs to spaces is not a big issue for us, so
let's work around the silly panic.
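For example (the value being marshalled is illustrative):
// imports: jsoniter "github.com/json-iterator/go"
// jsoniter panics if the indent string contains anything other than spaces,
// so indent with two spaces rather than a tab.
out, err := jsoniter.ConfigCompatibleWithStandardLibrary.MarshalIndent(data, "", "  ")
if err != nil {
    return err
}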
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The jsoniter library does not require code generation, which is a
massive advantage over easyjson (it's also about the same in
performance). Begin moving over to it by removing the existing
easyjson code.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
If we are not able to make arbitrary changes to the RLIMIT_NOFILE when
lacking CAP_SYS_RESOURCE, don't fail but bump the limit to the maximum
allowed. In this way the same code path works with rootless mode.
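A minimal sketch of the fallback, assuming golang.org/x/sys/unix (the desired limit value is illustrative):
// imports: "golang.org/x/sys/unix"
desired := unix.Rlimit{Cur: 1048576, Max: 1048576}
if err := unix.Setrlimit(unix.RLIMIT_NOFILE, &desired); err != nil {
    // Without CAP_SYS_RESOURCE (e.g. rootless), raising the hard limit
    // fails; bump the soft limit to the existing hard limit instead.
    var current unix.Rlimit
    if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &current); err != nil {
        return err
    }
    current.Cur = current.Max
    if err := unix.Setrlimit(unix.RLIMIT_NOFILE, &current); err != nil {
        return err
    }
}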
Closes: https://github.com/containers/libpod/issues/2123
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Avoid generating
quay.io/openshift-release-dev/ocp-release@sha256@sha256:239... and
similar when the image name is already digest-based [1]. It's not
clear exactly how we get into this state, but as shown by the unit
tests, the new code handles this case correctly (while the previous
code does not).
[1]: https://github.com/containers/libpod/issues/2086
Signed-off-by: W. Trevor King <wking@tremily.us>
Closes: #2106
Approved by: rhatdan
Apply the default AppArmor profile at container initialization to cover
all possible code paths (i.e., podman-{start,run}) before executing the
runtime. This allows moving most of the logic into pkg/apparmor.
Also make the loading and application of the default AppArmor profile
version-independent by checking for the `libpod-default-` prefix and
overwriting the profile in the run-time spec if needed.
The initial run-time spec of the container differs a bit from the
one applied once the container has started, which results in
displaying a potentially outdated AppArmor profile when inspecting
a container. To fix that, load the container config from the file
system if present and use it to display the data.
Fixes: #2107
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The initial implementation to request the same IP address for a
container during a restore was based on environment variables
influencing CNI.
With this commit the IP address selection switches to Podman's internal
static IP API.
This commit does a comment change in libpod/container_easyjson.go to
avoid unnecessary re-generation of libpod/container_easyjson.go during
build as this fails in CI. The reason for this is that make sees that
libpod/container_easyjson.go needs to be re-created. The commit,
however, only changes a part of libpod/container.go which is marked as
'ffjson: skip'.
Signed-off-by: Adrian Reber <areber@redhat.com>
There's been a lot of discussion over in [1] about how to support the
NVIDIA folks and others who want to be able to create devices
(possibly after having loaded kernel modules) and bind userspace
libraries into the container. Currently that's happening in the
middle of runc's create-time mount handling before the container
pivots to its new root directory with runc's incorrectly-timed
prestart hook trigger [2]. With this commit, we extend hooks with a
'precreate' stage to allow trusted parties to manipulate the config
JSON before calling the runtime's 'create'.
I'm recycling the existing Hook schema from pkg/hooks for this,
because we'll want Timeout for reliability and When to avoid the
expense of fork/exec when a given hook does not need to make config
changes [3].
[1]: https://github.com/opencontainers/runc/pull/1811
[2]: https://github.com/opencontainers/runc/issues/1710
[3]: https://github.com/containers/libpod/issues/1828#issuecomment-439888059
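For illustration, a hook definition in the existing pkg/hooks 1.0.0 schema could look like this (the path and annotation are invented for the example):
{
  "version": "1.0.0",
  "hook": {
    "path": "/usr/libexec/oci/hooks.d/nvidia-precreate-hook",
    "timeout": 10
  },
  "when": {
    "annotations": {"com.example.enable-gpu": "true"}
  },
  "stages": ["precreate"]
}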
Signed-off-by: W. Trevor King <wking@tremily.us>
During an earlier bugfix, we swapped all instances of
ContainerConfig to Config, which was meant to fix some data we
were returning from Inspect. This unfortunately also renamed a
libpod internal struct for container configs. Undo the rename
here.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Add the ability to build a remote client in golang that uses all
the same front-end cli code and output code. The initial limitations
here are that it can only be a local client while the bridge and
resolver code is being written for the golang varlink client.
Tests and docs will be added in subsequent PRs.
Signed-off-by: baude <bbaude@redhat.com>
Users have no idea what storage configuration file is used to setup
storage, so adding this to podman info, should make it easier to
discover.
This requires a revendor of containers/storage
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This patch makes the path unique to each UID.
Also cleans up some return code so that it reports the path it is trying to lock.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
when using container runlabel, if a --name is not provided, we must
deduce the container name from the base name of the image to maintain
parity with the atomic cli.
fixed a small bug where we split the cmd on " " rather than using Fields, which
could lead to extra spaces in the command output.
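For example (imports: fmt, strings):
cmd := "echo  hello   world" // note the repeated spaces
fmt.Printf("%q\n", strings.Split(cmd, " ")) // ["echo" "" "hello" "" "" "world"]
fmt.Printf("%q\n", strings.Fields(cmd))     // ["echo" "hello" "world"]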
Signed-off-by: baude <bbaude@redhat.com>
Don't initialize the lock manager until almost the end of libpod
init, so we can guarantee our tmp dir is properly set up and
exists. This wasn't an issue on systems that had previously run
Podman, but CI caught it.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This will hopefully help cases where libpod is initialized
multiple times on the same system (as on our CI tests).
We still run into potential issues where multiple Podmans with
multiple tmp paths try to run on the same system - we could end
up thrashing the locks.
I think we need a file locks driver for situations like that. We
can also see about storing paths in the SHM segment, to make sure
multiple libpod instances aren't using the same one.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Remove runtime's lockDir as it is no longer needed after the lock
rework.
Add a trivial in-memory lock manager for unit testing
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Golint wants to rename the struct. I think the name is fine. I
can disable golint. Golint will no longer complain about the
name.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Move SHM specific code into a subpackage. Within the main locks
package, move the manager to be linux-only and add a non-Linux
unsupported build file.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Check the value of semaphores when incrementing to ensure we never go
beyond 1, preserving mutex invariants.
Also, add cleanup code to the lock tests, ensuring that we never
leave the locks in a bad state after a test. We aren't destroying
and recreating the SHM every time, so we have to be careful not
to leak state between test runs.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Add support for executing an init binary as PID 1 in a container to
forward signals and reap processes. When the `--init` flag is set for
podman-create or podman-run, the init binary is bind-mounted to
`/dev/init` in the container and "/dev/init --" is prepended to the
container's command.
The default base path of the container-init binary is `/usr/libexec/podman`
while the default binary is catatonit [1]. This default can be changed
permanently via the `init_path` field in the `libpod.conf` configuration
file (which is recommended for packaging) or temporarily via the
`--init-path` flag of podman-create and podman-run.
[1] https://github.com/openSUSE/catatonit
Fixes: #1670
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Overriding storage.conf is not intuitive behavior, so pop up an
error message when it happens, so people know that bad things are
happening.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Runc does not produce helpful error messages when the container's
command is not found, so print the command ourselves.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Casting integers to strings is definitely not correct, so let the
standard library handle matters.
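In Go, converting an integer with string(i) produces the rune for that code point rather than its decimal form, so strconv is the right tool:
// imports: fmt, strconv
fmt.Println(string(rune(80))) // "P", not "80"
fmt.Println(strconv.Itoa(80)) // "80"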
Fixes #2066
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Currently if the user installs runc in an alternative path
podman run uses it but podman build does not.
This patch will pass the default oci runtime to be used by podman
down to the image builder.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We had two problems with /dev/shm. First, if you mounted the
container read-only, then /dev/shm was mounted read-only too.
This is a bug: a tmpfs directory should be read/write even within
a read-only container.
The second problem is that we were ignoring a user-mounted /dev/shm
from the host.
If the user specified
podman run -d -v /dev/shm:/dev/shm ...
we were dropping this mount and still using the internal mount.
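A minimal sketch of the second fix (the mount type, source and options follow the OCI runtime spec types but are illustrative, not the exact libpod values):
// imports: spec "github.com/opencontainers/runtime-spec/specs-go"
func addShmMount(s *spec.Spec) {
    // If the user already mounted something at /dev/shm, keep their mount.
    for _, m := range s.Mounts {
        if m.Destination == "/dev/shm" {
            return
        }
    }
    // Otherwise add the internal tmpfs mount, kept read/write even when
    // the container root filesystem is read-only.
    s.Mounts = append(s.Mounts, spec.Mount{
        Destination: "/dev/shm",
        Type:        "tmpfs",
        Source:      "shm",
        Options:     []string{"rw", "nosuid", "nodev", "noexec", "size=65536k"},
    })
}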
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
podman play kube adds the ability for the user to recreate pods and containers
from a Kubernetes YAML file in libpod.
Signed-off-by: baude <bbaude@redhat.com>
Display the trust policy of the host system. The trust policy is stored in the /etc/containers/policy.json file and defines a scope of registries or repositories.
Signed-off-by: Qi Wang <qiwan@redhat.com>
This will allow container processes to write to the CRIU socket that gets injected
into the container.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When sharing a network namespace, containers should also share
resolv.conf and /etc/hosts in case a container process made
changes to either (for example, if I set up a VPN client in
container A and join container B to its network namespace, I
expect container B to use the DNS servers from A to ensure it can
see everything on the VPN).
Resolves: #1546
Signed-off-by: Matthew Heon <mheon@redhat.com>
Instead of forcing another user lookup when mounting image
volumes, just use the information we looked up when we started
generating the spec.
This may resolve #1817
Signed-off-by: Matthew Heon <mheon@redhat.com>
Using the default capabilities, we can determine which caps were
added and dropped. Now add them to the security context structure.
Signed-off-by: baude <bbaude@redhat.com>
If only one of the storage GraphRoot and RunRoot is specified, but the
other is not, c/storage will not fall back to its default for the missing
one and will throw an error instead. Ensure that in cases where this would
happen, we populate the missing field with the c/storage default ourselves.
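A minimal sketch of the idea (the paths shown are the usual c/storage defaults for root; the config struct is illustrative):
const (
    defaultGraphRoot = "/var/lib/containers/storage"
    defaultRunRoot   = "/var/run/containers/storage"
)
// Fill in whichever of the two the user left empty, so c/storage never
// sees a half-specified configuration.
if cfg.GraphRoot != "" && cfg.RunRoot == "" {
    cfg.RunRoot = defaultRunRoot
}
if cfg.RunRoot != "" && cfg.GraphRoot == "" {
    cfg.GraphRoot = defaultGraphRoot
}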
Signed-off-by: Matthew Heon <mheon@redhat.com>
like podman stop of containers, we should allow the user to specify
a timeout override when stopping pods; otherwise they have to wait for
the full timeout specified at pod/container creation.
Signed-off-by: baude <bbaude@redhat.com>
i.e. actually reflect the environment variable and/or rootless mode
instead of always using the default path.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
DockerRegistryOptions.DockerInsecureSkipTLSVerify, as a types.OptionalBool,
can now represent that value, so forceSecure is redundant.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
DockerRegistryOptions.DockerInsecureSkipTLSVerify, as a types.OptionalBool,
can now represent that value, so forceSecure is redundant.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Following SystemContext.DockerInsecureSkipTLSVerify, make the
DockerRegistryOptions one also an OptionalBool, and update callers.
Explicitly document that --tls-verify=true and --tls-verify unset
have different behavior in those commands where the behavior changed
(or where it hasn't changed but the documentation needed updating).
Also make the --tls-verify man page sections a tiny bit more consistent
throughout.
This is a minimal fix, without changing the existing "--tls-verify=true"
paths nor existing manual insecure registry lookups.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
containers inside pods need to make sure they get /etc/resolv.conf
and /etc/hosts bind mounted when network is expected
Signed-off-by: baude <bbaude@redhat.com>
Previously not needed as it only worked inside of Batch(), but
now that it can be called anywhere we need to add mutual
exclusion on its config changes.
Signed-off-by: Matthew Heon <mheon@redhat.com>
With the changes made recently to ensure Podman does not hit the
OCI runtime as often to sync state, we can find ourselves in a
situation where the runtime's state does not match ours.
Add a --sync flag to podman rm to ensure we can still remove
containers when this happens.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Add support for podman volume and its subcommands.
The commands supported are:
podman volume create
podman volume inspect
podman volume ls
podman volume rm
podman volume prune
This is a tool to manage volumes used by podman. For now it only handles
named volumes, but eventually it will handle all volumes used by podman.
Signed-off-by: umohnani8 <umohnani@redhat.com>
Allow user to prune unused/unnamed images (the layer images left over from
building) via podman rmi --prune.
Allow user to prune stopped/exited containers via podman rm --prune.
This should resolve #1910
Signed-off-by: baude <bbaude@redhat.com>
When an image config sets config.User [1] to a numeric group (like
1000:1000), but those values do not exist in the container's
/etc/group, libpod is currently breaking:
$ podman run --rm registry.svc.ci.openshift.org/ci-op-zvml7cd6/pipeline:installer --help
error creating temporary passwd file for container 228f6e9943d6f18b93c19644e9b619ec4d459a3e0eb31680e064eeedf6473678: unable to get gid 1000 from group file: no matching entries in group file
However, the OCI spec requires converters to copy numeric uid and gid
to the runtime config verbatim [2].
With this commit, I'm frontloading the "is groupspec an integer?"
check and only bothering with lookup.GetGroup when it was not.
I've also removed a few .Mounted checks, which are originally from
00d38cb3 (podman create/run need to load information from the image,
2017-12-18, #110). We don't need a mounted container filesystem to
translate integers. And when the lookup code needs to fall back to
the mounted root to translate names, it can handle erroring out
internally (and looking it over, it seems to do that already).
[1]: https://github.com/opencontainers/image-spec/blame/v1.0.1/config.md#L118-L123
[2]: https://github.com/opencontainers/image-spec/blame/v1.0.1/conversion.md#L70
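A minimal sketch of the frontloaded check (lookup.GetGroup is the helper named above; treat its exact signature, and the surrounding names, as illustrative):
// imports: strconv, "github.com/containers/libpod/pkg/lookup"
// Numeric gid: the OCI image spec says to copy it verbatim, whether or
// not it appears in the container's /etc/group.
if gid, err := strconv.ParseUint(groupspec, 10, 32); err == nil {
    return uint32(gid), nil
}
// Symbolic name: only now do we need lookup.GetGroup, which can fall back
// to the mounted container filesystem and error out on its own if needed.
group, err := lookup.GetGroup(mountPoint, groupspec)
if err != nil {
    return 0, err
}
return uint32(group.Gid), nil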
Signed-off-by: W. Trevor King <wking@tremily.us>
We don't use CNI to configure networks for rootless containers,
so no need to set it up. It may also cause issues with inotify,
so disabling it resolves some potential problems.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Instead of storing the runtime's file lock dir in the BoltDB
state, refer to the runtime inside the Bolt state instead, and
use the path stored in the runtime.
This is necessary since we moved DB initialization very far up in
runtime init, before the locks dir is properly initialized (and
it must happen before the locks dir can be created, as we use the
DB to retrieve the proper path for the locks dir now).
Signed-off-by: Matthew Heon <mheon@redhat.com>
Part of the motivation for 800eb863 (Hooks supports two directories,
process default and override, 2018-09-17, #1487) was [1]:
> We only use this for override. The reason this was caught is people
> are trying to get hooks to work with CoreOS. You are not allowed to
> write to /usr/share... on CoreOS, so they wanted podman to also look
> at /etc, where users and third parties can write.
But we'd also been disabling hooks completely for rootless users. And
even for root users, the override logic was tricky when folks actually
had content in both directories. For example, if you wanted to
disable a hook from the default directory, you'd have to add a no-op
hook to the override directory.
Also, the previous implementation failed to handle the case where
there were hooks defined in the override directory but the default
directory did not exist:
$ podman version
Version: 0.11.2-dev
Go Version: go1.10.3
Git Commit: "6df7409cb5a41c710164c42ed35e33b28f3f7214"
Built: Sun Dec 2 21:30:06 2018
OS/Arch: linux/amd64
$ ls -l /etc/containers/oci/hooks.d/test.json
-rw-r--r--. 1 root root 184 Dec 2 16:27 /etc/containers/oci/hooks.d/test.json
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:31:19-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:31:19-08:00" level=warning msg="failed to load hooks: {}%!(EXTRA *os.PathError=open /usr/share/containers/oci/hooks.d: no such file or directory)"
With this commit:
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /etc/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="added hook /etc/containers/oci/hooks.d/test.json"
time="2018-12-02T21:33:07-08:00" level=debug msg="hook test.json matched; adding to stages [prestart]"
time="2018-12-02T21:33:07-08:00" level=warning msg="implicit hook directories are deprecated; set --hooks-dir="/etc/containers/oci/hooks.d" explicitly to continue to load hooks from this directory"
time="2018-12-02T21:33:07-08:00" level=error msg="container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"process_linux.go:382: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: oh, noes!\\\\n\\\"\""
(I'd set up the hook to error out). You can see that it's silently
ignoring the ENOENT for /usr/share/containers/oci/hooks.d and
continuing on to load hooks from /etc/containers/oci/hooks.d.
When it loads the hook, it also logs a warning-level message
suggesting that callers explicitly configure their hook directories.
That will help consumers migrate, so we can drop the implicit hook
directories in some future release. When folks *do* explicitly
configure hook directories (via the newly-public --hooks-dir and
hooks_dir options), we error out if they're missing:
$ podman --hooks-dir /does/not/exist run --rm docker.io/library/alpine echo 'successful container'
error setting up OCI Hooks: open /does/not/exist: no such file or directory
I've dropped the trailing "path" from the old, hidden --hooks-dir-path
and hooks_dir_path because I think "dir(ectory)" is already enough
context for "we expect a path argument". I consider this name change
non-breaking because the old forms were undocumented.
Coming back to rootless users, I've enabled hooks now. I expect they
were previously disabled because users had no way to avoid
/usr/share/containers/oci/hooks.d which might contain hooks that
required root permissions. But now rootless users will have to
explicitly configure hook directories, and since their default config
is from ~/.config/containers/libpod.conf, it's a misconfiguration if
it contains hooks_dir entries which point at directories with hooks
that require root access. We error out so they can fix their
libpod.conf.
[1]: https://github.com/containers/libpod/pull/1487#discussion_r218149355
Signed-off-by: W. Trevor King <wking@tremily.us>
We don't need this for anything more than rootless work in Libpod
now, but Buildah still uses it as it was originally written, so
leave it intact as part of our API.
Signed-off-by: Matthew Heon <mheon@redhat.com>
When graphroot is set by the user, we should set libpod's static
directory to a subdirectory of that by default, to duplicate
previous behavior.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Ensure that the directory where we will create the Podman db
exists prior to creating the database - otherwise creating the DB
will fail.
Signed-off-by: Matthew Heon <mheon@redhat.com>
When validating fields against the DB, report more verbosely the
name of the field being validated if it fails. Specifically, add
the name used in config files, so people will actually know what
to change if errors happen.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Ensure we don't break the unit tests by creating a locks
directory (which, prior to the last commit, would be created by
BoltDB state init).
Signed-off-by: Matthew Heon <mheon@redhat.com>
We already create the locks directory as part of the libpod
runtime's init - no need to do it again as part of BoltDB's init.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Previous commits ensured that we would use database-configured
paths if not explicitly overridden.
However, our runtime generation did unconditionally override
storage config, which made this useless.
Move rootless storage configuration setup to libpod, and change
storage setup so we only override if a setting is explicitly
set, so we can still override what we want.
Signed-off-by: Matthew Heon <mheon@redhat.com>
If the DB contains default paths, and the user has not explicitly
overridden them, use the paths in the DB over our own defaults.
The DB validates these paths, so it would error and prevent
operation if they did not match. As such, instead of erroring, we
can use the DB's paths instead of our own.
Signed-off-by: Matthew Heon <mheon@redhat.com>
To configure runtime fields from the database, we need to know
whether they were explicitly overwritten by the user (we don't
want to overwrite anything that was explicitly set). Store a
struct containing whether the variables we'll grab from the DB
were explicitly set by the user so we know what we can and can't
overwrite.
This determines whether libpod runtime and static dirs were set
via config file in a horribly hackish way (double TOML decode),
but I can't think of a better way, and it shouldn't be that
expensive as the libpod config is tiny.
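A minimal sketch of the double decode, assuming BurntSushi/toml and illustrative key names:
// imports: "github.com/BurntSushi/toml"
// Second decode into a struct of pointers: a nil field means the key was
// never present in libpod.conf, so it is safe to overwrite from the DB.
type explicitPaths struct {
    StaticDir *string `toml:"static_dir"`
    TmpDir    *string `toml:"tmp_dir"`
}
var set explicitPaths
if _, err := toml.DecodeFile(configPath, &set); err != nil {
    return err
}
staticDirSetByUser := set.StaticDir != nil
tmpDirSetByUser := set.TmpDir != nil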
Signed-off-by: Matthew Heon <mheon@redhat.com>
Previously, we implicitly validated runtime configuration against
what was stored in the database as part of database init. Make
this an explicit step, so we can call it after the database has
been initialized. This will allow us to retrieve paths from the
database and use them to overwrite our defaults if they differ.
Signed-off-by: Matthew Heon <mheon@redhat.com>
When we configure a runtime, we now will need to hit the DB early
on, so we can verify the paths we're going to use for c/storage are correct.
Signed-off-by: Matthew Heon <mheon@redhat.com>