The on-failure restart option supports restarting only a given
number of times. To do this, we need one additional field in the
DB to track restart count (which conveniently fills a field in
Inspect we weren't populating), plus some plumbing logic.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This field indicates that a container was explicitly stopped by
an API call, and did not exit naturally. It's used when
implementing restart policy for containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
As part of this, rework the number of workers used by various
Podman tasks to match the original behavior - an explicit
fallthrough is needed in the switch statement for that block to work
as expected.
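A minimal sketch of the Go behavior in play (names and values hypothetical; Go switch cases do not fall through by default):

    package main

    import (
        "fmt"
        "runtime"
    )

    // numWorkers picks a worker count. Sharing the computed default
    // between the "unset" case and the sentinel case requires an
    // explicit fallthrough.
    func numWorkers(requested int) int {
        switch requested {
        case 0:
            // Unset - fall through to the default computation.
            fallthrough
        case -1:
            return runtime.NumCPU() * 3
        default:
            return requested
        }
    }

    func main() {
        fmt.Println(numWorkers(0))
    }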
Also, trivial change to Podman cleanup to work on initialized
containers - we need to reset to a different state after cleaning
up the OCI runtime.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
All IDs in libpod are stored as a full container ID. We can get a
container by full ID faster with GetContainer (which directly
retrieves) than LookupContainer (which finds a match, then
retrieves). No reason to use Lookup when we have full IDs present
and available.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Don't sort OCI hooks using the locale collation order; it does not
make sense for the same system-wide directory to be interpreted differently
depending on the user's LC_COLLATE setting, and the language-specific
collation order can even change over time.
Besides, the current collation order determination code has never worked
with the most common LC_COLLATE values like en_US.UTF-8.
Ideally, we would like to just order based on Unicode code points
to be reliably stable, but the existing implementation is case-insensitive,
so we are forced to rely on the unicode case mapping tables at least.
(This gives up on canonicalization and width-insensitivity, potentially
breaking users who rely on these previously documented properties.)
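A minimal sketch of the resulting ordering, assuming a plain case-insensitive comparison built on the Unicode case mapping tables (not the exact libpod code):

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    func main() {
        hooks := []string{"B.json", "a.json", "C.json"}
        // Sort case-insensitively via Unicode case mapping, with no
        // dependence on the user's LC_COLLATE setting.
        sort.Slice(hooks, func(i, j int) bool {
            return strings.ToLower(hooks[i]) < strings.ToLower(hooks[j])
        })
        fmt.Println(hooks) // [a.json B.json C.json]
    }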
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
* Refactor command output to use one function
* Add new worker pool parallel operations
* Implement podman-remote umount
* Refactor podman wait to use printCmdOutput()
Signed-off-by: Jhon Honce <jhonce@redhat.com>
This swaps the previous handling (parse all volume mounts on the
container and look for ones that might refer to named volumes)
for the new, explicit named volume lists stored per-container.
It also deprecates force-removing volumes that are in use. I
don't know how we want to handle this yet, but leaving containers
that depend on a volume that no longer exists is definitely not
correct.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We have an issue in the current implementation where the cleanup
process is not able to umount the storage as it is running in a
separate namespace.
Simplify the implementation for user namespaces by not using an
intermediate mount namespace. To do this, we need to relax the
permissions on the parent directories to allow browsing
them. Containers running without a user namespace will still
maintain mode 0700 on their directory.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Podman will now start a transient service and timer for healthchecks;
this handles the tracking of the timing for health checks.
Added the 'started' status, which represents the time that a container is
in its start-period.
The systemd timing can be disabled with an environment variable,
DISABLE_HC_SYSTEMD="true".
Added a filter for ps, so --filter health=[starting, healthy, unhealthy]
can now be used.
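For example, hypothetical invocations using the pieces described above:
    $ DISABLE_HC_SYSTEMD="true" podman run -d <image>
    $ podman ps --filter health=healthy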
Signed-off-by: baude <bbaude@redhat.com>
When creating a new image volume to be mounted into a container, we need to
make sure the new volume matches the ownership and permissions of the path
that it will be mounted on.
For example, if a volume inside of a container image is owned by the database
UID, we want the volume that gets mounted onto that path to be owned by the
database UID.
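A minimal sketch of the idea on Linux (paths hypothetical; SELinux labeling and recursive chown elided):

    package main

    import (
        "os"
        "syscall"
    )

    // matchOwnership makes volPath take on the UID/GID and mode of
    // the path it will be mounted over inside the image.
    func matchOwnership(volPath, imagePath string) error {
        fi, err := os.Stat(imagePath)
        if err != nil {
            return err
        }
        st := fi.Sys().(*syscall.Stat_t) // Linux-specific
        if err := os.Chown(volPath, int(st.Uid), int(st.Gid)); err != nil {
            return err
        }
        return os.Chmod(volPath, fi.Mode().Perm())
    }

    func main() {
        _ = matchOwnership("/var/lib/containers/volumes/db", "/var/lib/mysql")
    }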
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
In libpod, we now log major events that occur. These events
can be displayed using the `podman events` command. Each
event contains:
* Type (container, image, volume, pod...)
* Status (create, rm, stop, kill, ....)
* Timestamp in RFC3339Nano format
* Name (if applicable)
* Image (if applicable)
The format of the event and the varlink endpoint are not to be
considered stable until Cockpit has completed its enablement.
Signed-off-by: baude <bbaude@redhat.com>
We're going to feed this into Go's BCP 47 language parser. Language
tags have the form [1]:
language
["-" script]
["-" region]
*("-" variant)
*("-" extension)
["-" privateuse]
and locales have the form [2]:
[language[_territory][.codeset][@modifier]]
The modifier is useful for collation, but Go's language-based API
[3] does not provide a way for us to supply it. This code converts
our locale to a BCP 47 language by stripping the dot and everything
after it, and replacing the first underscore, if any, with a hyphen. This will
avoid errors like [4]:
WARN[0000] failed to parse language "en_US.UTF-8": language: tag is not well-formed
when feeding language.Parse(...).
[1]: https://tools.ietf.org/html/bcp47#section-2.1
[2]: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html#tag_08_02
[3]: https://github.com/golang/go/issues/25340
[4]: https://github.com/containers/libpod/issues/2494
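A sketch of that conversion, feeding the result to golang.org/x/text/language:

    package main

    import (
        "fmt"
        "strings"

        "golang.org/x/text/language"
    )

    // localeToLanguage strips the dot and everything after it, then
    // replaces the first underscore with a hyphen, turning a POSIX
    // locale such as "en_US.UTF-8" into the BCP 47 tag "en-US".
    func localeToLanguage(locale string) string {
        locale = strings.SplitN(locale, ".", 2)[0]
        return strings.Replace(locale, "_", "-", 1)
    }

    func main() {
        tag, err := language.Parse(localeToLanguage("en_US.UTF-8"))
        fmt.Println(tag, err) // en-US <nil>
    }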
Signed-off-by: W. Trevor King <wking@tremily.us>
Before, any container with a netNS dependency simply used its dependency container's hosts file, and didn't abide by its own configuration (mainly --add-host). Fix this by always appending to the dependency container's hosts file, creating one if necessary.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
Before, a container being run or started in a pod always restarted the infra container. This was because we didn't take running dependencies into account. Fix this by filtering for dependencies in the running state.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
Previously, a pod had to be started immediately when created, leading to confusion about what a pod's state should be immediately after creation. The problem was that podman run --pod ... would error out if the infra container wasn't started (as it is a dependency). Fix this by allowing for recursive start, where each of the container's dependencies is started prior to the new container. This is only applied to the case where a new container is attached to a pod.
Also rework the container_api Start, StartAndAttach, and Init functions, as there was some duplicated code, which made the problem easier to address.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
Fix builtin volumes to work with podman volume
Currently builtin volumes are not recorded in podman volumes when
they are created automatically. This patch fixes this.
Remove container volumes when requested
Currently the --volume option on podman remove does nothing.
This will implement the changes needed to remove the volumes
if the user requests it.
When removing a volume make sure that no container uses the volume.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When cleaning up containers, we presently remove the exit file
created by Conmon, to ensure that if we restart the container, we
won't have conflicts when Conmon tries writing a new exit file.
Unfortunately, we need to retain that exit file (at least until
we get a workable events system), so we can read it in cases
where the container has been removed before 'podman run' can read
its exit code.
So instead of removing it, rename it, so there's no conflict with
Conmon, and we can still read it later.
Fixes: #1640
Signed-off-by: Matthew Heon <mheon@redhat.com>
Instead of unconditionally resetting to ContainerStateConfigured
after a reboot, allow containers in the Exited state to remain
there, preserving their exit code in podman ps after a reboot.
This does not affect the ability to use and restart containers
after a reboot, as the Exited state can be used (mostly)
interchangeably with Configured for starting and managing
containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When waiting for a container, there is a long interval between
status checks - plenty long enough for the container in question
to start, then subsequently be cleaned up and returned to Created
state to be restarted. As such, we can't wait on container state
to go to Stopped or Exited - anything that is not Running or
Paused indicates the container is dead.
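A sketch of the check, assuming libpod's Container.State() accessor and state constants:

    package wait

    import (
        "time"

        "github.com/containers/libpod/libpod"
    )

    // waitDead polls until the container is neither Running nor
    // Paused; any other state means it is dead for wait purposes.
    func waitDead(ctr *libpod.Container, interval time.Duration) error {
        for {
            state, err := ctr.State()
            if err != nil {
                return err
            }
            if state != libpod.ContainerStateRunning &&
                state != libpod.ContainerStatePaused {
                return nil
            }
            time.Sleep(interval)
        }
    }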
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
There's been a lot of discussion over in [1] about how to support the
NVIDIA folks and others who want to be able to create devices
(possibly after having loaded kernel modules) and bind userspace
libraries into the container. Currently that's happening in the
middle of runc's create-time mount handling before the container
pivots to its new root directory with runc's incorrectly-timed
prestart hook trigger [2]. With this commit, we extend hooks with a
'precreate' stage to allow trusted parties to manipulate the config
JSON before calling the runtime's 'create'.
I'm recycling the existing Hook schema from pkg/hooks for this,
because we'll want Timeout for reliability and When to avoid the
expense of fork/exec when a given hook does not need to make config
changes [3].
[1]: https://github.com/opencontainers/runc/pull/1811
[2]: https://github.com/opencontainers/runc/issues/1710
[3]: https://github.com/containers/libpod/issues/1828#issuecomment-439888059
Signed-off-by: W. Trevor King <wking@tremily.us>
Runc does not produce helpful error messages when the container's
command is not found, so print the command ourselves.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We had two problems with /dev/shm. First, if you mounted the
container read-only, then /dev/shm was mounted read-only as well.
This is a bug: a tmpfs directory should be read/write within
a read-only container.
The second problem is that we were ignoring a user-specified mount of
/dev/shm from the host.
If the user specified
podman run -d -v /dev/shm:/dev/shm ...
we were dropping this mount and still using the internal mount.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Instead of forcing another user lookup when mounting image
volumes, just use the information we looked up when we started
generating the spec.
This may resolve #1817
Signed-off-by: Matthew Heon <mheon@redhat.com>
containers inside pods need to make sure they get /etc/resolv.conf
and /etc/hosts bind-mounted when networking is expected
Signed-off-by: baude <bbaude@redhat.com>
Part of the motivation for 800eb863 (Hooks supports two directories,
process default and override, 2018-09-17, #1487) was [1]:
> We only use this for override. The reason this was caught is people
> are trying to get hooks to work with CoreOS. You are not allowed to
> write to /usr/share... on CoreOS, so they wanted podman to also look
> at /etc, where users and third parties can write.
But we'd also been disabling hooks completely for rootless users. And
even for root users, the override logic was tricky when folks actually
had content in both directories. For example, if you wanted to
disable a hook from the default directory, you'd have to add a no-op
hook to the override directory.
Also, the previous implementation failed to handle the case where
there were hooks defined in the override directory but the default
directory did not exist:
$ podman version
Version: 0.11.2-dev
Go Version: go1.10.3
Git Commit: "6df7409cb5a41c710164c42ed35e33b28f3f7214"
Built: Sun Dec 2 21:30:06 2018
OS/Arch: linux/amd64
$ ls -l /etc/containers/oci/hooks.d/test.json
-rw-r--r--. 1 root root 184 Dec 2 16:27 /etc/containers/oci/hooks.d/test.json
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:31:19-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:31:19-08:00" level=warning msg="failed to load hooks: {}%!(EXTRA *os.PathError=open /usr/share/containers/oci/hooks.d: no such file or directory)"
With this commit:
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /etc/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="added hook /etc/containers/oci/hooks.d/test.json"
time="2018-12-02T21:33:07-08:00" level=debug msg="hook test.json matched; adding to stages [prestart]"
time="2018-12-02T21:33:07-08:00" level=warning msg="implicit hook directories are deprecated; set --hooks-dir="/etc/containers/oci/hooks.d" explicitly to continue to load hooks from this directory"
time="2018-12-02T21:33:07-08:00" level=error msg="container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"process_linux.go:382: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: oh, noes!\\\\n\\\"\""
(I'd set up the hook to error out). You can see that it's silently
ignoring the ENOENT for /usr/share/containers/oci/hooks.d and
continuing on to load hooks from /etc/containers/oci/hooks.d.
When it loads the hook, it also logs a warning-level message
suggesting that callers explicitly configure their hook directories.
That will help consumers migrate, so we can drop the implicit hook
directories in some future release. When folks *do* explicitly
configure hook directories (via the newly-public --hooks-dir and
hooks_dir options), we error out if they're missing:
$ podman --hooks-dir /does/not/exist run --rm docker.io/library/alpine echo 'successful container'
error setting up OCI Hooks: open /does/not/exist: no such file or directory
I've dropped the trailing "path" from the old, hidden --hooks-dir-path
and hooks_dir_path because I think "dir(ectory)" is already enough
context for "we expect a path argument". I consider this name change
non-breaking because the old forms were undocumented.
Coming back to rootless users, I've enabled hooks now. I expect they
were previously disabled because users had no way to avoid
/usr/share/containers/oci/hooks.d which might contain hooks that
required root permissions. But now rootless users will have to
explicitly configure hook directories, and since their default config
is from ~/.config/containers/libpod.conf, it's a misconfiguration if
it contains hooks_dir entries which point at directories with hooks
that require root access. We error out so they can fix their
libpod.conf.
[1]: https://github.com/containers/libpod/pull/1487#discussion_r218149355
Signed-off-by: W. Trevor King <wking@tremily.us>
Currently we are mounting /dev/shm from disk; it should be a tmpfs.
User namespaces support tmpfs mounts for non-root users, so this section of
code should work fine in both root and rootless mode.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When running unit tests on newer Go versions, we observe failures with some
formatting types when not declared correctly.
Signed-off-by: baude <bbaude@redhat.com>
We now default to setting the storage option "nodev"; when running
privileged containers, we need to turn this off so the processes can
manipulate the image.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This is basically the same change as
ff47a4c2d5 (Use a struct to pass options to Checkpoint())
just for the Restore() function. It is used to pass multiple restore
options to the API and down to conmon which is used to restore
containers. This is for the upcoming changes to support checkpointing
and restoring containers with '--tcp-established'.
Signed-off-by: Adrian Reber <areber@redhat.com>
/etc/resolv.conf and /etc/hosts should not be created and mounted when the
network is disabled.
We should not be calling the network setup and cleanup functions when it is
disabled either.
In doing this patch, I found that all of the bind mounts were particular to
Linux along with the generate functions, so I moved them to
container_internal_linux.go
Since we are checking if we are using a network namespace, we need to check
after the network namespace has been created in the spec.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Container images can be created without a passwd or group file; currently,
if one of these containers gets run with a --user flag, the container blows
up complaining about a missing /etc/passwd file.
We just need to check if the error on read is ENOENT and then allow the
read to return rather than fail.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When we read the conmon error status file, if Atoi fails to parse
the string we read from the file as an int, print the string as
part of the error message so we know what might have gone wrong.
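A minimal sketch of the improved error path:

    package main

    import (
        "fmt"
        "strconv"
    )

    func parseExitStatus(data []byte) (int, error) {
        code, err := strconv.Atoi(string(data))
        if err != nil {
            // Include the raw string so we can see what Conmon wrote.
            return 0, fmt.Errorf("converting exit status %q to int: %v", string(data), err)
        }
        return code, nil
    }

    func main() {
        _, err := parseExitStatus([]byte("oops"))
        fmt.Println(err)
    }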
Signed-off-by: Matthew Heon <mheon@redhat.com>
Instead of running a full sync after starting a container to pick
up its PID, grab it from Conmon instead.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
When syncing container state, we normally call out to runc to see
the container's status. This does have significant performance
implications, though, and we've seen issues with large amounts of
runc processes being spawned.
This patch attempts to use stat calls on the container exit file
created by Conmon instead to sync state. This massively decreases
the cost of calling updateContainer (it has gone from an
almost-unconditional fork/exec of runc to a single stat call that
can be avoided in most states).
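A minimal sketch of the stat-based check (paths hypothetical; the real code also reads the exit code when the file is present):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasExited checks for the exit file Conmon writes when the
    // container's main process dies, avoiding a runc fork/exec.
    func hasExited(exitDir, ctrID string) (bool, error) {
        _, err := os.Stat(filepath.Join(exitDir, ctrID))
        if err == nil {
            return true, nil // exit file present: container stopped
        }
        if os.IsNotExist(err) {
            return false, nil // still running (or never started)
        }
        return false, err
    }

    func main() {
        exited, err := hasExited("/run/libpod/exits", "abc123")
        fmt.Println(exited, err)
    }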
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
After stopping containers, we run updateContainerStatus to sync
our state with runc (pick up exit code, for example). Then we
proceed to not save this to the database, requiring us to grab it
again on the next sync. This should remove the need to read the
exit file more than once.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
For the purposes of performance and security, we use securejoin to construct
the root fs's path so that symlinks are what they appear to be and are not
pointing to something naughty.
Then, instead of chrooting to parse /etc/passwd and /etc/group, we now use the
runc user/group methods, which saves us quite a bit of time.
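A sketch of the two pieces together, using securejoin's SecureJoin and runc's libcontainer/user parser (paths and names hypothetical):

    package main

    import (
        "fmt"

        securejoin "github.com/cyphar/filepath-securejoin"
        "github.com/opencontainers/runc/libcontainer/user"
    )

    func lookupUser(rootfs, name string) (user.User, error) {
        // Resolve /etc/passwd relative to the rootfs so symlinks
        // cannot escape it and point at something naughty.
        passwd, err := securejoin.SecureJoin(rootfs, "etc/passwd")
        if err != nil {
            return user.User{}, err
        }
        users, err := user.ParsePasswdFileFilter(passwd, func(u user.User) bool {
            return u.Name == name
        })
        if err != nil {
            return user.User{}, err
        }
        if len(users) == 0 {
            return user.User{}, fmt.Errorf("user %q not found", name)
        }
        return users[0], nil
    }

    func main() {
        u, err := lookupUser("/var/lib/containers/rootfs", "postgres")
        fmt.Println(u, err)
    }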
Signed-off-by: baude <bbaude@redhat.com>
The steps in prepare() -- creating a network namespace and mounting the
container image -- are now run in parallel. This saves 25-40ms.
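A sketch of the concurrency using golang.org/x/sync/errgroup (function names hypothetical; the real code may wire this differently):

    package main

    import (
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    func prepare(createNetNS, mountStorage func() error) error {
        var g errgroup.Group
        // Network namespace creation and image mounting are
        // independent, so run them concurrently.
        g.Go(createNetNS)
        g.Go(mountStorage)
        return g.Wait() // first non-nil error, after both finish
    }

    func main() {
        err := prepare(
            func() error { fmt.Println("netns created"); return nil },
            func() error { fmt.Println("storage mounted"); return nil },
        )
        fmt.Println(err)
    }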
Signed-off-by: baude <bbaude@redhat.com>
ensure the volume paths are resolved in the mountpoint scope.
Otherwise we might end up using host paths.
Closes: https://github.com/containers/libpod/issues/1608
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If someone runs podman as a user (uid) that is not defined in the container,
we want to generate a passwd file so that getpwuid() will work inside of the
container.
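A minimal sketch of the generated entry (field defaults such as the shell and home directory are hypothetical):

    package main

    import "fmt"

    // passwdEntry builds a passwd line for the uid/gid podman is
    // running as, so getpwuid() inside the container can resolve it.
    func passwdEntry(name string, uid, gid int, home string) string {
        return fmt.Sprintf("%s:x:%d:%d:%s user:%s:/bin/sh\n", name, uid, gid, name, home)
    }

    func main() {
        fmt.Print(passwdEntry("dwalsh", 3267, 3267, "/"))
    }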
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Adds a few missing things from writeStringToRundir() to the new
resolv.conf function, specifically relabelling and returning a
path compatible with rootless podman
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
The vendoring issues with libnetwork were significant (it was
dragging in massive amounts of code) and were just not worth
spending the time to work through. Highly unlikely we'll ever end
up needing to update this code, so move it directly into pkg/ so
we don't need to vendor libnetwork. Make a few small changes to
remove the need for the remainder of libnetwork.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Libnetwork provides a well-tested package for generating
resolv.conf from the host's version, which has some features our current
implementation does not. Swap to using their code and remove our
built-in implementation.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
runc uses CRIU to support checkpoint and restore of containers. This
brings an initial checkpoint/restore implementation to podman.
None of the additional runc flags are yet supported and container
migration optimization (pre-copy/post-copy) is also left for the future.
The current status is that it is possible to checkpoint and restore a
container. I am testing on RHEL-7.x, and as the combination of RHEL-7 and
CRIU has seccomp troubles, I have to create the container without
seccomp.
With the following steps I am able to checkpoint and restore a
container:
# podman run --security-opt="seccomp=unconfined" -d registry.fedoraproject.org/f27/httpd
# curl -I 10.22.0.78:8080
HTTP/1.1 403 Forbidden # <-- this is actually a good answer
# podman container checkpoint <container>
# curl -I 10.22.0.78:8080
curl: (7) Failed connect to 10.22.0.78:8080; No route to host
# podman container restore <container>
# curl -I 10.22.0.78:8080
HTTP/1.1 403 Forbidden
I am using CRIU, runc and conmon from git. All required changes for
checkpoint/restore support in podman have been merged in the
corresponding projects.
To have the same IP address in the restored container as before
checkpointing, CNI is told which IP address to use.
If the saved network configuration cannot be found during restore, the
container is restored with a new IP address.
For CRIU to restore established TCP connections the IP address of the
network namespace used for restore needs to be the same. For TCP
connections in the listening state the IP address can change.
During restore only one network interface with one IP address is handled
correctly. Support to restore containers with more advanced network
configuration will be implemented later.
v2:
* comment typo
* print debug messages during cleanup of restore files
* use createContainer() instead of createOCIContainer()
* introduce helper CheckpointPath()
* do not try to restore a container that is paused
* use existing helper functions for cleanup
* restructure code flow for better readability
* do not try to restore if checkpoint/inventory.img is missing
* git add checkpoint.go restore.go
v3:
* move checkpoint/restore under 'podman container'
v4:
* incorporated changes from latest reviews
Signed-off-by: Adrian Reber <areber@redhat.com>
Unfortunately the PAPR CI system cannot test Ubuntu as a VM; therefore,
this PR still keeps Travis. But it does include fixes that will be required
for running on modern versions of Ubuntu.
Signed-off-by: baude <bbaude@redhat.com>
The same relabel is already done in writeStringToRundir so we don't
need to do it twice. The version in writeStringToRundir takes into
account the correct file path when using user namespaces.
Closes: https://github.com/containers/libpod/pull/1584
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We call cleanup() (which calls cleanupRuntime()) as part of
removing containers, after the container has already been removed
from the database. cleanupRuntime() tries to update and save the
state, which obviously fails if the container no longer exists.
Make the save() conditional on the container not being in the
process of being removed.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
To work better with Kata containers, we need to delete() from the
OCI runtime as a part of cleanup, to ensure resources aren't
retained longer than they need to be.
To enable this, we need to add a new state to containers,
ContainerStateExited. Containers transition from
ContainerStateStopped to ContainerStateExited via cleanupRuntime
which is invoked as part of cleanup(). A container in the Exited
state is identical to Stopped, except it has been removed from
the OCI runtime and thus will be handled differently when
initializing the container.
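A sketch of where the new state sits (a hypothetical subset of the state enum):

    package state

    // ContainerStatus is a hypothetical subset of libpod's states.
    type ContainerStatus int

    const (
        ContainerStateConfigured ContainerStatus = iota
        ContainerStateRunning
        ContainerStatePaused
        ContainerStateStopped
        // Exited: like Stopped, but the container has also been
        // deleted from the OCI runtime (needed for Kata cleanup).
        ContainerStateExited
    )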
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Runc disables systemd cgroup support when built statically, so
don't tell people to do that now that we're defaulting to systemd
for cgroup management.
Also, fix some error messages to use the proper ID() call for
containers.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Also clean up the files section of the podman man page:
Add description of policy.json
Sort alphabetically.
Add more info on OCI hooks
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Closes: #1487
Approved by: umohnani8
Prevent an error from runc, which doesn't like symlinks as part
of the rootfs.
Closes: https://github.com/containers/libpod/issues/1389
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #1390
Approved by: rhatdan
In some cases, /etc/resolv.conf can be a symlink to something like
/run/systemd/resolve/resolv.conf. We currently check for that file
and if it exists, use it instead of /etc/resolv.conf. However, we are
now seeing cases where the systemd resolv.conf exists but /etc/resolv.conf
is NOT a symlink.
Therefore, we now obtain the endpoint for /etc/resolv.conf whether it is a
symlink or not. That endpoint is now what is read to generate a container's
resolv.conf.
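A minimal sketch of the resolution; filepath.EvalSymlinks covers both the symlink and the regular-file case:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Works whether /etc/resolv.conf is a symlink (e.g. to
        // /run/systemd/resolve/resolv.conf) or a regular file.
        target, err := filepath.EvalSymlinks("/etc/resolv.conf")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("reading DNS config from", target)
    }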
Signed-off-by: baude <bbaude@redhat.com>
Closes: #1368
Approved by: rhatdan
A pause container is added to the pod if the user opts in. The default pause image and command can be overridden. Pause containers are ignored in ps unless the -a option is present. Pod inspect and pod ps show shared namespaces and pause container. A pause container can't be removed with podman rm, and a pod can be removed if it only has a pause container.
Signed-off-by: haircommander <pehunt@redhat.com>
Closes: #1187
Approved by: mheon
Need to get some small changes into libpod to pull back into buildah
to complete buildah transition.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Closes: #1270
Approved by: mheon
Currently we add mounts from images, volumes, and internal mounts.
We can accidentally over-mount an existing mount. This patch sorts the mounts
to make sure a parent directory is always mounted before its content.
Had to change the default propagation on image volume mounts from shared
to private to stop mount points from leaking out of the container.
Also switched from using some docker/docker/pkg to container/storage/pkg
to remove some dependencies on Docker.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Closes: #1243
Approved by: mheon
podman umount will currently only unmount the file system if no other
process is using it; otherwise, the umount decrements the container
storage mount count to indicate that the caller is no longer using the
mount point. Once the count gets to 0, the file system is actually unmounted.
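A toy sketch of the counting behavior (helpers hypothetical; the real counter lives in containers/storage):

    package main

    import "fmt"

    var mountCounts = map[string]int{}

    func mount(id string) { mountCounts[id]++ }

    // unmount decrements the count and only really unmounts the
    // file system once no other caller is using it.
    func unmount(id string) (reallyUnmounted bool) {
        if mountCounts[id] > 0 {
            mountCounts[id]--
        }
        if mountCounts[id] == 0 {
            // ... actually unmount the file system here ...
            return true
        }
        return false
    }

    func main() {
        mount("ctr")
        mount("ctr")
        fmt.Println(unmount("ctr")) // false - still in use
        fmt.Println(unmount("ctr")) // true - count hit 0
    }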
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Closes: #1184
Approved by: TomSweeneyRedHat
refresh() is the only major command we had that did not perform a
sync before running, and thus was not guaranteed to pick up a
good copy of the state. Fix this by updating the state before a
refresh().
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Closes: #1186
Approved by: rhatdan
Moved contents of RestartWithTimeout to restartWithTimeout in container_internal to be able to call restart without locking in the function.
Refactored startNode to be able to either start or restart a node.
Built pod Restart() with new startNode with refresh true.
Signed-off-by: haircommander <pehunt@redhat.com>
Closes: #1152
Approved by: rhatdan
Currently we unmount storage that is still in use.
We should not be unmounting storage that we mounted
via a different command or by podman mount. This
change relies on containers/storage to keep track of
how many times the storage was mounted before really unmounting
it from the system.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
vendor in containers/storage
vendor in containers/image
vendor in projectatomic/buildah
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Closes: #1114
Approved by: mheon