Initial attempt at writing a framework for podman system tests.
The idea is to define a useful set of primitives that will
make it easy to write actual tests and to interpret results
of failing ones.
This is a proof-of-concept right now; only a small number of
tests, by no means comprehensive. I am requesting review in
order to find showstopper problems: reasons why this approach
cannot work. Should there be none, we can work toward running
these as gating tests for Fedora and RHEL8.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add the ability to manually run a container's healthcheck command.
This is only the first phase of implementing the healthcheck.
Subsequent pull requests will deal with exposing the results and
history of healthchecks, as well as the scheduling.
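For illustration, a minimal sketch of the manual invocation this phase adds (container name hypothetical):
```
$ podman healthcheck run mycontainer   # run the container's configured healthcheck once
$ echo $?                              # 0 when the healthcheck command succeeds
```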
Signed-off-by: baude <bbaude@redhat.com>
Before, any container with a netNS dependency simply used its dependency container's hosts file, and did not abide by its own configuration (mainly --add-host). Fix this by always appending to the dependency container's hosts file, creating one if necessary.
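A hedged example of the fixed behavior (names and address are hypothetical):
```
$ podman run -d --name base alpine sleep 1000
$ podman run --net container:base --add-host gateway.test:10.0.0.1 alpine grep gateway.test /etc/hosts
10.0.0.1 gateway.test
```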
Signed-off-by: Peter Hunt <pehunt@redhat.com>
The current aliased commands
podman container list
and
podman image list
podman image rm
do not work properly: the global storage options are broken.
This patch fixes this issue.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We have little to no testing to make sure we don't break podman image and
podman container commands that wrap traditional commands.
This PR adds tests for each of the commands.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Some callers assume when SystemExec returns, the command has completed.
Other callers explicitly wait for completion (as required). However,
forgetting to do that is an incredibly easy mistake to make. Fix this
by adding an explicit parameter to the function. This requires
every caller to deliberately state whether or not a completion-check
is required.
Also address **many** resource naming / cleanup completion-races.
Signed-off-by: Chris Evich <cevich@redhat.com>
To be able to use OCI runtimes which do not implement checkpoint/restore,
this adds a check to the checkpoint code path and the checkpoint/restore
tests to see whether the runtime knows about the checkpoint subcommand.
If the OCI runtime in use does not implement checkpoint/restore, the tests
are skipped and 'podman container checkpoint' returns an error.
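One possible probe, assuming a runc-style CLI (the exact check used here is an assumption):
```
$ runc checkpoint --help >/dev/null 2>&1 && echo supported || echo unsupported
```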
Signed-off-by: Adrian Reber <areber@redhat.com>
PR #2480 fixed the missing 'podman image list/rm' commands but
broke their usage messages. This corrects both usage
messages and also their examples.
Also: add an e2e test for 'podman image rm' (untested)
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Changes the short option `-s` to the fully specified `--storage-driver`.
The short version is no longer supported.
There is currently still one SELinux related checkpoint/restore problem:
https://github.com/containers/libpod/issues/2334
To avoid unnecessary CI failures the checkpoint/restore tests are
temporarily disabled on Fedora.
It is not necessary to disable the tests on Ubuntu as it is running
without SELinux and it is also not necessary to disable the RHEL 7 tests
as RHEL's CRIU is too old to run the checkpoint/restore tests at all.
Signed-off-by: Adrian Reber <areber@redhat.com>
Make it easy for scripts to determine why an image removal
failed. If the only errors were 'no such image', exit with 1
instead of 125.
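A hedged sketch of the resulting script-friendly behavior:
```
$ podman rmi no-such-image; echo $?
1
$ # other failures still exit with 125
```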
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Make it easy for scripts to distinguish a failed container removal
from a container that did not exist.
If the only errors were 'no such container', exit with 1 instead of 125.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If the user specifies a network namespace and /etc/netns/XXX/resolv.conf
exists, we should use this rather than /etc/resolv.conf.
Also fail more cleanly if the user specifies an invalid network namespace.
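A hedged sketch using a hypothetical namespace (podman's ns: network syntax assumed):
```
$ sudo ip netns add testns
$ echo "nameserver 192.0.2.53" | sudo tee /etc/netns/testns/resolv.conf
$ sudo podman run --net ns:/run/netns/testns alpine cat /etc/resolv.conf
nameserver 192.0.2.53
```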
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Previously, a pod had to be started immediately upon creation, leading to confusion about what a pod's state should be immediately after creation. The problem was that podman run --pod ... would error out if the infra container wasn't started (as it is a dependency). Fix this by allowing recursive start, where each of the container's dependencies is started before the new container. This is only applied in the case where a new container is attached to a pod.
Also rework the container_api Start, StartAndAttach, and Init functions, as there was some duplicated code; removing it made the problem easier to address.
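A hedged example of the improved flow (pod name hypothetical):
```
$ podman pod create --name mypod
$ podman run --rm --pod mypod alpine echo hello   # the infra container is now started automatically
hello
```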
Signed-off-by: Peter Hunt <pehunt@redhat.com>
allow users to remotely prune volumes.
this is the last volume command for remote enablement. as such,
the volume commands are being folded back into main because they
are supported for both local and remote clients.
also, enable all volume tests that do not use containers
as containers are not enabled for the remote client yet.
Signed-off-by: baude <bbaude@redhat.com>
We have a consistent CI failure with the notify_socket test that
I can't reproduce locally. There's no reason for the test to have
--rm, so try removing it.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Previously, 'podman create --rm' did not work - it wouldn't error
but it did nothing.
It is now fixed, but unfortunately the unit tests used it a lot,
in ways that just do not work when it actually functions.
Begin the process of fixing now-failing tests.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This is the final cleanup to remove urfave/cli from libpod. Removed
old, disabled tests that have not been run in over a year.
Signed-off-by: baude <bbaude@redhat.com>
Go templates were not being processed or printed correctly for podman
pod stats. Added the ability to do templates as well as honor the
table identifier.
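A hedged example of the now-working template handling (field names are assumptions):
```
$ podman pod stats --no-stream --format "table {{.Pod}} {{.CPU}} {{.MemUsage}}"
```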
Fixes: #2258
Signed-off-by: baude <bbaude@redhat.com>
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Add --all-tags for the `podman pull` command so all tags
of an image will be pulled, not just ':latest'. Emulates
the change in Buildah https://github.com/containers/buildah/pull/1263
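For example:
```
$ podman pull --all-tags alpine   # pulls every tag of the image, not just :latest
```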
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Add e2e/test/common_test.go to the single integration test
instructions. Without it the documented process fails.
We intend to migrate from urfave/cli to the cobra cli because that
project is better maintained. There are also some technical reasons
which extend into our remote client work.
Signed-off-by: baude <bbaude@redhat.com>
This registry responds differently depending on the platform
accessing it. It also occasionally goes down or returns 404s. Improve
the reliability of the e2e tests by using the registry/image used
for gating pull-requests.
This way, if there's a registry/networking problem, the gating test
will fail and prevent anything else from running. This is a better
failure to have early, rather than wait and need to re-run all the
e2e tests again later.
Signed-off-by: Chris Evich <cevich@redhat.com>
* Make sure that all vendored dependencies are in sync with the code and
the vendor.conf by running `make vendor` with a follow-up status check
of the git tree.
* Vendor ginkgo and gomega to include the test dependencies.
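A minimal sketch of such a check (the exact CI commands are an assumption):
```
$ make vendor
$ test -z "$(git status --porcelain -- vendor/)" || echo "vendor/ is out of sync"
```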
Signed-off-by: Chris Evich <cevich@redhat.com>
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The sig-proxy test creates a FIFO, runs podman with actions
that write to it, then tries reading from the FIFO.
Opening a FIFO for read or write blocks until the other end is
opened for the corresponding write/read. If our podman process
fails for any reason, the test's FIFO open will hang forever.
Solution: open with O_NONBLOCK.
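The blocking behavior is easy to reproduce in a shell (path hypothetical); O_NONBLOCK makes the open return immediately instead:
```
$ mkfifo /tmp/test.fifo
$ cat /tmp/test.fifo    # the open(2) for read blocks here until a writer appears
```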
Signed-off-by: Ed Santiago <santiago@redhat.com>
We are missing the equivalents of the docker system commands
This patch set adds `podman system prune`
and `podman system info`
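For example:
```
$ podman system info     # display host and storage information
$ podman system prune    # remove unused containers, images and other data
```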
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
addition of import and export for the podman-remote client. This includes
the ability to send and receive files between the remote-client and the
"podman" host using an upgraded varlink connection.
Signed-off-by: baude <bbaude@redhat.com>
For whatever reason, this specific test frequently fails on Ubuntu with
an error similar to:
```
Timed out after 1.000s.
Expected process to exit. It did not.
/var/tmp/go/src/github.com/containers/libpod/test/e2e/info_test.go:38
```
This change alters the test behavior to use the `defaultWaitTimeout`
value (90 vs. the former 60 seconds) only for this test.
Signed-off-by: Chris Evich <cevich@redhat.com>
The toolbox project would benefit from a few changes to more closely
resemble the original atomic cli project. Changes made are:
* only pull image for container runlabel if the label exists in the image
* if a container image does not have the desired label, exit with non-zero
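A hedged sketch of the new behavior (image name hypothetical):
```
$ podman container runlabel RUN registry.example.com/my-tool
$ echo $?   # non-zero when the image carries no RUN label
```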
Signed-off-by: baude <bbaude@redhat.com>
we now, by default, only prune dangling images. if --all is passed, we
prune dangling images AND images that do not have an associated container.
also went ahead and enabled the podman-remote image prune side of things.
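For example:
```
$ podman image prune         # remove only dangling images
$ podman image prune --all   # also remove images with no associated container
```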
Fixes: #2192
Signed-off-by: baude <bbaude@redhat.com>
initial enablement of podman-remote version. this includes adding an APIVersion
const that will allow us to check compatibility between host and client when
connections are made.
also added client related information to podman info.
Signed-off-by: baude <bbaude@redhat.com>
Masking main level, image, and container commands that are not yet
implemented for the remote client. As each command is completed, be
sure to unmask it.
Also, masking podman command line switches that are not applicable
to the remote client.
Signed-off-by: baude <bbaude@redhat.com>
When using --pid=host don't try to cover /proc paths, as they are
coming from the /proc bind mounted from the host.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Add the ability to run the integration (ginkgo) suite using
the remote client.
Only the images_test.go file is run right now; all the rest are
isolated with a // +build !remotelinux. As more content is
developed for the remote client, we can unblock the files and
just block single tests as needed.
Signed-off-by: baude <bbaude@redhat.com>
Restoring a container from a checkpoint should give the container the
same IP as before checkpointing. This adds a test to make sure the IP
stays the same.
Signed-off-by: Adrian Reber <areber@redhat.com>
Add support for executing an init binary as PID 1 in a container to
forward signals and reap processes. When the `--init` flag is set for
podman-create or podman-run, the init binary is bind-mounted to
`/dev/init` in the container and "/dev/init --" is prepended to the
container's command.
The default base path of the container-init binary is `/usr/libexec/podman`
while the default binary is catatonit [1]. This default can be changed
permanently via the `init_path` field in the `libpod.conf` configuration
file (which is recommended for packaging) or temporarily via the
`--init-path` flag of podman-create and podman-run.
[1] https://github.com/openSUSE/catatonit
Fixes: #1670
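A hedged example of the flag usage described above:
```
$ podman run --init alpine ps                 # PID 1 is /dev/init, which reaps children and forwards signals
$ podman run --init --init-path /usr/libexec/podman/catatonit alpine true
```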
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Now that the correct distribution CRIU packages are installed the
checkpoint/restore tests should no longer fail. This re-enables the
disabled tests on Fedora.
Signed-off-by: Adrian Reber <areber@redhat.com>
We had two problems with /dev/shm. First, if you mounted the
container read-only, then /dev/shm was mounted read-only as well.
This is a bug: a tmpfs directory should be read/write within
a read-only container.
The second problem is that we were ignoring a user-mounted /dev/shm
from the host.
If the user specified
podman run -d -v /dev/shm:/dev/shm ...
we were dropping this mount and still using the internal mount.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
when testing rootless containers, it is more reliable to stop
a container with a zero timeout than to kill it. We made
this change in non-rootless tests as well. When I/O or CPU is
taxed, it avoids a situation where the kill signal is sent but the
container has not been able to update its status before a subsequent
action occurs.
Signed-off-by: baude <bbaude@redhat.com>
Display the trust policy of the host system. The trust policy is stored in the /etc/containers/policy.json file and defines a scope of registries or repositories.
Signed-off-by: Qi Wang <qiwan@redhat.com>
Since the most recent TWO versions of Fedora are officially supported
upstream, both need to be tested. Implement the concept of a 'prior'
Fedora release in both base-image and cache-image production. Utilize
the produced cache-image to test libpod. Remove F28 testing from PAPR.
Much thanks to @baude @giuseppe for help with this.
Signed-off-by: Chris Evich <cevich@redhat.com>
Add functional tests to start a container from systemd.
This patch will:
- create a systemd unit file to start a redis container
- create the container with `podman create`
- enable the service
- start the container with systemd
- check that the service is actually running
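A hedged sketch of such a unit, as outlined by the steps above (names and paths hypothetical, not necessarily the file used by the test):
```
$ podman create --name redis redis
$ cat > /etc/systemd/system/redis-container.service <<'EOF'
[Unit]
Description=Redis container managed by podman

[Service]
ExecStart=/usr/bin/podman start -a redis
ExecStop=/usr/bin/podman stop -t 10 redis
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
$ systemctl enable --now redis-container.service
$ systemctl is-active redis-container.service
active
```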
Signed-off-by: Emilien Macchi <emilien@redhat.com>
when starting or running a container with --rm, if the container
fails to start (e.g. due to an invalid command), the container should
still get removed.
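A hedged illustration (container name hypothetical):
```
$ podman run --rm --name willfail alpine /no/such/command
$ podman ps -a --filter name=willfail   # empty: the container was removed despite failing to start
```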
Resolves: #1985
Signed-off-by: baude <bbaude@redhat.com>
With rootless containers we cannot really restart an existing container
as we would need to join the mount namespace as well to be able to reuse
the storage, so ensure the container is stopped first.
Closes: https://github.com/containers/libpod/issues/1965
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Allow user to prune unused/unnamed images (the intermediate layer images
left over from building) via podman rmi --prune.
Allow user to prune stopped/exited containers via podman rm --prune.
This should resolve #1910
Signed-off-by: baude <bbaude@redhat.com>
when a user specifies --pod to podman create|run, we should create that pod
automatically. the port bindings from the container are then inherited by
the infra container. this significantly improves the workflow of running
containers inside pods with podman. the user is still encouraged to use
podman pod create to have more granular control of the pod create options.
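A hedged example of the improved workflow (pod name and ports hypothetical):
```
$ podman run -dt --pod webpod -p 8080:80 nginx   # creates pod "webpod" if needed; the infra container inherits the port binding
```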
Signed-off-by: baude <bbaude@redhat.com>
Part of the motivation for 800eb863 (Hooks supports two directories,
process default and override, 2018-09-17, #1487) was [1]:
> We only use this for override. The reason this was caught is people
> are trying to get hooks to work with CoreOS. You are not allowed to
> write to /usr/share... on CoreOS, so they wanted podman to also look
> at /etc, where users and third parties can write.
But we'd also been disabling hooks completely for rootless users. And
even for root users, the override logic was tricky when folks actually
had content in both directories. For example, if you wanted to
disable a hook from the default directory, you'd have to add a no-op
hook to the override directory.
Also, the previous implementation failed to handle the case where
there were hooks defined in the override directory but the default
directory did not exist:
$ podman version
Version: 0.11.2-dev
Go Version: go1.10.3
Git Commit: "6df7409cb5a41c710164c42ed35e33b28f3f7214"
Built: Sun Dec 2 21:30:06 2018
OS/Arch: linux/amd64
$ ls -l /etc/containers/oci/hooks.d/test.json
-rw-r--r--. 1 root root 184 Dec 2 16:27 /etc/containers/oci/hooks.d/test.json
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:31:19-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:31:19-08:00" level=warning msg="failed to load hooks: {}%!(EXTRA *os.PathError=open /usr/share/containers/oci/hooks.d: no such file or directory)"
With this commit:
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /etc/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="added hook /etc/containers/oci/hooks.d/test.json"
time="2018-12-02T21:33:07-08:00" level=debug msg="hook test.json matched; adding to stages [prestart]"
time="2018-12-02T21:33:07-08:00" level=warning msg="implicit hook directories are deprecated; set --hooks-dir="/etc/containers/oci/hooks.d" explicitly to continue to load hooks from this directory"
time="2018-12-02T21:33:07-08:00" level=error msg="container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"process_linux.go:382: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: oh, noes!\\\\n\\\"\""
(I'd set up the hook to error out). You can see that it's silently
ignoring the ENOENT for /usr/share/containers/oci/hooks.d and
continuing on to load hooks from /etc/containers/oci/hooks.d.
When it loads the hook, it also logs a warning-level message
suggesting that callers explicitly configure their hook directories.
That will help consumers migrate, so we can drop the implicit hook
directories in some future release. When folks *do* explicitly
configure hook directories (via the newly-public --hooks-dir and
hooks_dir options), we error out if they're missing:
$ podman --hooks-dir /does/not/exist run --rm docker.io/library/alpine echo 'successful container'
error setting up OCI Hooks: open /does/not/exist: no such file or directory
I've dropped the trailing "path" from the old, hidden --hooks-dir-path
and hooks_dir_path because I think "dir(ectory)" is already enough
context for "we expect a path argument". I consider this name change
non-breaking because the old forms were undocumented.
Coming back to rootless users, I've enabled hooks now. I expect they
were previously disabled because users had no way to avoid
/usr/share/containers/oci/hooks.d which might contain hooks that
required root permissions. But now rootless users will have to
explicitly configure hook directories, and since their default config
is from ~/.config/containers/libpod.conf, it's a misconfiguration if
it contains hooks_dir entries which point at directories with hooks
that require root access. We error out so they can fix their
libpod.conf.
[1]: https://github.com/containers/libpod/pull/1487#discussion_r218149355
Signed-off-by: W. Trevor King <wking@tremily.us>
like containers and images, users would benefit from being able to check
if a pod exists in local storage. if the pod exists, the return code is 0.
if the pod does not exist, the return code is 1. Any other return code
indicates a real error, such as a permissions or runtime failure.
Signed-off-by: baude <bbaude@redhat.com>
We regressed on this at some point. Adding a new test should help
ensure that doesn't happen again.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
The conmon exit command runs inside of a namespace where the
process is running with uid=0. When it launches podman again for the
cleanup, podman does not run in rootless mode since the uid is 0.
Export some more env variables to tell podman we are in rootless
mode.
Closes: https://github.com/containers/libpod/issues/1859
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This adds checkpoint/restore test cases for the newly added options
* --leave-running
* --tcp-established
* --all
* --latest
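Hedged usage sketches for these options (container name hypothetical):
```
$ podman container checkpoint --leave-running --tcp-established mycontainer
$ podman container checkpoint --all
$ podman container restore --latest
```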
Signed-off-by: Adrian Reber <areber@redhat.com>
In some test environments, a mount with shared options does not include relatime
in the mountinfo file. So remove this from the test case.
Signed-off-by: Yiqiao Pu <ypu@redhat.com>
Add an exists subcommand to podman container and podman image that allows
users to verify the existence of a container or image by ID or name. The return
code can be 0 (success), 1 (failed to find), or 125 (failed to work with runtime).
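A hedged illustration of the exit codes (names hypothetical):
```
$ podman container exists mycontainer; echo $?
0
$ podman image exists no-such-image; echo $?
1
```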
Issue #1845
Signed-off-by: baude <bbaude@redhat.com>
Set the root propagation based on the properties of volumes and default
mounts. To maintain compatibility, follow the semantics of Docker. If a
volume is shared, keep the root propagation shared which works for slave
and private volumes too. For slave volumes, it can either be shared or
rshared. Do not change the root propagation for private volumes and
stick with the default.
Fixes: #1834
Signed-off-by: Valentin Rothberg <vrothberg@suse.com>
we need to allow users to expose ports to the host for the purposes
of networking, like a webserver. the port exposure must be done at
the time the pod is created.
strictly speaking, the port exposure occurs on the infra container.
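A hedged example (name and ports hypothetical):
```
$ podman pod create --name web -p 8080:80   # the mapping is held by the pod's infra container
$ podman run -dt --pod web nginx
$ curl -s http://localhost:8080/ >/dev/null && echo reachable
```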
Signed-off-by: baude <bbaude@redhat.com>
The tests can be filtered with --focus and --skip to fit different test
targets. Global options and command options can also be set by exporting
them to ENV to fit different test matrices.
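A hedged sketch (the exact invocation is an assumption):
```
$ ginkgo --focus "podman pull" --skip "remote" test/e2e/
```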
Signed-off-by: Yiqiao Pu <ypu@redhat.com>
We can now remove a paused container by sending it a kill signal while it
is paused. We then unpause the container and it is immediately killed.
Also, reworked how the parallelWorker results are handled to provide a
more consistent approach to how each subcommand implements it. It also
fixes a bug where if one container errors, the error message is duplicated
when printed out.
Signed-off-by: baude <bbaude@redhat.com>
When running integration tests in our CI, we observe a problem where paused containers
are not able to be stopped, and therefore cannot be cleaned up. This leaves dangling mounts
and sometimes zombie conmon processes.
Signed-off-by: baude <bbaude@redhat.com>
Operations like kill, pause, and unpause -- which can operate on one or
more containers -- can greatly benefit from parallelizing their main job (e.g. kill).
In the case of pause and unpause, an --all option was added. pause --all will
pause all **running** containers. And unpause --all will unpause all **paused**
containers.
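For example:
```
$ podman pause --all     # pause every running container
$ podman unpause --all   # unpause every paused container
```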
Signed-off-by: baude <bbaude@redhat.com>
When attempting to restart many containers, we can benefit from making
the restarts parallel. For convenience, two new options are added:
--all attempts to restart all containers
--run-only when used with --all will attempt to restart only running containers
Signed-off-by: baude <bbaude@redhat.com>