Volatile containers are a storage optimization that disables *sync()
syscalls for the container rootfs.
If a container is created with --rm, automatically set the volatile
storage flag, since the container won't persist after a reboot or
machine crash anyway.
[NO TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Migrate the Podman code base over to `common/libimage` which replaces
`libpod/image` and a lot of glue code entirely.
Note that I tried to leave bread crumbs for changed tests.
Miscellaneous changes:
* Some errors yield different messages, which required altering some
tests.
* I fixed some pre-existing issues in the code. Others were marked as
`//TODO`s to prevent the PR from exploding.
* The `NamesHistory` of an image is returned as is from the storage.
Previously, we did some filtering which I think is undesirable.
Instead we should return the data as stored in the storage.
* Touched handlers use the ABI interfaces where possible.
* Local image resolution: previously Podman would match "foo" on
"myfoo". This behaviour has been changed and Podman will now
only match on repository boundaries such that "foo" would match
"my/foo" but not "myfoo". I consider the old behaviour to be a
bug, at the very least an exotic corner case.
* Furthermore, "foo:none" does *not* resolve to a local image "foo"
without a tag anymore. It's a hill I am (almost) willing to die on.
* `image prune` prints the IDs of pruned images. Previously, in some
cases, the names were printed instead. The API clearly states ID,
so we should stick to it.
* Compat endpoint image removal with _force_ deletes the entire image,
not only the specified tag.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Some packages used by the remote client imported the libpod package.
This is not wanted because it adds unnecessary bloat to the client and
also causes problems with platform-specific code (Linux only); see #9710.
The solution is to move the used functions/variables into extra packages
which do not import libpod.
This change shrinks the remote client by more than 6MB compared to the
current master.
[NO TESTS NEEDED]
I have no idea how to test this properly but with #9710 the cross
compile should fail.
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
The --trace flag helped analyze Podman code in the early stages. However,
it's contributing to dependency and binary bloat. The standard Go
tooling can also help with profiling, so let's turn `--trace` into a NOP.
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Unlocking an already unlocked lock is a panic. As such, we have
to make sure that the deferred c.lock.Unlock() in
c.StopWithTimeout() always runs on a locked container. There was
a case in c.stop() where we could return an error after we unlock
the container to stop it, but before we re-lock it - thus
allowing for a double-unlock to occur. Fix the error return to
not happen until after the lock has been re-acquired.
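A compact sketch of the pattern (simplified, not the actual libpod code): the lock is released around the slow runtime call and must be re-acquired before *any* return, so the caller's deferred Unlock always pairs with a held lock.
```
package main

import (
	"errors"
	"fmt"
	"sync"
)

type container struct {
	lock sync.Mutex
}

// runtimeStop stands in for the call to the OCI runtime (hypothetical).
func runtimeStop() error { return errors.New("runtime failed to stop container") }

func (c *container) stop() error {
	c.lock.Unlock()          // release while the runtime does the slow work
	stopErr := runtimeStop() // may fail
	c.lock.Lock()            // re-acquire *before* any return ...
	if stopErr != nil {
		return stopErr // ... so returning the error here is safe again
	}
	return nil
}

func (c *container) StopWithTimeout() error {
	c.lock.Lock()
	defer c.lock.Unlock() // always runs on a locked container
	return c.stop()
}

func main() {
	c := &container{}
	fmt.Println(c.StopWithTimeout())
}
```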
Fixes #9615
Signed-off-by: Matthew Heon <mheon@redhat.com>
Traditionally, path resolution for containers has been performed on
the *host*: relative to the container's mount point, or relative to
specified bind mounts or volumes.
While this works nicely for non-running containers, it poses a problem
for running ones. In that case, certain kinds of mounts (e.g., tmpfs)
will not resolve correctly. A tmpfs is held in memory and hence cannot
be resolved relative to the container's mount point. A copy operation
will succeed but the data will not show up inside the container.
To support these kinds of mounts, we need to join the *running*
container's mount namespace (and PID namespace) when copying.
Note that this change implies moving the copy and stat logic into
`libpod` since we need to keep the container locked to avoid race
conditions. The immediate benefit is that all logic is now inside
`libpod`; the code isn't scattered anymore.
Further note that Docker does not support copying to tmpfs mounts.
Tests have been extended to cover *both* path resolutions for running
and created containers. New tests have been added to exercise the
tmpfs-mount case.
For the record: Some tests could be improved by using `start -a` instead
of a start-exec sequence. Unfortunately, `start -a` is flaky in the CI
which forced me to use the more expensive start-exec option.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
We missed bumping the go module, so let's do it now :)
* Automated go code with github.com/sirkon/go-imports-rename
* The rest manually, via `vgrep podman/v2`
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Instead of using the container's mountpoint as the base of the
chroot and indexing from there by the volume directory, use the
full path of what we want to copy as the base of the chroot and
copy everything in it. This resolves the bug, ends up
being a bit simpler code-wise (no string concatenation, as we
already have the full path calculated for other checks), and
seems more understandable than trying to resolve things on the
destination side of the copy-up.
Fixes #9354
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This one is rather bizarre because it triggers only on some
systems. I've included a CI test, for example, but I'm 99% sure
we use images in CI that have volumes over empty directories, and
the earlier patch to change copy-up implementation passed CI
without complaint.
I can reproduce this on a stock F33 VM, but that's the only place
I have been able to see it.
Regardless, the issue: under certain as-yet-unidentified
environmental conditions, the copier.Get method will return an
ENOENT attempting to stream a directory that is empty. Work
around this by avoiding the copy altogether in this case.
Signed-off-by: Matthew Heon <mheon@redhat.com>
The old copy-up implementation was very unhappy with symlinks,
which could cause containers to fail to start for unclear reasons
when a directory we wanted to copy-up contained one. Rewrite to
use the Buildah Copier, which is more recent and should be both
safer and less likely to blow up over links.
At the same time, fix a deadlock in copy-up for volumes requiring
mounting - the Mountpoint() function tried to take the
already-acquired volume lock.
Fixes #6003
Signed-off-by: Matthew Heon <mheon@redhat.com>
Implement podman secret create, inspect, ls, rm
Implement podman run/create --secret
Secrets are blobs of data that are sensitive.
Currently, the only secret driver supported is filedriver, which means creating a secret stores it base64-encoded and unencrypted in a file.
After creating a secret, a user can use the --secret flag to expose the secret inside the container at /run/secrets/[secretname]
This secret will not be committed to an image on a podman commit.
Signed-off-by: Ashley Cui <acui@redhat.com>
This implements support for mounting and unmounting volumes
backed by volume plugins. Support for actually retrieving
plugins requires a pull request to land in containers.conf and
then be vendored here, and as such is not yet ready. Given
this, this code is only compile tested. However, the code for
everything past retrieving the plugin has been written - there is
support for creating, removing, mounting, and unmounting volumes,
which should allow full functionality once the c/common PR is
merged.
A major change is the signature of the MountPoint function for
volumes, which now, by necessity, returns an error. Named volumes
managed by a plugin do not have a mountpoint we control; instead,
it is managed entirely by the plugin. As such, we need to cache
the path in the DB, and calls to retrieve it now need to access
the DB (and may fail as such).
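A rough sketch of the new shape (types and field names assumed, not the actual libpod code): for plugin-managed volumes the mountpoint is read back from the database, and that lookup can fail, hence the added error return.
```
package main

import (
	"errors"
	"fmt"
)

type volumeConfig struct {
	Driver     string
	MountPoint string // path we manage ourselves for local volumes
}

type volumeState struct {
	MountPoint string // path reported by the plugin, cached in the DB
}

type Volume struct {
	config volumeConfig
	state  volumeState
}

// refreshFromDB stands in for re-reading volume state from the database.
func (v *Volume) refreshFromDB() error {
	if v.state.MountPoint == "" {
		return errors.New("no mountpoint cached for plugin volume")
	}
	return nil
}

// MountPoint now returns an error: plugin-managed volumes require a DB read.
func (v *Volume) MountPoint() (string, error) {
	if v.config.Driver != "" && v.config.Driver != "local" {
		if err := v.refreshFromDB(); err != nil {
			return "", err
		}
		return v.state.MountPoint, nil
	}
	return v.config.MountPoint, nil
}

func main() {
	v := &Volume{config: volumeConfig{Driver: "local", MountPoint: "/volumes/foo/_data"}}
	mp, err := v.MountPoint()
	fmt.Println(mp, err)
}
```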
Notably absent is support for SELinux relabelling and chowning
these volumes. Given that we don't manage the mountpoint for
these volumes, I am extremely reluctant to try and modify it - we
could easily break the plugin trying to chown or relabel it.
Also, we had no less than *5* separate implementations of
inspecting a volume floating around in pkg/infra/abi and
pkg/api/handlers/libpod. And none of them used volume.Inspect(),
the only correct way of inspecting volumes. Remove them all and
consolidate to using the correct way. Compat API is likely still
doing things the wrong way, but that is an issue for another day.
Fixes #4304
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Podman defers stopping the container to the runtime, which can take some
time. Keeping the lock while waiting for the runtime to complete the
stop procedure prevents other commands from acquiring the lock, as shown
in #8501.
To improve the user experience, release the lock before invoking the
runtime, and re-acquire the lock when the runtime is finished. Also
introduce an intermediate "stopping" state to properly distinguish such
containers from "stopped" ones.
Fixes: #8501
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Our users are missing certain warning messages that would
make debugging issues with Podman easier.
For example, if you do a podman build with a Containerfile
that contains the SHELL directive, the directive is silently
ignored.
If you run with the log level warn, you get a warning message explaining
what happened.
$ podman build --no-cache -f /tmp/Containerfile1 /tmp/
STEP 1: FROM ubi8
STEP 2: SHELL ["/bin/bash", "-c"]
STEP 3: COMMIT
--> 7a207be102a
7a207be102aa8993eceb32802e6ceb9d2603ceed9dee0fee341df63e6300882e
$ podman --log-level=warn build --no-cache -f /tmp/Containerfile1 /tmp/
STEP 1: FROM ubi8
STEP 2: SHELL ["/bin/bash", "-c"]
STEP 3: COMMIT
WARN[0000] SHELL is not supported for OCI image format, [/bin/bash -c] will be ignored. Must use `docker` format
--> 7bd96fd25b9
7bd96fd25b9f755d8a045e31187e406cf889dcf3799357ec906e90767613e95f
These messages will no longer be lost when we default to the WARNing level.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This makes things a lot more clear - if we are actually joining a
CNI network, we are guaranteed to get a non-zero length list of
networks.
We do, however, need to know, when inspecting containers, whether the
network we are joining is the default network, as that determines how we
populate the response struct. To handle this, add a bool to
indicate that the network listed was the default network, and
only the default network.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Convert the existing network aliases set/remove code to network
connect and disconnect. We can no longer modify aliases for an
existing network, but we can add and remove entire networks. As
part of this, we need to add a new function to retrieve current
aliases the container is connected to (we had a table for this
as of the first aliases PR, but it was not externally exposed).
At the same time, remove all deconflicting logic for aliases.
Docker does absolutely no checks of this nature, and allows two
containers to have the same aliases, aliases that conflict with
container names, etc - it's just left to DNS to return all the
IP addresses, and presumably we round-robin from there? Most
tests for the existing code had to be removed because of this.
Convert all uses of the old container config.Networks field,
which previously included all networks in the container, to use
the new DB table. This ensures we actually get an up-to-date list
of in-use networks. Also, add network aliases to the output of
`podman inspect`.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When making containers, we want to lock all named volumes we are
adding the container to, to ensure they aren't removed from under
us while we are working. Unfortunately, this code did not account
for a container having the same volume mounted in multiple places,
so it could deadlock. Add a map to ensure that we don't lock the
same name more than once to resolve this.
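A minimal sketch of the fix (simplified types, not the actual libpod code): deduplicate volume names before locking, so a volume mounted at several paths is locked exactly once.
```
package main

import (
	"fmt"
	"sync"
)

type namedVolume struct {
	name string
	lock sync.Mutex
}

// lockVolumes locks each distinct volume once and returns a matching unlock.
func lockVolumes(vols []*namedVolume) (unlock func()) {
	seen := make(map[string]struct{})
	var locked []*namedVolume
	for _, v := range vols {
		if _, ok := seen[v.name]; ok {
			continue // same volume mounted at another path; already locked
		}
		seen[v.name] = struct{}{}
		v.lock.Lock()
		locked = append(locked, v)
	}
	return func() {
		for _, v := range locked {
			v.lock.Unlock()
		}
	}
}

func main() {
	data := &namedVolume{name: "data"}
	// The same named volume mounted at two container paths no longer deadlocks.
	unlock := lockVolumes([]*namedVolume{data, data})
	defer unlock()
	fmt.Println("all volumes locked once")
}
```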
Fixes #8221
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Most of the built-in Go functions like os.Stat and
os.Open report errors that include the filesystem object's
path. We should not wrap these errors and put the file path
in a second time, causing stuttering of errors when they
get presented to the user.
This patch tries to clean up a bunch of these errors.
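A small illustration of the stutter being removed (output shown in the comments is approximate):
```
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
)

func main() {
	path := "/does/not/exist"
	_, err := os.Stat(path)

	// os.Stat already embeds the path:
	//   stat /does/not/exist: no such file or directory
	fmt.Println(err)

	// Wrapping it with the path again stutters:
	//   unable to stat /does/not/exist: stat /does/not/exist: no such file or directory
	fmt.Println(errors.Wrapf(err, "unable to stat %s", path))
}
```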
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a new "image" mount type to `--mount`. The source of the mount is
the name or ID of an image. The destination is the path inside the
container. Image mounts further support an optional `rw,readwrite`
parameter which, if set to "true", makes the mount writable inside
the container. Note that no changes are propagated to the image mount
on the host (which in any case is read-only).
Mounts are overlay mounts. To support read-only overlay mounts, vendor
a non-release version of Buildah.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
We do not populate the hostname field with the IP Address
when running within a user namespace.
Fixes https://github.com/containers/podman/issues/7490
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
In case os.Open[File], os.Mkdir[All], ioutil.ReadFile and the like
fail, the error message already contains the file name and the
operation that fails, so there is no need to wrap the error with
something like "open %s failed".
While at it
- replace a few places with os.Open, ioutil.ReadAll with
ioutil.ReadFile.
- replace errors.Wrapf with errors.Wrap for cases where there
are no %-style arguments.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
When joining an existing container user namespace, read the existing
mappings so the storage can be created with the correct ownership.
Closes: https://github.com/containers/podman/issues/7547
Signed-off-by: Giuseppe Scrivano <giuseppe@scrivano.org>
Usage:
```
$ podman network create foo
$ podman run -d --name web --hostname web --network foo nginx:alpine
$ podman run --rm --network foo alpine wget -O - http://web.dns.podman
Connecting to web.dns.podman (10.88.4.6:80)
...
<h1>Welcome to nginx!</h1>
...
```
See contrib/rootless-cni-infra for the design.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
We added code to create a `/etc/passwd` file that we bind-mount
into the container in some cases (most notably,
`--userns=keep-id` containers). This, unfortunately, was not
persistent, so user-added users would be dropped on container
restart. Changing where we store the file should fix this.
Further, we want to ensure that lookups of users in the container
use the right /etc/passwd if we replaced it. There was already
logic to do this, but it only worked for user-added mounts; it's
easy enough to alter it to use our mounts as well.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This was inspired by https://github.com/cri-o/cri-o/pull/3934 and
much of the logic for it is contained there. However, in brief,
a named return called "err" can cause lots of code confusion and
encourages using the wrong err variable in defer statements,
which can make them work incorrectly. Using a separate name which
is not used elsewhere makes it very clear what the defer should
be doing.
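A hedged illustration of the hazard (not code from this repository): with a named return called plain `err`, a shadowed `err` in a nested block can leave the deferred cleanup keying on a value that never reflects the failure; a dedicated name like `retErr` makes the intent obvious.
```
package main

import (
	"errors"
	"fmt"
)

func doStep() error { return errors.New("step failed") }

// Risky: the ":=" inside the loop shadows the named return, so the deferred
// "success" path runs even though a step failed.
func createBad() (err error) {
	defer func() {
		if err == nil {
			fmt.Println("commit: create looks successful")
		}
	}()
	for i := 0; i < 1; i++ {
		err := doStep() // shadows the named return
		if err != nil {
			fmt.Println("warning:", err)
			break // meant to abort, but the outer err was never set
		}
	}
	return // outer err is still nil
}

// Clearer: a dedicated name makes it obvious which error the defer keys on.
func createGood() (retErr error) {
	defer func() {
		if retErr == nil {
			fmt.Println("commit: create looks successful")
		}
	}()
	if err := doStep(); err != nil {
		return err // explicitly propagates into retErr
	}
	return nil
}

func main() {
	_ = createBad()
	_ = createGood()
}
```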
As part of this, remove a large number of named returns that were
not used anywhere. Most of them were once needed, but are no
longer necessary after previous refactors (but were accidentally
retained).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
--sdnotify container|conmon|ignore
With "conmon", we send the MAINPID, and clear the NOTIFY_SOCKET so the OCI
runtime doesn't pass it into the container. We also send "ready" when the
OCI runtime finishes, to advertise the service as ready.
With "container", we send the MAINPID, and leave the NOTIFY_SOCKET so the OCI
runtime passes it into the container for initialization, letting the container advertise further metadata.
This is the default, and is closest to the behavior Podman has had in the past.
The "ignore" option removes NOTIFY_SOCKET from the environment, so neither podman nor
any child processes will talk to systemd.
This removes the need for hardcoded CID and PID files in the command line, and
the PIDFile directive, as the pid is advertised directly through sd-notify.
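A rough sketch of the "conmon" policy, not Podman's actual implementation (socket handling simplified): remember the NOTIFY_SOCKET path, withhold it from the runtime and container, and let Podman itself send MAINPID/READY once the container is up.
```
package main

import (
	"fmt"
	"net"
	"os"
)

// withholdNotifySocket remembers and then clears NOTIFY_SOCKET so neither the
// OCI runtime nor the container ever sees it ("ignore" would stop here).
func withholdNotifySocket() string {
	sock := os.Getenv("NOTIFY_SOCKET")
	os.Unsetenv("NOTIFY_SOCKET")
	return sock
}

// notifyReady sends MAINPID and READY=1 on the saved sd_notify datagram socket.
func notifyReady(socketPath string, mainPID int) error {
	if socketPath == "" {
		return nil // not running under systemd
	}
	conn, err := net.Dial("unixgram", socketPath)
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = fmt.Fprintf(conn, "MAINPID=%d\nREADY=1", mainPID)
	return err
}

func main() {
	sock := withholdNotifySocket()
	// ... create and start the container via the OCI runtime here ...
	if err := notifyReady(sock, os.Getpid()); err != nil {
		fmt.Fprintln(os.Stderr, "sd-notify:", err)
	}
}
```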
Signed-off-by: Joseph Gooch <mrwizard@dok.org>
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Move the chown for newly created volumes after the spec generation so
the correct UID/GID are known.
Closes: https://github.com/containers/libpod/issues/5698
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Pass the mappings from the container configuration to the storage
when creating the container so that the correct mappings can be
configured.
Regression introduced with Podman 2.0.
Closes: https://github.com/containers/libpod/issues/6735
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This came out of a conversation with Valentin about
systemd-managed Podman. He discovered that unit files did not
properly handle cases where Conmon was dead - the ExecStopPost
`podman rm --force` line was not actually removing the container,
but interestingly, adding a `podman cleanup --rm` line would
remove it. Both of these commands do the same thing (minus the
`podman cleanup --rm` command not force-removing running
containers).
Without a running Conmon instance, the container process is still
running (assuming you killed Conmon with SIGKILL and it had no
chance to kill the container it managed), but you can still kill
the container itself with `podman stop` - Conmon is not involved,
only the OCI Runtime. (`podman rm --force` and `podman stop` use
the same code to kill the container). The problem comes when we
want to get the container's exit code - we expect Conmon to make
us an exit file, which it's obviously not going to do, being
dead. The first `podman rm` would fail because of this, but
importantly, it would (after failing to retrieve the exit code
correctly) set container status to Exited, so that the second
`podman cleanup` process would succeed.
To make sure the first `podman rm --force` succeeds, we need to
catch the case where Conmon is already dead and, instead of
waiting for an exit file that will never come, immediately set
the Stopped state and return an error that can be caught and
handled.
Signed-off-by: Matthew Heon <mheon@redhat.com>
With APIv2, we cannot guarantee that exec sessions will be
removed cleanly on exit (Docker does not include an API for
removing exec sessions, instead using a timer-based reaper which
we cannot easily replicate). This is part 1 of a 2-part approach
to providing a solution to this. This ensures that exec sessions
will be reaped, at the very least, on container restart, which
takes care of any that were not properly removed during the run
of a container.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Some runtimes (e.g. Kata containers) seem to object to having us
unmount storage before the container is removed from the runtime.
This is an easy fix (change the order of operations in cleanup)
and seems to make more sense than the way we were doing things.
Signed-off-by: Matthew Heon <mheon@redhat.com>
In order to better support Kata containers and systemd containers,
container-selinux has added new types. Podman should execute the
container with an SELinux process label matching the container type.
Traditional container process: container_t
KVM container process: container_kvm_t
PID 1 init process: container_init_t
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We previously attempted to work within CNI to do this, without
success. So let's do it manually, instead. We know where the
files should live, so we can remove them ourselves instead. This
solves issues around sudden reboots where containers do not have
time to fully tear themselves down, and leave IP address
allocations which, for various reasons, are not stored in tmpfs
and persist through reboot.
Fixes #5433
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
As part of the rework of exec sessions, we need to address them
independently of containers. In the new API, we need to be able
to fetch them by their ID, regardless of what container they are
associated with. Unfortunately, our existing exec sessions are
tied to individual containers; there's no way to tell what
container a session belongs to and retrieve it without getting
every exec session for every container.
This adds a pointer to the container an exec session is
associated with to the database. The sessions themselves are
still stored in the container.
Exec-related APIs have been restructured to work with the new
database representation. The originally monolithic API has been
split into a number of smaller calls to allow more fine-grained
control of lifecycle. Support for legacy exec sessions has been
retained, but in a deprecated fashion; we should remove this in
a few releases.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Before, we were getting the exit code from the exit file: we waited an arbitrary amount of time (5 seconds) for the file and segfaulted if we didn't find it. Instead, we should be more certain that conmon has sent the exit code. Luckily, it sends the exit code along the sync pipe fd, so we can read it from there.
Adapt the ExecContainer interface to pass along a channel for getting the PID and exit code from conmon, so both can be read from the pipe.
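A rough sketch of the idea (message shape assumed, not conmon's exact protocol): decode numeric payloads from the sync pipe and hand them to the caller over a channel - first the PID, then the exit code.
```
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// syncInfo approximates the messages conmon writes on its sync pipe.
type syncInfo struct {
	Data    int    `json:"data"`
	Message string `json:"message,omitempty"`
}

// readConmonPipeData decodes one message and forwards its payload.
func readConmonPipeData(pipe io.Reader, dataChan chan<- int) error {
	var si syncInfo
	if err := json.NewDecoder(pipe).Decode(&si); err != nil {
		return fmt.Errorf("reading conmon sync pipe: %v", err)
	}
	dataChan <- si.Data
	return nil
}

func main() {
	pidChan := make(chan int, 1)
	// In real code this is the read end of conmon's sync pipe fd.
	pipe := strings.NewReader(`{"data": 12345}`)
	if err := readConmonPipeData(pipe, pidChan); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("conmon reported:", <-pidChan)
}
```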
Signed-off-by: Peter Hunt <pehunt@redhat.com>
We can easily tell if we're going to deadlock by comparing lock
IDs before actually taking the lock. Add a few checks for this in
common places where deadlocks might occur.
This does not yet cover pod operations, where detection is more
difficult (and costly) due to the number of locks being involved
being higher than 2.
Also, add some error wrapping on the Podman side, so we can tell
people to use `system renumber` when it occurs.
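A minimal sketch of the check (field names assumed): compare the allocated lock IDs before taking the second lock, and point the user at `podman system renumber` when they collide.
```
package main

import "fmt"

// checkDependencyLock fails fast instead of deadlocking when a container and
// one of its dependencies were allocated the same lock ID.
func checkDependencyLock(ctrLockID, depLockID uint32) error {
	if ctrLockID == depLockID {
		return fmt.Errorf("container and dependency share lock ID %d: please run `podman system renumber` to resolve", ctrLockID)
	}
	return nil
}

func main() {
	if err := checkDependencyLock(7, 7); err != nil {
		fmt.Println(err)
	}
}
```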
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When using a user namespace, dnsname responses from CNI were not making it into the container's /etc/resolv.conf because of a timing issue. This corrects that behavior.
Fixes: #5256
Signed-off-by: Brent Baude <bbaude@redhat.com>
When Docker performs a copy up, it first verifies that the volume
being copied into is empty; thus, for volumes that have been
modified elsewhere (e.g. manually copying into them), the copy up
will not be performed at all. Duplicate this behavior in Podman
by checking if the volume is empty before copying.
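A minimal sketch of the check (not the exact libpod code): only perform the copy-up when the volume's mountpoint is still empty.
```
package main

import (
	"fmt"
	"io/ioutil"
)

// volumeIsEmpty reports whether the volume's mountpoint contains no entries;
// copy-up is skipped for volumes that already have content.
func volumeIsEmpty(mountpoint string) (bool, error) {
	entries, err := ioutil.ReadDir(mountpoint)
	if err != nil {
		return false, err
	}
	return len(entries) == 0, nil
}

func main() {
	empty, err := volumeIsEmpty("/var/lib/containers/storage/volumes/data/_data")
	fmt.Println(empty, err)
}
```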
Furthermore, move setting copyup to false further up. This will
prevent a potential race where copy up could happen more than
once if Podman was killed after some files had been copied but
before the DB was updated.
This resolves CVE-2020-1726.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This makes restart a bit slower for root containers, but it does
make it more consistent with `podman stop` and `podman start` on
a container. Importantly, `podman restart` will now recreate
firewall rules if they were somehow purged.
Fixes #5051
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
`gocritic` is a powerful linter that helps in preventing certain kinds
of errors as well as enforcing a coding style.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Create service command
Use cd cmd/service && go build .
$ systemd-socket-activate -l 8081 cmd/service/service &
$ curl http://localhost:8081/v1.24/images/json
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Correct Makefile
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Two more stragglers
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Report errors back as http headers
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Split out handlers, updated output
Output aligned to docker structures
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Refactored routing, added more endpoints and types
* Encapsulated all the routing information in the handler_* files.
* Added more serviceapi/types, including podman additions. See Info
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Cleaned up code, implemented info content
* Move Content-Type check into serviceHandler
* Custom 404 handler showing the url, mostly for debugging
* Refactored images: better method names and explicit http codes
* Added content to /info
* Added podman fields to Info struct
* Added Container struct
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Add a bunch of endpoints
containers: stop, pause, unpause, wait, rm
images: tag, rmi, create (pull only)
Signed-off-by: baude <bbaude@redhat.com>
Add even more handlers
* Add serviceapi/Error() to improve error handling
* Better support for API return payloads
* Renamed unimplemented to unsupported; these are generic endpoints
we don't intend to ever support. Swarm is broken out since it uses
different HTTP codes to signal that the node is not in a swarm.
* Added more types
* API Version broken out so it can be validated in the future
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Refactor to introduce ServiceWriter
Signed-off-by: Jhon Honce <jhonce@redhat.com>
populate pods endpoints
/libpod/pods/..
exists, kill, pause, prune, restart, remove, start, stop, unpause
Signed-off-by: baude <bbaude@redhat.com>
Add components to Version, fix Error body
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Add images pull output, fix swarm routes
* docker-py tests/integration/api_client_test.py pass 100%
* docker-py tests/integration/api_image_test.py pass 4/16
+ Test failures include services podman does not support
Signed-off-by: Jhon Honce <jhonce@redhat.com>
pods endpoint submission 2
add create and others; only top and stats are left.
Signed-off-by: baude <bbaude@redhat.com>
Update pull image to work from empty registry
Signed-off-by: Jhon Honce <jhonce@redhat.com>
pod create and container create
first pass at pod and container create. the container create does not
quite work yet but it is very close. pod create needs a partial
rewrite. also broke off the DELETE (rm/rmi) into specific handler funcs.
Signed-off-by: baude <bbaude@redhat.com>
Add docker-py demos, GET .../containers/json
* Update serviceapi/types to reflect libpod not podman
* Refactored removeImage() to provide non-streaming return
Signed-off-by: Jhon Honce <jhonce@redhat.com>
create container part2
finished minimal config needed for create container. started demo.py
for upcoming talk
Signed-off-by: baude <bbaude@redhat.com>
Stop server after honoring request
* Remove casting for method calls
* Improve WriteResponse()
* Update Container API type to match docker API
Signed-off-by: Jhon Honce <jhonce@redhat.com>
fix namespace assumptions
cleaned up namespace issues with libpod.
Signed-off-by: baude <bbaude@redhat.com>
wip
Signed-off-by: baude <bbaude@redhat.com>
Add sliding window when shutting down server
* Added a Timeout rather than closing down service on each call
* Added gorilla/schema dependency for Decode'ing query parameters
* Improved error handling
* Container logs returned and multiplexed for stdout and stderr
* .../containers/{name}/logs?stdout=True&stderr=True
* Container stats
* .../containers/{name}/stats
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Improve error handling
* Add check for at least one std stream required for /containers/{id}/logs
* Add check for state in /containers/{id}/top
* Fill in more fields for /info
* Fixed error checking in service start code
Signed-off-by: Jhon Honce <jhonce@redhat.com>
get rest of image tests for pass
Signed-off-by: baude <bbaude@redhat.com>
linting our content
Signed-off-by: baude <bbaude@redhat.com>
more linting
Signed-off-by: baude <bbaude@redhat.com>
more linting
Signed-off-by: baude <bbaude@redhat.com>
pruning
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]apiv2 pods
migrate from using args in the url to using a json struct in body for
pod create.
Signed-off-by: baude <bbaude@redhat.com>
fix handler_images prune
prune's api changed slightly to deal with filters.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]enabled base container create tests
enabling the base container create tests which allow us to get more into
the stop, kill, etc tests. many new tests now pass.
Signed-off-by: baude <bbaude@redhat.com>
serviceapi errors: append error message to API message
I dearly hope this is not breaking any other tests, but debugging
"Internal Server Error" is not helpful to any user. In case it
breaks tests, we can revert the commit - that's why it's a small one.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
serviceAPI: add containers/prune endpoint
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
add `service` make target
Also remove the non-functional sub-Makefile.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
add make targets for testing the service
* `sudo make run-service` for running the service.
* `DOCKERPY_TEST="tests/integration/api_container_test.py::ListContainersTest" \
make run-docker-py-tests`
for running a specific test. Run all tests by leaving the env
variable empty.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Split handlers and server packages
The files were split to help contain bloat. The api/server package will
contain all code related to the functioning of the server while
api/handlers will have all the code related to implementing the end
points.
api/server/register_* will contain the methods for registering
endpoints. Additionally, they will have the comments for generating the
swagger spec file.
See api/handlers/version.go for a small example handler,
api/handlers/containers.go contains much more complex handlers.
Signed-off-by: Jhon Honce <jhonce@redhat.com>
[CI:DOCS]enabled more tests
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]libpod endpoints
small refactor for libpod inclusion and began adding endpoints.
Signed-off-by: baude <bbaude@redhat.com>
Implement /build and /events
* Include crypto libraries for future ssh work
Signed-off-by: Jhon Honce <jhonce@redhat.com>
[CI:DOCS]more image implementations
convert from using for to query structs among other changes including
new endpoints.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add bindings for golang
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add volume endpoints for libpod
create, inspect, ls, prune, and rm
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]apiv2 healthcheck enablement
wire up container healthchecks for the api.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]Add mount endpoints
via the api, allow ability to mount a container and list container
mounts.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]Add search endpoint
add search endpoint with golang bindings
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]more apiv2 development
misc population of methods, etc
Signed-off-by: baude <bbaude@redhat.com>
rebase cleanup and epoch reset
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add more network endpoints
also, add some initial error handling and convenience functions for
standard endpoints.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]use helper funcs for bindings
use the methods developed to make writing bindings less duplicative and
easier to use.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add return info for prereview
begin to add return info and status codes for errors so that we can
review the apiv2
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]first pass at adding swagger docs for api
Signed-off-by: baude <bbaude@redhat.com>
Currently, if a user requests the size on a container (inspect --size -t container),
the SizeRw does not show up if the value is 0. That is because SizeRw in InspectContainerData is
defined as an int64 with omitempty.
We do want to display it even if the value is empty. I have changed the type of SizeRw to be a pointer to an int64 instead of an int64. It will allow us to distinguish the empty value from the missing value.
I updated the test "podman inspect container with size" to ensure we check that SizeRw is displayed correctly.
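A hedged sketch of the change (struct trimmed down to the size fields): with a plain int64 and omitempty a genuine zero is dropped from the JSON, whereas a *int64 distinguishes "not requested" (nil) from "zero bytes".
```
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down view of the inspect type; only the size fields are shown.
type inspectSizes struct {
	SizeRw     *int64 `json:"SizeRw,omitempty"`     // pointer: nil means "not requested"
	SizeRootFs int64  `json:"SizeRootFs,omitempty"` // plain int64: a real 0 would be omitted
}

func main() {
	zero := int64(0)
	withSize, _ := json.Marshal(inspectSizes{SizeRw: &zero, SizeRootFs: 0})
	withoutSize, _ := json.Marshal(inspectSizes{})

	fmt.Println(string(withSize))    // {"SizeRw":0} - the zero survives
	fmt.Println(string(withoutSize)) // {} - nothing requested, nothing shown
}
```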
Closes #4744
Signed-off-by: NevilleC <neville.cain@qonto.eu>
When a container is in a PID namespace, it is enough to send
the stop signal to PID 1 of the namespace; only send signals
to all processes in the container when the container is not in
a PID namespace.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We currently rely on exec sessions being removed from the state
by the Exec() API itself, on detecting the session stopping. This
is not a reliable method, though. The Podman frontend for exec
could be killed before the session ended, or another Podman
process could be holding the lock and prevent the update (most
notably in `run --rm`, when a container with an active exec
session is stopped).
To resolve this, add a function to reap active exec sessions from
the state, and use it on cleanup (to clear sessions after the
container stops) and remove (to do the same when --rm is passed).
This is a bit more complicated than it ought to be because Kata
and company exist, and we can't guarantee the exec session has a
PID on the host, so we have to plumb this through to the OCI
runtime.
Fixes #4666
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When Libpod removes a container, there is the possibility that
removal will not fully succeed. The most notable problems are
storage issues, where the container cannot be removed from
c/storage.
When this occurs, we were faced with a choice. We can keep the
container in the state, appearing in `podman ps` and available for
other API operations, but likely unable to do any of them as it's
been partially removed. Or we can remove it very early and clean
up after it's already gone. We have, until now, used the second
approach.
The problem that arises is intermittent problems removing
storage. We end up removing a container, failing to remove its
storage, and ending up with a container permanently stuck in
c/storage that we can't remove with the normal Podman CLI, can't
use the name of, and generally can't interact with. A notable
cause is when Podman is hit by a SIGKILL midway through removal,
which can consistently cause `podman rm` to fail to remove
storage.
We now add a new state for containers that are in the process of
being removed, ContainerStateRemoving. We set this at the
beginning of the removal process. It notifies Podman that the
container cannot be used anymore, but preserves it in the DB
until it is fully removed. This will allow Remove to be run on
these containers again, which should successfully remove storage
if it fails.
Fixes #3906
Signed-off-by: Matthew Heon <mheon@redhat.com>
In conmon 2.0.3, we add another fifo to handle window resizing. This needs to be cleaned up for commands like restore, where the same path is used.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
We have a lot of checks for container state scattered throughout
libpod. Many of these need to ensure the container is in one of a
given set of states so an operation may safely proceed.
Previously there was no set way of doing this, so we'd use unique
boolean logic for each one. Introduce a helper to standardize
state checks.
Note that this is only intended to replace checks for multiple
states. A simple check for one state (ContainerStateRunning, for
example) should remain a straight equality, and not use this new
helper.
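A minimal sketch of such a helper (simplified types): report whether the container is in any of the given states, replacing hand-rolled boolean chains.
```
package main

import "fmt"

type ContainerState int

const (
	ContainerStateCreated ContainerState = iota
	ContainerStateRunning
	ContainerStatePaused
	ContainerStateStopped
)

type Container struct {
	state ContainerState
}

// ensureState returns true if the container is in one of the given states.
func (c *Container) ensureState(states ...ContainerState) bool {
	for _, s := range states {
		if s == c.state {
			return true
		}
	}
	return false
}

func main() {
	c := &Container{state: ContainerStateRunning}
	// Instead of: state == Running || state == Paused || ...
	if !c.ensureState(ContainerStateRunning, ContainerStatePaused) {
		fmt.Println("container must be running or paused")
		return
	}
	fmt.Println("ok to proceed")
}
```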
Signed-off-by: Matthew Heon <mheon@redhat.com>
When a container is created with a given OCI runtime, but then it
is uninstalled or removed from the configuration file, Libpod
presently reacts very poorly. The EvictContainer code can
potentially remove these containers, but we still can't see them
in `podman ps` (aside from the massive logrus.Errorf messages
they create).
Providing a minimal OCI runtime implementation for missing
runtimes allows us to behave better. We'll be able to retrieve
containers from the database, though we still pop up an error for
each missing runtime. For containers which are stopped, we can
remove them as normal.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
For future work, we need multiple implementations of the OCI
runtime, not just a Conmon-wrapped runtime matching the runc CLI.
As part of this, do some refactoring on the interface for exec
(move to a struct, not a massive list of arguments). Also, add
'all' support to Kill and Stop (supported by runc and used a bit
internally for removing containers).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When runc returns an error about not being v2 compliant, catch the error
and log an actionable message for users via logrus.
Signed-off-by: baude <bbaude@redhat.com>
CNI expects that a DELETE be run before re-creating container
networks. If a reboot occurs quickly enough that containers can't
stop and clean up, that DELETE never happens, and Podman
currently wipes the old network info and thinks the state has
been entirely cleared. Unfortunately, that may not be the case on
the CNI side. Some things - like IP address reservations - may
not have been cleared.
To solve this, manually re-run CNI Delete on refresh. If the
container has already been deleted this seems harmless. If not,
it should clear lingering state.
Fixes: #3759
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We've been seeing a lot of issues (ref: #4061, but there are
others) where Podman hiccups on trying to start a container,
because some temporary files have been retained and Conmon will
not overwrite them.
If we're calling start() we can safely assume that we really want
those files gone so the container starts without error, so invoke
the cleanup routine. It's relatively cheap (four file removes) so
it shouldn't hurt us that much.
Also contains a small simplification to the removeConmonFiles
logic - we don't need to stat-then-remove when ignoring ENOENT is
fine.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We should not be throwing errors because the operation we wanted
to perform is already done. Now, it is definitely strange that a
container is actually unmounted, but shows as mounted in the DB -
if this reoccurs in a way where we can investigate, it's worth
tearing into.
Fixes #4033
Signed-off-by: Matthew Heon <mheon@redhat.com>
If you are running a rootless container on cgroup V1,
you cannot pause the container. We need to report the proper error
if this happens.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When executing a healthcheck, we were not cleaning up after exec's use
of a socket. We now remove the socket file and ignore the error if, for some
reason, it does not exist.
Fixes: #3962
Signed-off-by: baude <bbaude@redhat.com>
This is mostly used with Systemd, which really wants to manage
CGroups itself when managing containers via unit file.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Previously, we only did this for volumes created at the same time
as the container. However, this is not correct behavior - Docker
does so for all named volumes, even those made with
'podman volume create' and mounted into a container later.
Fixes #3945
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When we fail to remove a container's SHM, that's an error, and we
need to report it as such. This may be part of our lingering
storage woes.
Also, remove MNT_DETACH. It may be another cause of the storage
removal failures.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When volume options and the local volume driver are specified,
the volume is intended to be mounted using the 'mount' command.
Supported options will be used to mount the volume before the
first container using it starts, and unmount the volume after the
last container using it dies.
This should work for any local filesystem, though at present I've
only tested with tmpfs and btrfs.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Support generating systemd unit files for a pod. Podman generates one
unit file for the pod including the PID file for the infra container's
conmon process and one unit file for each container (excluding the infra
container).
Note that this change implies refactorings in the `pkg/systemdgen` API.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Old versions of conmon have a bug where they create the exit file before
closing open file descriptors causing a race condition when restarting
containers with open ports since we cannot bind the ports as they're not
yet closed by conmon.
Killing the old conmon PID is ~okay since it forces the FDs of old
conmons to be closed, while it's a NOP for newer versions which should
have exited already.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When forcibly removing a container, we are initiating an explicit
stop of the container, which is not reflected in 'podman events'.
Swap to using our standard 'stop()' function instead of a custom
one for force-remove, and move the event into the internal stop
function (so internal calls also register it).
This does add one more database save() to `podman remove`. This
should not be a terribly serious performance hit, and does have
the desirable side effect of making things generally safer.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
the exit file
If the container exit code needs to be retained, it cannot be retained
in tmpfs, because libpod runs in a memcg itself so it can't leave
traces with a daemon-less design.
This wasn't a memleak detectable by kmemleak for example. The kernel
never lost track of the memory and there was no erroneous refcounting
either. The reference count dependencies however are not easy to track
because when a refcount is increased, there's no way to tell who's
still holding the reference. In this case it was a single page of
tmpfs pagecache holding a refcount that kept pinned a whole hierarchy
of dying memcg, slab kmem, cgroups, unreachable kernfs nodes and the
respective dentries and inodes. Such a problem wouldn't happen if the
exit file was stored in a regular filesystem because the pagecache
could be reclaimed in such case under memory pressure. The tmpfs page
can be swapped out, but that's not enough to release the memcg with
CONFIG_MEMCG_SWAP_ENABLED=y.
No amount of more aggressive kernel slab shrinking could have solved
this. Not even assigning slab kmem of dying cgroups to alive cgroup
would fully solve this. The only way to free the memory of a dying
cgroup when a struct page still references it, would be to loop over
all "struct page" in the kernel to find which one is associated with
the dying cgroup which is a O(N) operation (where N is the number of
pages and can reach billions). Linking all the tmpfs pages to the
memcg would cost less during memcg offlining, but it would waste lots
of memory and CPU globally. So this can't be optimized in the kernel.
A cronjob running this command can act as a workaround and will allow
all slab cache to be released, not just the single tmpfs pages.
rm -f /run/libpod/exits/*
This patch solved the memleak with a reproducer, booting with
cgroup.memory=nokmem and with selinux disabled. The reason memcg kmem
and selinux were disabled for testing of this fix is that kmem
greatly decreases the kernel effectiveness in reusing partial slab
objects. cgroup.memory=nokmem is strongly recommended at least for
workstation usage. selinux needs to be further analyzed because it
causes further slab allocations.
The upstream podman commit used for testing is
1fe2965e4f (v1.4.4).
The upstream kernel commit used for testing is
f16fea666898dbdd7812ce94068c76da3e3fcf1e (v5.2-rc6).
Reported-by: Michele Baldessari <michele@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
<Applied with small tweaks to comments>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>