Previously, the following network block did not update using
docker-compose:
```
networks:
default:
driver: bridge
driver_opts:
mtu: 9000
```
In the API, the network options were previously not being handled when the
network was being created. I translated the docker options into podman
options, and added the options to the network.
When doing `podman network inspect <network>`, the results now contain
`"mtu": "9000"`
Fixes: #14482
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
Previously, `podman system df --format "{{json .}}"` would not output
`Size` and `Reclaimable` like `podman system df` would.
```
{"Type":"Images","Total":5,"Active":0,"Size":39972240,"Reclaimable":39972240}
{"Type":"Containers","Total":0,"Active":0,"Size":0,"Reclaimable":0}
{"Type":"Local Volumes","Total":0,"Active":0,"Size":0,"Reclaimable":0}
```
Closes: #14769
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
Make sure `Sync()` handles state transitions and exit codes correctly.
The function was only being called when batching, which could render
containers in an unusable state when running concurrently with other
state-altering functions/commands, since the state must be re-read from
the database before acting upon it.
Fixes: #14761
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
We now use the golang error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
There is no need to use such long sleep intervals for such cheap
operations as opening a connection or stat'ing a file.
Also make WaitForService() honor defaultWaitTimeout.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If the connection is successful, return immediately instead of doing
all the iterations. This also solves a problem where connections were
leaked, since there were multiple Dial calls but only one Close.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This PR adds support for running containers from a manifest list
present in local storage. Before this PR, podman only supported running
containers from valid images, not from a manifest list.
So `podman run -it --platform <some> <manifest-list> command` now
becomes functional, and users are able to resolve images on the basis
of the provided `--platform` string.
Example
```
podman manifest create test
podman build --platform linux/amd64,linux/arm64 --manifest test .
podman run --rm --platform linux/arm64/v8 test uname -a
```
Closes: https://github.com/containers/podman/issues/14773
Signed-off-by: Aditya R <arajan@redhat.com>
The retry logic in digshort() did not work because dig always exits with
0 even when the domain name is not found. To make it work we have to
check the standard output.
We are working on fixing the underlying issue in aardvark/netavark, but
this will take more time.
Fixes #14173
Fixes #14171
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The test is not doing what it claims to do. The containers are not
supposed to join the infra container cgroup.
In addition, the result is validated only on cgroup v1 systems (which
are not used in the CI).
We may want to add it back, or a variant of it, once the
--device-read-bps option applies to the pod parent cgroup.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Using the new resource backend, implement podman pod create --memory, which enables
users to modify memory.max inside of the parent cgroup (the pod), implicitly impacting all
children unless overridden.
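A minimal usage sketch (the pod name and limit value here are illustrative):
```
# set memory.max to 512m on the pod's parent cgroup
podman pod create --name limited-pod --memory 512m
# containers joining the pod are constrained by the pod limit
# unless they specify their own --memory
podman run --rm --pod limited-pod alpine true
```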
Signed-off-by: Charlie Doern <cdoern@redhat.com>
PR containers/podman/pull/14449 had an outdated base. Merging it broke
builds.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When mounting paths as overlay mounts, we end up passing the source as-is
to the lowerdir option; resolve all relative paths in such cases for
overlay mounts.
Closes: https://github.com/containers/podman/issues/14797
Signed-off-by: Aditya R <arajan@redhat.com>
Fix the parsing of the cgroup device rules to correctly handle the
wildcard syntax for the device major.
Also make sure the device major and minor are not negative numbers.
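For illustration, a rule with a wildcard major that the parser now has to accept, assuming it is supplied through podman run's --device-cgroup-rule flag:
```
# '*' as the device major is valid; negative major/minor values are rejected
podman run --rm --device-cgroup-rule='b *:1 rwm' alpine true
```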
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
With the upcoming plans of introducing a podman-kube command with
various subcommands, rename the podman-play-kube systemd template
to podman-kube before releasing it.
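Under the new name, the template would be used like this (assuming the renamed unit is podman-kube@.service; the YAML path is illustrative):
```
# run a Kubernetes YAML as a systemd service via the renamed template
systemctl --user start podman-kube@$(systemd-escape /home/user/kube.yaml).service
```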
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
ImagesBatchRemoval and ImageRemoval now honor and accept a
`LookupManifest` parameter, which tells libimage to resolve to the
manifest list, if it exists, instead of the actual image.
This PR also makes `podman-remote manifest rm` functional, which was
broken until now.
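For example (the list name is illustrative), this now removes the manifest list itself instead of erroring out:
```
podman-remote manifest create mylist
podman-remote manifest rm mylist
```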
Closes: https://github.com/containers/podman/issues/14763
Signed-off-by: Aditya R <arajan@redhat.com>
Podman Machine crashes if run as root. When creating the machine, we write the ignition so that the UID of the core user matches the UID of the user on the host. By default, we create the root user on the machine with UID 0. If the user on the host is root, the core UID and the root UID collide, causing the VM not to boot.
[NO NEW TESTS NEEDED]
Signed-off-by: Ashley Cui <acui@redhat.com>
The runtime verification test for container checkpoint with export
used the default runtime for the test, which caused the test to always
pass. The problem arises when using a non-default runtime and then
doing a restore. This test forces using a non-default runtime during
container creation.
Edge case:
1. Default runtime is crun
2. Container is created with runc
3. Checkpoint without setting --runtime into archive
4. Restore without setting --runtime from archive
It should be expected that podman identifies runtime from the
checkpoint archive.
Signed-off-by: Zeyad Yasser <zeyady98@gmail.com>
Currently, podman system df incorrectly calculates the reclaimable storage for
volumes, using a cumulative reclaimable variable that is incremented and placed into each
report entry, causing values to rise above 100%.
Switch this variable to be in the context of the loop, so it resets per volume just like the size variable does.
resolves #13516
Signed-off-by: Charlie Doern <cdoern@redhat.com>
Some background for this PR is in discussion #14641. In short, every so often a container inspect will return a `status.status` of `initialized` from the Docker compat socket.
From the discussion I found these lines, which try to rewrite a "configured" status to "created".
c936d1e611/pkg/api/handlers/compat/containers.go (L291-L294)
However, commit 141de86862 (Revamp Libpod state strings for Docker compat) removed the "configured" return value from the `String()` method called on line 291 above, making the `if` check redundant as it will never be hit. But the same commit also introduced a return value of "initialized", which this `if` should probably have been adapted for.
Signed-off-by: Pieter Engelbrecht <pieter@shuttle.rs>
Add support for podman-remote image scp as well as direct access via the API. This entailed
a full rework of the layering of the image scp functions as well as the usual API plugging and type creation.
Also implement podman image scp tagging, which makes the syntax much more readable and allows users to tag the new image
they are loading to the local/remote machine:
allow users to pass a "new name" for the image they are transferring.
`podman tag` as implemented creates a new image in `image list` when tagging, so this does the same,
meaning that when transferring images with tags, podman on the remote machine/user will load two images.
ex: `podman image scp computer1::alpine computer2::foobar` creates alpine:latest and localhost/foobar on the remote host.
Implementing tags means removal of the flexible syntax. In the currently released podman image scp, the user can either specify
`podman image scp source::img dest::` or `podman image scp dest:: source::img`. However, with tags it becomes really hard to check
which is the image (src) and which is the new tag (dst). Removing that flexibility streamlines the arg parsing process.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
the "pod ps" command first retrieves the list of all pods, then
iterates over the list to inspect each pod. This introduce a race
since a pod could be deleted in the meanwhile by another process.
Solve it by ignoring the define.ErrNoSuchPod error.
Closes: https://github.com/containers/podman/issues/14736
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
add two new options to the volume create command: copy and nocopy.
When nocopy is specified, the files from the container image are not
copied up to the volume.
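A sketch of the intended usage, assuming the new options are passed through the generic --opt mount-option field (volume names are illustrative):
```
# do not copy files from the container image up into the volume
podman volume create --opt o=nocopy novol
# explicitly request the default copy-up behavior
podman volume create --opt o=copy copyvol
```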
Closes: https://github.com/containers/podman/issues/14722
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Previously, health status events were not being generated at all. Now, both
the API and `podman events` generate health_status events.
```
{"status":"health_status","id":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","from":"localhost/healthcheck-demo:latest","Type":"container","Action":"health_status","Actor":{"ID":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","Attributes":{"containerExitCode":"0","image":"localhost/healthcheck-demo:latest","io.buildah.version":"1.26.1","maintainer":"NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e","name":"healthcheck-demo"}},"scope":"local","time":1656082205,"timeNano":1656082205882271276,"HealthStatus":"healthy"}
```
```
2022-06-24 11:06:04.886238493 -0400 EDT container health_status ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63 (image=localhost/healthcheck-demo:latest, name=healthcheck-demo, health_status=healthy, io.buildah.version=1.26.1, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>)
```
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
The test must ensure that all ports in the range are free not just
the first. This flakes often because port 5355 is always in use by
systemd-resolved on fedora.
Fixes #14716
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Currently, setting any sort of resource limit in a pod does nothing. With the newly refactored creation process in c/common, podman can now set resources at a pod level,
meaning that resource-related flags can now be exposed to podman pod create.
cgroupfs and systemd are both supported with varying completeness. cgroupfs is a much simpler process and one that is virtually complete for all resource types; the flags now just need to be added. systemd, on the other hand,
has to be handled via the dbus API, meaning that the limits need to be passed as recognized properties to systemd. The properties added so far are the ones that podman pod create supports, as well as `cpuset-mems`, as this will
be the next flag I work on.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
This bug was introduced in https://github.com/containers/podman/pull/8906.
When we use a 'podman rm/restart/stop/kill etc...' command on a
container running with --rm, the OCI runtime directory
remains at /run/<runtime name> (root user) or
/run/user/<user id>/<runtime name> (rootless user).
This bug can cause other bugs.
For example, when we checkpoint a container running with
--rm (podman checkpoint --export) and restore it
(podman restore --import) with crun, the error message
"Error: OCI runtime error: crun: container `<container id>`
already exists" is output.
This error is caused by an attempt to restore the container with
the same container ID as the leftover OCI runtime container ID.
Therefore, fix this so that the cleanupRuntime() function runs to
remove the OCI runtime directory,
even if the container has already been removed by the --rm option.
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
Since it may be a while before we get a true fix: add a
workaround for podman-remote checkpoint tests, in which
we pause until the 'run --rm' container is truly truly gone.
I've tried to make it as easy as possible to clean up
the workaround code once the bug is fixed.
Oh, also, remove "-it" from a podman-run. It makes no sense
and only results in nasty orange warning messages.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Libpod requires that all volumes are stored in the libpod db. Because
volumes can be created outside of podman via volume plugins, podman will
not show all available volumes. The new podman volume reload command
allows users to sync the libpod db with their external volume plugins.
All new volumes from the plugin are also created in the libpod db, and
when a volume from the db no longer exists it will be removed if possible.
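Typical invocation:
```
# sync the libpod db with all configured volume plugins,
# then list the volumes podman now knows about
podman volume reload
podman volume ls
```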
There are some problems:
- naming conflicts, in this case we only use the first volume we found.
This is not deterministic.
- race conditions, we have no control over the volume plugins. It is
possible that the volumes changed while we run this command.
Fixes #14207
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add 4 new subcommands to the testvol binary; instead of just serving the
volume API, it can now also create/list/remove volumes. This is required
to test new functionality where volumes are created outside of podman in
the plugin. Podman should then be able to pick up the new volumes.
The new testvol commands are (see the usage sketch after the list):
- serve: serve the volume API like the testvol command before
- create: create a volume with the given name
- list: list all volume names
- remove: remove the volume with the given name
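Hypothetical invocations based on the descriptions above (the exact arguments of the test binary may differ):
```
testvol create myvol   # create a volume named myvol in the plugin
testvol list           # print all volume names known to the plugin
testvol remove myvol   # remove the volume again
```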
Also make a small update to the testvol Containerfile so that it can
build correctly.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Update the golang version for the testvol image to the latest version
1.18. This requires us to build with GO111MODULE=off.
Use the FQDN to prevent the shortnames prompt.
Also add --network none to the podman build command to make sure we are
only using the copied deps and nothing else.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
I think it is confusing to have this Containerfile in the repo root. It
is used for the tests only, so we should move it into the same dir.
Also adapt the Makefile target to use the new path and add the current
date as tag instead of using latest which can break CI easily when we
have to update the image.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The podman image scp and podman system connection tests were querying an existing website during testing.
Change to a URL that will never exist, given an improper domain extension.
Also just generally clean up a few things in both scp and connection testing.
resolves #14699
Signed-off-by: Charlie Doern <cdoern@redhat.com>
We should just silently fall through. The log was flooding the
system-service logs when running Gitlab runner.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
podman currently does not support relative volume paths. Add parsing for relative paths in specgen, converting
whatever volume was given to an absolute path.
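For example (paths are illustrative), a relative bind-mount source now resolves against the current directory:
```
cd /home/user/project
# ./data is converted to /home/user/project/data in the generated spec
podman run --rm -v ./data:/data alpine ls /data
```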
Signed-off-by: Charlie Doern <cdoern@redhat.com>
* Replace "setup", "lookup", "cleanup", "backup" with
"set up", "look up", "clean up", "back up"
when used as verbs. Replace also variations of those.
* Improve language in a few places.
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
add the ability to filter networks by their dangling status via:
`network ls --filter dangling=true/false`
Fixes: #14595
Signed-off-by: Carlo Lobrano <c.lobrano@gmail.com>
use the memory limit specified for the container instead of reading it
from the cgroup. It is not reliable to read it from the cgroup since
the container could have been moved to a different cgroup and in
general the OCI runtime might create a sub-cgroup (like crun does).
Closes: https://github.com/containers/podman/issues/14676
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
expose the --shm-size flag to podman pod create and add proper handling and inheritance
for the option.
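Usage sketch (pod name and size are illustrative):
```
# size /dev/shm for the pod's infra container (inherited by containers
# sharing the pod's IPC) to 128m
podman pod create --name shmpod --shm-size 128m
```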
resolves #14609
Signed-off-by: Charlie Doern <cdoern@redhat.com>
When we return no containers we just return `[]` but we still have to keep
the content type header `application/json` so external tools can correctly
parse the output.
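A quick way to check, assuming the Docker-compat endpoint and the default rootless socket path:
```
curl -si --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
     http://d/containers/json
# expect: Content-Type: application/json and a body of []
```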
Fixes #14647
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
`podman -h` currently returns an error:
`Error: pflag: help requested`
This bug was introduced in 44d037898e; the problem is that we wrap the
error, and the cobra lib checks this particular error with `==` and not errors.Is().
I have a PR upstream to fix this, but for now this also works.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
With runc 1.1, we have the following failure:
# #| FAIL: podman emits useful diagnostic on failure
# #| expected: 'Error.*: OCI runtime error: .*: failed to set /proc/self/attr/keycreate on procfs' (using expr)
# #| actual: 'Error: OCI runtime error: runc: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument'
which is caused by the fact that runc 1.1 uses newer opencontainers/selinux
package, which changes custom errors to standard os.PathError instances (so
that they can be unwrapped if needed).
Fix the test case accordingly.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Changes:
- use --timestamp option to produce 'created' stamps
that can be reliably tested in the image-history test
- podman now supports manifest & multiarch run, so we
no longer need buildah
- bump up base alpine & busybox images
This turned out to be WAY more complicated than it should've been,
because:
- alpine 3.14 fixed 'date -Iseconds' to include a colon in
the TZ offset ("-07:00", was "-0700"). This is now consistent
with GNU date's --iso-8601 format, yay, so we can eliminate
a minor workaround.
- with --timestamp, all ADDed files are set to that timestamp,
including the custom-reference-timestamp file that many tests
rely on. So we need to split the build into two steps. But:
- ...with a two-step build I need to use --squash-all, not --squash, but:
- ... (deep sigh) --squash-all doesn't work with --timestamp (#14536)
so we need to alter existing tests to deal with new image layers.
- And, long and sordid story relating to --rootfs. TL;DR that option
only worked by a miracle relating to something special in one
specific test image; it doesn't work with any other images. Fix
seems to be complicated, so we're bypassing with a FIXME (#14505).
And, unrelated:
- remove obsolete skip and workaround in run-basic test (dating
back to varlink days)
- add a pause-image cleanup to avoid icky red warnings in logs
Fixes: #14456
Signed-off-by: Ed Santiago <santiago@redhat.com>
Update to the latest golangci-lint version. v1.46 added new linters.
I disabled nonamedreturns and exhaustruct since they enforce a certain
code style and using them would require big changes to the code base.
The nosprintfhostport linter is new, and I fixed one problem in the tests. While
the test itself is fine because it uses IPv4 only, the fix still looks
good because the sprintf use would fail for IPv6 addresses.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
I noticed 'rmi -a' in a test. I tried to fix it. Hilarity ensued.
'rmi -a' is evil: it forces a fresh pull of our test image,
which in turn almost guarantees a flake some day. We avoid
it, but once in a while it slips in.
While fixing it, I noticed a bevy of other problems that
needed cleanup.
Signed-off-by: Ed Santiago <santiago@redhat.com>
commit 1951ff168a introduced a check so
that conmon is not moved to a new cgroup when podman is running inside
of a systemd service. This is helpful to integrate podman in systemd
so that the spawned conmon lives in the same cgroup as the service
that created it.
Unfortunately this breaks when the podman daemon is running in a systemd
service, since the same check is in place; thus all the conmon processes
end up in the same cgroup as the podman daemon. When the podman
daemon systemd service stops, the conmon processes are also terminated,
as well as the containers they monitor.
Improve the check to exclude podman running as a daemon.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2052697
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Commit 5fa6f686db added a regression which was fixed in eb71712626.
Apply the same fix again to prevent a panic and return a proper error
instead.
To not regress again, I added an e2e test which makes sure we do not panic.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Sigh. Buildah PR https://github.com/containers/buildah/pull/3368
changed 'bud' to 'build' in tests. Podman #11585 well-intentionedly
did the same for run-buildah-bud tests ... but did so by *replacing*
'bud' with 'build', not by *adding* 'build' to the list of commands
handled by podman-build. Hence, all tests invoking 'run_buildah bud'
have been completely untested since then.
This remedies that, and deals with all the fallout. Principal among
which is the discovery that our exit-code changes are no longer
necessary: that thing we did where buildah exit status 1 or 2 became
podman exit status 125? That no longer applies. podman now exits
with the same status as buildah. This simplifies our diffs, and
lets us enable a bunch more tests.
Also:
- in run-buildah-bud-tests script, run 'sudo --validate' early on.
Reason: otherwise, the sudo step happens a few minutes after
the script starts (after the git-pull), by which time the user
may have stepped away to get coffee, then comes back ten or twenty
minutes later to find a stupid sudo prompt and no tests run.
Signed-off-by: Ed Santiago <santiago@redhat.com>
This would've caught a regression that #14549 had to fix.
Let's try to prevent the next regression.
This requires some hackery to get namespaces initialized
before the service is started; otherwise the service itself
initializes namespaces, which basically ends up with a
server process that runs forever.
Also: in stop_service(), reset service_pid, because that's
the correct thing to do.
Also: add some debug statements to try to figure out a
CI failure. (And leave them in place, because they might
be useful for future problems).
Signed-off-by: Ed Santiago <santiago@redhat.com>
Fix bad design decision (mine) by adding a simple usage check to 'skip'
and 'skip_if_remote' functions: if invoked without test-name args,
fail loudly and immediately.
Background: yeah, their usage is not intuitive. Making the first arg
be a comment helps with _reading_ the code, but not _writing_ new
additions. A developer in a hurry could write "skip this-test" and,
until now, that would be a silent NOP.
Tested by adding broken skip/skip_if_remote calls inline; I confirm
that the line number and funcname usage is correct.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The nolintlint linter does not forbid the use of `//nolint`.
Instead, it allows us to enforce a common nolint style:
- force that a linter name must be specified
- do not add a space between `//` and `nolint`
- make sure nolint is only used when there is actually a problem
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
If /run is on a volume, do not create the file /run/.containerenv, as it
would leak outside of the container.
Closes: https://github.com/containers/podman/issues/14577
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This is an enhancement for the podman system prune feature.
In this issue, it is mentioned that 'network prune' should be
wired into 'podman system prune'
https://github.com/containers/podman/issues/8673
Therefore, I add the function to remove unused networks.
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
Changes since 2022-05-31:
- add --omit-history option (buildah PR 4028)
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Previously, if a container was not running, and the user ran the `podman
stats` command, an error would be reported: `Error: container state
improper`.
Podman now reports stats as the fields' default values for their
respective type if the container is not running:
```
$ podman stats --no-stream demo
ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS CPU TIME AVG CPU %
4b4bf8ce84ed demo 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0 0s 0.00%
```
Closes: #14498
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
Implement podman pod clone, a command to create an exact copy of a pod while changing
certain config elements.
Currently supported flags are:
- --name: change the pod name
- --destroy: remove the original pod
- --start: run the new pod on creation
and all infra-container related flags from podman pod create (namespaces etc.)
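Example usage with the flags listed above (pod names are illustrative; the exact positional syntax may differ):
```
# clone mypod into mypod-copy, start the clone, and remove the original
podman pod clone --name mypod-copy --start --destroy mypod
```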
resolves #12843
Signed-off-by: cdoern <cdoern@redhat.com>
I don't see a reason why we don't support --remove-signatures
from remote push, so adding support.
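So something like the following now works against a remote podman as well (image name and destination are illustrative):
```
podman-remote push --remove-signatures alpine quay.io/user/alpine:latest
```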
Fixes: https://github.com/containers/podman/issues/14558
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a new `--overwrite` flag to `podman cp` to allow for overwriting; in
case existing users depend on the old behavior, they will have a workaround.
By default, the flag is turned off to be compatible with Docker and to
have a more sane behavior.
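Usage sketch (container name and paths are illustrative):
```
# --overwrite is off by default (Docker-compatible); when set, podman cp is
# allowed to overwrite existing content at the destination
podman cp --overwrite ./config.json web:/etc/app/config.json
```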
Fixes: #14420
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>