SpecGen is our primary container creation abstraction, and is
used to connect our CLI to the Libpod container creation backend.
Because container creation has a million options (I exaggerate
only slightly), the struct is composed of several other structs,
many of which are quite large.
The core problem is that SpecGen is also an API type - it's used
in remote Podman. There, we have a client and a server, and we
want to respect the server's containers.conf. But how do we tell
what parts of SpecGen were set by the client explicitly, and what
parts were not? If we're not using nullable values, an explicit
empty string and a value never being set are identical - and we
can't tell if it's safe to grab a default from the server's
containers.conf.
Fortunately, we only really need to do this for booleans. An
empty string is sufficient to tell us that a string was unset
(even if the user explicitly gave us an empty string for an
option, filling in a default from the config file is acceptable).
This makes things a lot simpler. My initial attempt at this
changed everything, including strings, and it was far larger and
more painful.
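A minimal sketch of the idea (the field and helper names here are made up for illustration, not the actual SpecGen fields): with a nullable boolean, the server can tell "never set" apart from "explicitly false" and only then fill in its containers.conf default.

    // Illustration only, not actual SpecGen code.
    type ContainerSpec struct {
        // nil = the client never set the option; false = explicitly disabled.
        Privileged *bool `json:"privileged,omitempty"`
    }

    func applyServerDefault(spec *ContainerSpec, confDefault bool) {
        if spec.Privileged == nil {
            spec.Privileged = &confDefault
        }
    }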
Also, begin the first steps of removing all uses of
containers.conf defaults from client-side. Two are gone entirely,
the rest are marked as remove-when-possible.
[NO NEW TESTS NEEDED] This is just a refactor.
Signed-off-by: Matt Heon <mheon@redhat.com>
Given that we can have multiple image digests,
fix the inspect test to check whether the digest
given matches one of the digests of the image.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Fix the image filter parsing in the common libraries
to follow an AND logic for all filters passed in ensuring
compatibility with Docker behavior.
Also fix the filter parsing on the tunnel side so that we grab
all the filters given by the user and not only the last filter
in the list.
Add tests for the fixes.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
This is one of the breaking changes in Podman 5.0: removing the
ability to create new instances of the old Bolt database. This
does not remove support for the database entirely, as existing
Bolt databases will still be usable, but all new installs will
use SQLite after this point - if Bolt is forced by config, we'll
just error.
We don't have plans to outright remove the Bolt code. If that
were to happen, it'd be Podman 6.0 at least, and a significant
enough change it'd warrant a lot of discussion and planning. We
do intend to start winding down support of BoltDB, though, and
new features may be added only to SQLite from here on.
I have added an escape hatch via an undocumented environment
variable that allows us to continue testing BoltDB in CI (and, if
necessary, locally) but I don't want this to be used for any
purpose except continued testing of the old DB to ensure we don't
break it.
Signed-off-by: Matt Heon <mheon@redhat.com>
Some OCI runtimes (cf. [1]) may tolerate container images that don't
specify an entrypoint even if no entrypoint is given on the command
line. In those cases, it's annoying for the user to have to pass a ""
argument to podman.
If no entrypoint is given, make the behavior the same as if an empty ""
entrypoint was given.
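A rough sketch of that normalization (the spec field name is assumed, not the actual code path):

    // Illustration: treat a missing entrypoint exactly like an explicitly empty one.
    if spec.Entrypoint == nil {
        spec.Entrypoint = []string{}
    }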
[1] https://github.com/containers/crun-vm
Signed-off-by: Alberto Faria <afaria@redhat.com>
Currently, if container creation fails with either run or create
and you've used --pod with new:, the pod is created nonetheless.
This change ensures the pod just created is also cleaned up in
case of container creation failure.
Fixes #21228
Signed-off-by: danishprakash <danish.prakash@suse.com>
- #15074 ("subtree_control" flake). The flake is NOT FIXED, I
saw it six months ago on my (non-aarch64) laptop. However,
it looks like the frequent-flake-on-aarch64 bug is resolved.
I've been testing in #17831 and have not seen it. So,
tentatively remove the skip and see what happens.
- Closes: #19407 (broken tar, "duplicates of file paths")
All Fedoras now have a fixed tar. Debian DOES NOT, but
we're handling that in our build-ci-vm code. I.e., the
Debian VM we're using has a working tar even though there's
currently a broken tar out in the wild.
Added distro-integration tag so we can catch future problems
like this in OpenQA.
- Closes: #19471 (brq / blkio / loopbackfs in rawhide)
Bug appears to be fixed in rawhide, at least in the VMs we're
using now.
Added distro-integration tag because this test obviously
relies on other system stuff.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Update farm build to directly push images to a registry
after all the builds are complete on all the nodes.
A manifest list is then created locally and pushed to
the registry as well.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Before this, for some special Podman commands (system reset,
system migrate, system renumber), Podman would create a first
Libpod runtime to do initialization and flag parsing, then stop
that runtime and create an entirely new runtime to perform the
actual task. This is an artifact of the pre-Podman 2.0 days, when
there was almost no indirection between Libpod and the CLI, and
we only used one runtime because we didn't need a second runtime
for flag parsing and basic init.
This system was clunky, and apparently, very buggy. When we
migrated to SQLite, some logic was introduced where we'd select a
different database location based on whether or not Libpod's
StaticDir was manually set - which differed between the first
invocation of Libpod and the second. So we'd get a different
database for some commands (like `system reset`) and they would
not be able to see existing containers, meaning they would not
function properly.
The immediate cause is obviously the SQLite behavior, but I'm
certain there's a lot more baggage hiding behind this multiple
Libpod runtime logic, so let's just refactor it out. It doesn't
make sense, and complicates the code. Instead, make Reset,
Renumber, and Migrate methods of the libpod Runtime. For Reset
and Renumber, we can shut the runtime down afterwards to achieve
the desired effect (no valid runtime after). Then pipe all of
them through the ContainerEngine so cmd/podman can access them.
As part of this, remove the SystemEngine part of pkg/domain. This
was supposed to encompass these "special" commands, but every
command in SystemEngine is actually a ContainerEngine command.
Reset, Renumber, Migrate - they all need a full Libpod and access
to all containers. There's no point to a separate engine if it
just wraps Libpod in the exact same way as ContainerEngine. This
consolidation saves us a bit more code and complexity.
Signed-off-by: Matt Heon <mheon@redhat.com>
Add a wait_for_ready() to one kube-play test, to make sure
container output has made it to the journal.
Probably does not fix #18501, but I think it might fix its
most common presentation.
Signed-off-by: Ed Santiago <santiago@redhat.com>
strings.Cut, added in Go 1.18, is a cleaner and more performant API than SplitN(_, _, 2).
Previously applied this refactoring to buildah:
https://github.com/containers/buildah/pull/5239
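For reference, the idiom being swapped in:

    // Before: index-based SplitN.
    parts := strings.SplitN("FOO=bar", "=", 2) // ["FOO", "bar"]

    // After (Go 1.18+): strings.Cut makes the not-found case explicit.
    key, value, found := strings.Cut("FOO=bar", "=")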
Signed-off-by: Philip Dubé <philip@peerdb.io>
Let's support the --config option by setting the DOCKER_CONFIG
environment variable instead of ignoring it, for Docker compatibility,
so it can be used to locate config.json as the authentication file.
Also add a test case for this change and remove the deprecated one.
Signed-off-by: Ming Liu <liu.ming50@gmail.com>
- tmpfs + noswap test: requires noswap feature in kernel.
Check for it, and skip if unimplemented. (Root only.
Rootless test works regardless of kernel).
- podman generate systemd tests: always use --files option,
because otherwise the "DEPRECATED" warning gets written
to the systemd unit file.
- kube play tests: yikes. Fix longstanding bugs when checking
for containers running. This revealed a longstanding bug
in one test: multi-pod YAML never actually worked. Fixed now.
- run_podman(): that new check-for-warnings code we added
in #19878, duh, I skipped it on Debian but should've skipped
when *runc*. Do so now and update the comment. Requires
minor surgery to podman_runtime() helper to avoid
infinite recursion.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Back when we introduced ExitCleanly(), we couldn't use it
on Debian because of too many runc bugs. Now, early 2024:
- #11784 has been closed-wontfix, so add a runc special-case
in the specific test that triggers it.
- #11785 seems to have gone away? Treat it as fixed.
- #19552 is languishing, so let's just close-wontfix it too and
add another runc special case.
- and, one new rootless-cgroupsV1 exception for a warning msg
that snuck in recently.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Remove all trailing whitespace from all lines before the line-by-line
processing
Add test
Exclude the unit file used for the test from whitespace check
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
Test "podman start container by systemd" is failed on the system in
which rootless users don't have accessibility to journald. Therefore,
skip the part that reads journal with journalctl.
Signed-off-by: Tsubasa Watanabe <w.tsubasa@fujitsu.com>
Add support for using multiple `Ulimit=` options in `.container` files.
Before, only the last `Ulimit=` option was used in the podman command.
Update podman-systemd.unit.5 docs to reflect this change.
Add `test/e2e/quadlet/ulimit.container` to e2e tests.
Signed-off-by: Paul Nettleton <k9@k9withabone.dev>
* Add BaseHostsFile to container configuration
* Do not copy /etc/hosts file from host when creating a container using Docker API
Signed-off-by: Gavin Lam <gavin.oss@tutamail.com>
Create a buildah SystemContext from the existing cli arguments
Pass the SystemContext to the build
Add system test
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
A number of tests start a container then immediately run podman stop.
This frequently flakes with:
StopSignal SIGTERM failed to stop [...] in 10 seconds, resorting to SIGKILL
Likely reason: container is still initializing, and its process
has not yet set up its signal handlers.
Solution: if possible (containers running "top"), wait for "Mem:"
to indicate that top is running. If not possible (pods / catatonit),
sleep half a second.
Intended to fix some of the flakes cataloged in #20196 but I'm
leaving that open in case we see more. These are hard to identify
just by looking in the code.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The push binding endpoint wasn't actually writing the
output data to the stream when quiet=false and there
was no push error.
Do not hard-code quiet=true anymore; take the user input
into account.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Our test registry (used for login & local registry tests)
was being run using the standard podman tmpdir, hence the
standard podman database. This was then getting clobbered
in the 330-corrupt-images test, which runs "system reset".
We just didn't know this was happening. Until we added
a registry test after the system reset. Oops.
Solution: new helper function podman_isolation_opts()
sets --root, --runroot, *and --tmpdir*. Refactor all
existing --root/--runroot usages. Document.
Next problem: the "network reload" test in 500-networking.bats
did not (could not) know about our registry port, so the
"iptables -F" command reverted that to DROP, so the subsequent
podman-auth in 700-play timed out.
Solution: add a podman-isolated "network reload" to start_registry().
Final problem, because, really, those weren't enough: a BATS
bug where running with --filter-tags would set IFS=',' in setup_suite
which in turn has catastrophic consequences:
https://github.com/bats-core/bats-core/issues/812
See #20966 for details of the failure and further conversation.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Also add support for podman pod ps --format '{{ .Label label }}'.
Finally, fix support for --format '{{ .Podname }}'.
When the user specifies .Podname, this implies --pod was passed.
Fixes: https://github.com/containers/podman/issues/20957
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If a user did not set an equal sign in the annotation, the old code
would panic when accessing the second element in the slice.
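One way to guard against that, sketched here (variable names assumed):

    // The old pattern panics on "foo" with no "=": strings.SplitN(annotation, "=", 2)[1]
    key, value, ok := strings.Cut(annotation, "=")
    if !ok {
        return fmt.Errorf("annotation %q must be in key=value form", annotation)
    }
    annotations[key] = value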
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This option accepts a file path so we should allow commas in it.
Also add tests for --decryption-key
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
New VMs have netavark 1.9, which fixes the "cannot talk to syslog"
warning when running containerized, so we can reenable clean-output
checks in containerized e2e tests
pasta: some new VMs have passt >= 2023-11-10, but f38 does not,
and f39 is unclear (my version extractor could not tell). So
I'm leaving the 20170 skip.
Debian runc now supports umask in *run*, but not *exec*. Even
with runc 1.1.10. And we don't even know what the situation is
on RHEL... so, run the podman-run umask tests but not exec.
Fixes: #19809
Signed-off-by: Ed Santiago <santiago@redhat.com>
Very rare flake, probably caused by my nemesis, podman run -d
Solution: keep the sleep-1 (vs using nanosecond resolution),
but make sure we first wait for the output from the container.
Also, bump down the iteration delay in wait_for_output, from 5s to 1.
Thanks to Paul for noticing that.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Some of the tests were doing "podman run -d" without wait_for_ready.
This may be the cause of some of the CI flakes. Maybe even all?
It's not clear why the tests have been working reliably for years
under overlay, and only started failing under vfs, but shrug.
Thanks to Chris for making that astute observation.
Fixes: #20282 (I hope)
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add a new option --preserve-fd that allows specifying a list of FDs to
pass down to the container.
It is similar to --preserve-fds, but it allows specifying a list of FDs
instead of the maximum FD number to preserve.
--preserve-fd and --preserve-fds are mutually exclusive.
It requires crun since runc would complain if any fd below
--preserve-fds is not preserved.
Closes: https://github.com/containers/podman/issues/20844
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This PR closes #20585
Add initial support for Entrypoint in quadlets
Add Bats tests for Entrypoint
Update the documentation with an example using the Entrypoint option
Signed-off-by: Odilon Sousa <osousa@redhat.com>
When a systemd-related system test fails, we usually get:
systemctl start foo
FAILED exit status 1, try 'systemctl --status' or 'journalctl -xe'
That makes it impossible to debug flakes.
Solution: new systemctl_start() [note underscore], to be used
instead of systemctl <SPACE> start. On failure, will run log
commands.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Test is flaking because (I think) one-second resolution isn't
good enough for --since. Use NS resolution.
Also, more test-name cleanup: strip off timestamps in 'since='.
This yields consistent test names in logs, which makes it easier
for me to categorize flakes.
Fixes: #20896
Signed-off-by: Ed Santiago <santiago@redhat.com>
When Podman starts, it checks a number of critical runtime paths
against stored values in the database to make sure that existing
containers are not broken by a configuration change. We recently
made some changes to this logic to make our handling of some
options more sane (StaticDir in particular was set based on other
passed options in a way that was not particularly sane) which has
made the logic more sensitive to paths with symlinks. As a simple
fix, handle symlinks properly in our DB vs runtime comparisons.
The BoltDB bits are uglier because very, very old Podman versions
sometimes did not stuff a proper value in the database and
instead used the empty string. SQLite is new enough that we don't
have to worry about such things.
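A minimal sketch of the comparison (the helper is hypothetical, not the actual libpod code): resolve symlinks on both sides before deciding whether the stored path and the configured path differ.

    // Hypothetical helper, for illustration only.
    func samePath(stored, configured string) (bool, error) {
        a, err := filepath.EvalSymlinks(stored)
        if err != nil {
            return false, err
        }
        b, err := filepath.EvalSymlinks(configured)
        if err != nil {
            return false, err
        }
        return a == b, nil
    }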
Fixes #20872
Signed-off-by: Matt Heon <mheon@redhat.com>
The `farm` option, which is used to specify the farm to be used, is moved from the farm command to the farm build command.
closes #20752
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
When committing containers to create new images, accept a container
config blob being passed in the body of the API request by adding a
Config field to our API structures. Populate it from the body of
requests that we receive, and use its contents as the body of requests
that we make.
Make the libpod commit endpoint split changes values at newlines, just
like the compat endpoint does.
Pass both the config blob and the "changes" slice to buildah's Commit()
API, so that it can handle cases where they overlap or conflict.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Followup to #20797 (defer assertion failures). The bail-now()
helper was being defined only in setup() ... and some tests,
particularly 001-basic.bats, define their own minimalist setup().
Symptom was "bail-now: command not found", which still caused
test to fail (so no failures were hidden) but led to concern
and wasted time when analyzing failures.
Solution: add one more definition of bail-now(), in outer scope.
There is still one pathological case I'm not addressing: a
bats file that defines its own teardown() which does not invoke
basic_teardown(), then has a test that runs defer-assertion-failures
without a followup immediate-assertion-failures. This would lead
to failures that are never seen. Since teardown() without basic_teardown()
is invalid, I choose not to worry about this case.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Two newly-added tests fail in gating:
- system connection: difference in how sockets are set up
between CI and gating
- ulimit: gating seems to run with ulimit -c -H 0. Check, and
skip if ulimit is less than what we need
Signed-off-by: Ed Santiago <santiago@redhat.com>
When InitialDelaySeconds in the kube yaml is set for a healthcheck,
don't update the healthcheck status until those initial delay seconds are over.
We were waiting to update for a failing healthcheck, but when the healthcheck
was successful during the initial delay time, the status was being updated as healthy
immediately.
This is misleading to the users wondering why their healthcheck takes
much longer to fail for a failing case while it is quick to succeed for
a healthy case. It also doesn't match what the k8s InitialDelaySeconds
does. This change is only for kube play, podman healthcheck run is
unaffected.
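Roughly, the intended behavior (the surrounding names are assumed for illustration): no status transition, healthy or unhealthy, is recorded until the initial delay has elapsed.

    // Sketch only, not the actual kube play healthcheck code.
    if time.Since(startedAt) < time.Duration(initialDelaySeconds)*time.Second {
        return nil // skip the status update entirely during the initial delay
    }
    return updateHealthStatus(result) // hypothetical helper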
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
For a source file like `foo.container`, look for drop-ins named
`foo.container.d/*.conf` and merge them into the main file. The
drop-ins are applied in alphabetical order, and files in earlier
directories override later files with the same name.
This is similar to how systemd dropins work, see:
https://www.freedesktop.org/software/systemd/man/latest/systemd.unit.html
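A sketch of the merge order (the directory list, unit handle, and merge helper are assumed names, not the actual quadlet code):

    // Illustration only.
    seen := map[string]string{} // base name -> path; earlier directories win
    for _, dir := range searchDirs {
        matches, _ := filepath.Glob(filepath.Join(dir, "foo.container.d", "*.conf"))
        for _, path := range matches {
            if _, ok := seen[filepath.Base(path)]; !ok {
                seen[filepath.Base(path)] = path
            }
        }
    }
    names := make([]string, 0, len(seen))
    for name := range seen {
        names = append(names, name)
    }
    sort.Strings(names) // apply drop-ins in alphabetical order
    for _, name := range names {
        mergeDropIn(mainUnit, seen[name]) // hypothetical merge helper
    }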
Also adds some tests for these
Signed-off-by: Alexander Larsson <alexl@redhat.com>
Added an additional check that the event type is remove and set the correct exit code.
Since it was getting difficult to maintain the omitempty notation for Event->ContainerExitCode, changing the type from int to an int pointer gives us the ability to check that ContainerExitCode is not nil and continue operations from there.
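Roughly, the check this enables (illustrative):

    // With *int, "no exit code reported" is now distinguishable from an exit code of 0.
    if event.ContainerExitCode != nil {
        exitCode = *event.ContainerExitCode
    }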
closes #19124
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
Add support for .pod unit files with only PodmanArgs, GlobalArgs, ContainersConfModule and PodName
Add support for linking .container units with .pod ones
Add e2e and system tests
Add to man page
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
Existing test was very good, but as a multidimensional table it
was unmaintainable... and actually missed one corner case.
This version isn't much better. It's far longer, codewise. It
is a little harder to understand at first glance. It has three
uncomfortable magic conditionals. But I believe it is more
long-term maintainable: beyond the first glance, it is possible
for a human to check it for correctness. It is also extensible,
as proved by the new test cases I added.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Some system tests run deep loops:
for x in a b c; do
for y in d e f; do
.... check condition $x + $y
Normally, if one of these fails, game over. This can be frustrating
to a developer looking for failure patterns.
Here we introduce a new defer-assertion-failure function, meant
to be called before loops like these. Everything is the same,
except that tests will continue running even after failure.
When test finishes, or if test runs immediate-assertion-failure,
a new message indicates that multiple tests failed:
FAIL: X test assertions failed. Search for 'FAIL': above this line.
Signed-off-by: Ed Santiago <santiago@redhat.com>
I noticed this old debug code while looking at a log. It was
needed to debug a nasty flake[1] in the compose tests. However,
it has been fixed[2] for a while and I am not aware of any flakes
around that logic, so we are good to remove it.
I still leave the server logs in there as they may be useful for all
kinds of issues and are only printed when the test fails so it does not
clutter the logs.
[1] https://github.com/containers/podman/issues/10052
[2] https://github.com/containers/podman/pull/11091
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Currently, if the user specifies podman kube play --replace, the
pod is removed on the client side, not the server side. If
the API is called with replace=true, the pod was not being removed,
and this caused the API call to fail. This PR removes the pod if it
exists and the caller specifies replace=true.
Fixes: https://github.com/containers/podman/discussions/20705
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We were ignoring relabel requests on certain unsupported
file systems but not on others; this changes it to consistently
logrus.Debug on ENOTSUP file systems.
Fixes: https://github.com/containers/podman/discussions/20745
Still needs some work on the Buildah side.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This expands support for the (previously) boolean `Notify` directive, in
support of healthcheck determined SD-NOTIFY event emission, as
supported by Podman with the `--sdnotify=healthy` option.
Closes: #18189
Signed-off-by: Alex Palaistras <alex@deuill.org>
Add a new `no-dereference` mount option supported by crun 1.11+ to
re-create/copy a symlink if it's the source of a mount. By default the
kernel will resolve the symlink on the host and mount the target.
As reported in #20098, there are use cases where the symlink structure
must be preserved by all means.
Fixes: #20098
Fixes: issues.redhat.com/browse/RUN-1935
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
This mostly just inherits the c/common/pkg/auth implementation,
except that AuthFilePath and DockerCompatAuthFilePath can not be set
simultaneously, so don't unnecessarily explicitly set AuthFilePath.
c/common already handles that.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We're only testing vfs in CI. That's bad. #18822 tried to
remedy that but that only worked on system tests, not e2e.
Here we introduce CI_DESIRED_STORAGE, to be set in .cirrus.yml
in the same vein as all the other CI_DESIRED_X. Since it's 2023
we default to overlay, testing vfs only in priorfedora.
Fixes required:
- e2e tests:
- in cleanup, umount ROOT/overlay to avoid leaking mounts
- system tests:
- fix a few badly-written tests that assumed/hardcoded overlay
- buildx test: add weird exception to device-number test
- mount tests: add special case code for vfs
- unprivileged test: disable one section that is N/A on vfs
Signed-off-by: Ed Santiago <santiago@redhat.com>
Support UIDMap, GIDMap, SubUIDMap and SubGIDMap
If any of them are set disregard the deprecated Remap keys
Add tests and man
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
...while Ed was napping:
- create/run based on remote image: was not actually testing anything
- create/run --tls-verify: ditto
- run --decryption-key: sort of testing but not really
- Fail(), not Skip(), if we can't start registry.
- never Skip() halfway through a test: emit a message, and return
The Skip-in-the-middle thing deserves to be shouted from the rooftops.
Let's please never do that again. Skip() says "this entire test was
skipped", which can be misleading to a spelunker trying to track
down a problem related to those tests.
Also, more minor:
- reduce use of port 5000
- rename a confusingly-named test
Ref: #11205, #12009
Signed-off-by: Ed Santiago <santiago@redhat.com>
This is something I've long wanted in logs: an indicator of
which bats file the test lives in. As of v1.7.0 there is
now a way to do that, BATS_TEST_NAME_PREFIX. Use it. Logs
now look like:
ok 14 [001] podman - shutdown engines
ok 15 [005] podman info - basic test
...
not ok 195 [065] podman cp - dot notation ....
(As a bonus, we can remove the super-long "test blah blah pasta"
duplication from 505.bats).
Also, removed no-longer-necessary (fingers crossed) debug code
for the recently fixed containers-storage umount/EINVAL flake.
Signed-off-by: Ed Santiago <santiago@redhat.com>
This will only fail if someone ever adds a system test that
runs podman with "--db-backend boltdb", which nobody should
ever do, but this is a cheap way to make sure it never happens.
See #20563
Signed-off-by: Ed Santiago <santiago@redhat.com>
When using the local client, we should display the compression
algorithm.
If the compression level is set, then show this also.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
It seems certain test infrastructure prevents cloning a repo which
contains a symlink pointing outside of the repo itself. Generate the
symlink for such a test from the test suite itself just before running
the test, and remove it when the test is completed.
Signed-off-by: Aditya R <arajan@redhat.com>
Followup to #20318: now that sqlite is the podman default,
enforce that in CI as well. Test boltdb only in Prior Fedora.
In the process, discovered & cleaned up some duplication
and unused YAML anchors.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Docker allows the passing of -1 to indicate the maximum limit
allowed for the current process.
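A sketch of that translation (illustrative only, not the actual Podman code):

    // Replace -1 with the hard limit of the current process.
    var rlim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rlim); err == nil && requested == -1 {
        requested = int64(rlim.Max) // 'requested' is a hypothetical variable
    }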
Fixes: https://github.com/containers/podman/issues/19319
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
when --uts=host is provided, the expectation is to use the hostname
from the host not the container name.
Closes: https://github.com/containers/podman/issues/20448
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Under some circumstances BATS tests hang, causing a CI timeout.
One prominent reason is pasta test failures: BATS will not
exit until all child processes are finished, and in some
environments the socat client can stay forever.
Workaround: run socat with a timeout, and with limited retries.
Tested on an f38 system with broken IPv6: without this fix,
bats hangs until I ^C. With this fix, bats exits as it should.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Podman build --remote is translating https://path as if it were a file
path. This change will leave it as a URL so it can be parsed on the
server side.
Fixed: https://github.com/containers/podman/issues/20475
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
All `[]string`s in containers.conf have now been migrated to attributed
string slices which require some adjustments in Buildah and Podman.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Followup from #20050. Lots of tiny problems in tests, all of
them adding up to significant maintainability problems.
These tests are currently impossible to run in a dev environment,
and super-painful to set up in 1mt, so I've just done a few hours
of cleanup and am giving up for the week.
This is ready for merge, in the sense that it's much better than
what exists now, but it still needs boatloads more work.
Signed-off-by: Ed Santiago <santiago@redhat.com>
I don't really like this solution because it can't be undone by
`--security-opt unmask=all` but I don't see another way to make
this retroactive. We can potentially change things up to do this
the right way with 5.0 (actually have it in the list of masked
paths, as opposed to adding at spec finalization as now).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Our aarch64 CI system uses 172.31.0.0/20. Because I was (and am)
lazy, my random_rfc1918_subnet() helper was only checking /24.
This causes flakes.
Solution is to actually do it right: binary arithmetic, prefix
matching. This is effectively impossible in bash, so, use a
hairy perl helper and add copious tests.
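The idea, illustrated in Go (the real helper is perl in the test harness): check a candidate subnet for containment within an in-use prefix, not just for an exact /24 match.

    // Illustration only.
    _, inUse, _ := net.ParseCIDR("172.31.0.0/20") // the aarch64 CI range
    candidate := net.ParseIP("172.31.5.1")        // inside a /24 a naive check would accept
    if inUse.Contains(candidate) {
        // overlap: pick a different random subnet
    }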
Fixes: #18693
Signed-off-by: Ed Santiago <santiago@redhat.com>
Problem: frequent CI flakes of the form:
Error: cannot listen on the TCP port: listen tcp4 :5355: bind: address already in use
Always 5355.
Cause: systemd-resolve listens on 5355, but not on 127.0.0.1. So
when GetPort() tries its is-it-in-use check by binding localhost,
it succeeds; but then podman binds * and fails.
Solution: GetPort(): test by binding 0.0.0.0.
Also, improve the failure message.
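A sketch of the probe (illustrative, not the actual GetPort() code): binding on all interfaces catches listeners such as systemd-resolved that skip 127.0.0.1.

    // Illustration only.
    ln, err := net.Listen("tcp4", "0.0.0.0:5355")
    if err != nil {
        // port in use somewhere on the host; try another one
    } else {
        ln.Close() // free to use
    }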
Signed-off-by: Ed Santiago <santiago@redhat.com>
There's a whole slew of networking-related flakes whose common
element seems to be improper use of curl. Fix those by:
* add --retry --retry-connrefused; and/or
* add -S ("show errors". Plain -s silences everything!); and/or
* test exit status from curl; and/or
* add wait_for_port after "podman run -d", to avoid races
* log commands, to make debugging easier
Important note: wait_for_port() was not working with rootless
podman ports. Trivial proof:
$ podman run -d --name foo -p 8192:80 \
quay.io/libpod/testimage:20221018 \
/bin/busybox-extras httpd -f -p 80
$ grep :2000 /proc/net/tcp
[no results]
Solution: use ss tool; it seems to handle this just fine.
There may be a better solution.
Oh, also, add -t1 to a podman restart, to shave 18s from test run.
Fixes: #20335 and, I think, a handful of others
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add Quadlet key and disconnect its relationship with read-only
Update and add tests
Update man with new key
Remove the reference to VolatileTmpfs in the man page to reduce its
usage, since the same functionality can be achieved using the Tmpfs key
while keeping its support to maintain backward compatibility
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
Docker deals with the --all flag on the client side while Podman does it
on the server side. Hence, make sure to not set the dangling filter
with two different values in the backend.
Fixes: #20469
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
This change is the first step of integrating appendable string arrays
into containers.conf and starts with enabling the `Env`, `Mounts`, and
`Volumes` fields in the `[Containers]` table.
Both Buildah and Podman read (and sometimes write) the fields of the
`Config` struct in various places, so I decided to migrate the fields
step-by-step. The ones in this change are the most critical ones for
customers. Once all string slices/arrays are migrated, the docs of
containers.conf will be updated. The current changes are entirely
transparent to users.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
We only care about the version so just import the define package for it,
the main buildah package causes big transitive imports which fail to
build with the remote tag (i.e. libimage)
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Tag now does a prepend internally instead of append with the names. Thus
the order changed which needs some test changes.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
commit 7ade972102 introduced the change
that caused an issue in crun since it forces the root user session
instead of the system one when DBUS_SESSION_BUS_ADDRESS is set.
I am addressing it in crun, but for the time being, let's also not
pass the variable down to conmon since the assumption is that when
running as root the containers must be created on the system bus.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When the hostNetwork option is set to true in the k8s yaml,
set the pod's hostname to the name of the machine/node as is
done in k8s. Also set the utsns to host.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Followup to #20394. For years (since BATS 1.5) we've been
seeing and ignoring nasty red warnings at the end of every
system test run. Thanks for fixing it, @giuseppe! But it
broke down in the '?' case when $expected_rc is empty:
test/system/helpers.bash: line 345: [: -eq: unary operator expected
Simple fix.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Always clean up the exec session when the command specified to
"exec" is not found.
Closes: https://github.com/containers/podman/issues/20392
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
cp tests flake constantly under VFS (discovered in #20161),
and the way these tests were written makes it very, very hard
to understand failures.
This is a (sorry) hard-to-review cleanup:
- use distinctive container names, not just "cpcontainer"
- add distinctive test names (e.g. RUNNING vs CREATED)
- remove unnecessary code
- remove --pause=false (option is deprecated and, IIUC, a NOP)
- clean up some confusing slashes in paths
- "dot notation" tests:
- add a comment linking to issue, because that's a weird one
that makes no sense whatsoever
- fix tests, because they were actually not testing
This cleanup has been tested repeatedly in 20161, I'm just bringing
it into main because 20161's future is uncertain.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Updated the error message to suggest that the user use the --replace option to instruct Podman to replace the existing external container with a newly created one.
closes #16759
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
* rootful: NanoCpus needs to be set to more than 10000000 on cgroups v1.
* rootless: Resource limits that include NanoCPUs are not supported and are ignored.
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
libimage did not walk the layers correctly, which was probably
inherited from old Podman code. Fix that by vendoring in the
corresponding changes in c/common.
Fixes: #20375
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When trying to connect a container to a network and the connection
already exists, an error should only be raised if the container is
already running (or is in the `ContainerStateCreated` transition)
to mimic the behavior of Docker as described here:
https://github.com/containers/podman/pull/15516#issuecomment-1229265942
For running and connected containers, 403 is returned, which fixes #20365.
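Sketched, the state check described above (variable names assumed):

    // Only an already-running (or created) container with an existing connection
    // is an error, mirroring Docker; the error surfaces as HTTP 403.
    if alreadyConnected && (state == define.ContainerStateRunning || state == define.ContainerStateCreated) {
        return fmt.Errorf("container %s is already connected to network %q", ctrID, netName)
    }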
Signed-off-by: Philipp Fruck <dev@p-fruck.de>
When a userns and netns is used we need to let the runtime create the
netns otherwise the netns is not owned by the right userns and thus
the capabilities would not be correct.
The current restart logic tries to reuse the netns which is fine if no
userns is used but when one is used we setup a new netns (which is
correct) but forgot to cleanup the old netns. This resulted in leaked
network namespaces and because no teardown was ever called leaked ipam
assignments, thus a quickly restarting container will run out of ip
space very fast.
Fixes #18615
Signed-off-by: Paul Holzinger <pholzing@redhat.com>