This fixes the problem where even as rootless we checked the netns files
from root. In order to catch any rootless bugs we must check the rootless
files from $XDG_RUNTIME_DIR/netns.
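For illustration, a minimal sketch of the distinction, assuming the test
suite's is_rootless() helper and /run/netns as the root-owned location:

    if is_rootless; then
        netns_dir="$XDG_RUNTIME_DIR/netns"   # rootless files live here
    else
        netns_dir="/run/netns"               # assumed root location
    fi
    ls "$netns_dir"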
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This test is currently disabled due to several issues, only some of which
are described in the existing comments. Add some more details to clarify
the situation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This name for the tests is misleading, since in the default configuration
podman will already configure a forwarding address, which could forward
to either another local forwarder or an external nameserver on the host
side. What this test is really about is explicitly configuring the pasta
DNS forwarding address. Rename accordingly.
The IPv4 version of the test doesn't use the podman --dns option, only
the pasta --dns-forward option. This exercises the podman behaviour that
pasta --dns-forward options are added to /etc/resolv.conf automatically.
However there could also be other things in /etc/resolv.conf, so the
nslookup might not use the custom forwarding address for the lookup.
To fix that, split the test into two parts: one verifying that the custom
address is in /etc/resolv.conf and another performing the nslookup with an
explicit server address to make sure we exercise the pasta side as well.
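A sketch of the two-part check, with a hypothetical documentation address
(198.51.100.1) standing in for the configured forwarder:

    # Part 1: the custom forward address must appear in resolv.conf
    run_podman run --rm --network pasta:--dns-forward,198.51.100.1 $IMAGE \
        grep 198.51.100.1 /etc/resolv.conf
    # Part 2: query that server explicitly so pasta's forwarding is exercised
    run_podman run --rm --network pasta:--dns-forward,198.51.100.1 $IMAGE \
        nslookup example.com 198.51.100.1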
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In both the "Basic nameserver lookup" and "Local forwarder, IPv4" pasta
tests, we check whether DNS resolution is working by running "nslookup
127.0.0.1" in the container and checking if 1.0.0.127.in-addr.arpa is in
the output.
1.0.0.127.in-addr.arpa isn't the expected result of the resolution though,
it's just the DNS name that nslookup translates 127.0.0.1 into. The test
mostly works because nslookup echoes that name on successful lookups.
However, it can also echo it in certain sorts of failures, so it's not a
very reliable test.
Furthermore, resolving 127.0.0.1 from a nameserver is a rather strange
thing to do. It's done that way because RFC1912[0] suggests it should
always resolve, even for nameservers on a disconnected network. But, this
doesn't really appear to be true in practice: a number of resolvers return
NXDOMAIN. The test then passes only by accident, because nslookup echoes
the name above as part of the error message.
Change the tests to instead look up one of the root servers by name. This
does now rely on access to the global DNS during tests, but other podman
tests attempt to resolve google.com, so that should be ok. One of the root
servers is about as close to universal resolvability as it's possible to
get.
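A sketch of the new-style check, using the test suite's run_podman and
assert helpers:

    run_podman run --rm $IMAGE nslookup a.root-servers.net
    assert "$output" =~ "a.root-servers.net" "root server resolved by name"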
[0] https://datatracker.ietf.org/doc/html/rfc1912#section-4.1
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The idea behind the "External resolver" tests is simply to check that we
can contact a nameserver, regardless of its configuration. To this end
the "IPv4" version looks up 127.0.0.1, which RFC1912[0] suggests should
always be resolvable.
The IPv6 version instead looks up [::1]. While it makes sense for
that to be resolvable in a similar way, there appear to be quite a few
nameservers which do not resolve it, making this test flaky.
Furthermore the idea behind resolving [::1] is that it should make
nslookup prefer to resolve over IPv6. That appears to be very
unreliable at best. Since making a different query doesn't actually
exercise anything different in pasta, drop the test.
The remaining IPv4 test isn't really specific to an "external" resolver,
it's simply checking that we can contact some sort of resolver with the
default podman configuration. Rename accordingly, and run it regardless of
IPv4 connectivity on the host: we can still query a nameserver about an
IPv4 address, even if we only have IPv6 connectivity ourselves.
[0] https://datatracker.ietf.org/doc/html/rfc1912#section-4.1
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The "Local forwarder, IPv4" pasta test, amongst other things, checks that
podman's default DNS forwarding address - 169.254.0.1 - appears in the
container's /etc/resolv.conf. That's not really related to anything else
going on in that test (which is about _changing_ that default address).
So, move it into its own test case.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
...or at least as much as possible. Some tests cannot
be run in parallel due to #23750: "--events-backend=file"
does not actually work the way a naïve user would intuit.
Stop/die events are asynchronous, and can be gathered
by *ANY OTHER* podman process running after it, and if
that process has the default events-backend=journal,
that's where the event will be logged. See #23987 for
further discussion.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Need --layers=false in podman build, otherwise a buildah race
can trigger "layer not known" failures:
https://github.com/containers/buildah/issues/5674
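A sketch of the workaround (image name and build context hypothetical):

    run_podman build --layers=false -t "b-$(safename)" $tmpdir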
Signed-off-by: Ed Santiago <santiago@redhat.com>
When running in parallel, multiple tests could be trying to start
the registry at once. Make this parallel-safe.
Also, use a safer port range for the registry: something
outside of /proc/sys/net/ipv4/ip_local_port_range.
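A sketch of the idea, with an assumed range; on most systems the ephemeral
range starts at 32768, so anything well below that cannot collide with
short-lived client sockets:

    read -r lo hi < /proc/sys/net/ipv4/ip_local_port_range
    port=$(( 20000 + RANDOM % 4000 ))   # assumed range, outside $lo-$hi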
Sorry, I'm including a FIXME section that I haven't investigated
deeply enough.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add a few best-practices examples, and add a whole section
describing the dos and don'ts of writing parallel-safe tests.
Signed-off-by: Ed Santiago <santiago@redhat.com>
For tests run in parallel, show file number as |nnn| (vs [nnn])
Teach logformatter to distinguish the two, adding 'p' to anchors
in parallel tests. Necessary because in this scheme we run bats
twice, thus see 'ok 1' twice, and we want to differentiate them.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Workaround for #23292, where simultaneous 'pod create' commands
will all start a podman-build of the pause image, but only
one of them will be tagged, and the others will leak <none>
images.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The issue is closed, and I recently fixed a number of races (bf74797c69)
in the remote attach API that sound exactly like the error mentioned in
issue #9597.
As such I think this works; if it starts flaking again we can revert this
or, better, fix the actual bug.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
For the past two months we've been splitting system tests
into two categories: those that CAN be run in parallel,
and those that CANNOT. Much work has been done to replace
hardcoded names (mycontainer, mypod) with safename().
Hundreds of test runs, in CI and on Ed's laptop, have
proven this approach viable.
make {local,remote}system now runs in two steps: first
the serial ones, then the parallel ones. hack/bats will
now recognize the 'ci:parallel' tag and add --jobs (nprocs).
This requires some tweaking of leak_check, because there
can be umpteen tests running (affecting image/container/pod/etc
state) when any given test completes.
Rules for enabling parallelization in tests:
* use unique container/pod/volume/network names (safename)
* do not run 'podman rm -a' or 'rmi -a'
* never use the -l (--latest) option
* do not run 'podman ps/images' and expect precise output
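A sketch of the first rule in practice:

    cname="c-$(safename)"
    podname="p-$(safename)"
    run_podman pod create --name $podname
    run_podman run -d --pod $podname --name $cname $IMAGE top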
Signed-off-by: Ed Santiago <santiago@redhat.com>
...and remove one old skip() for older debian, but leave
two others in place and mark that they're still a problem.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Convert the owner UID and GID into the user namespace only when an
":idmap" mount is used.
This changes the behaviour of :idmap with an empty volume: the
existing directory ownership is now copied up, as in the other case.
Closes: https://github.com/containers/podman/issues/23347
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Use safename. Add ci:parallel tags. Use a random port, not
hardcoded 9999. Do not remove pause image. And especially
do not "rm -a" anything.
Signed-off-by: Ed Santiago <santiago@redhat.com>
...because it requires 100% control and knowledge of the
state of all images, containers, and volumes.
Use safename anyway, just in case we ever have a leak from here.
I'm finding safename sooooooo helpful when reading journal.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add ci:parallel tags; move one non-parallel-safe test to
another networking-test file; and a few drive-by fixes
Signed-off-by: Ed Santiago <santiago@redhat.com>
Use safename for containers and pods. Add ci:parallel tags.
And reenable distro-integration tests that had been skipped
due to a container-selinux bug that is now fixed.
Signed-off-by: Ed Santiago <santiago@redhat.com>
pasta added a new --map-guest-addr option that maps an address to the
actual host IP. This is exactly what we need for the
host.containers.internal entry. So we now use this option by default, but
we still have to keep the exclude fallback because the option is very new
and some users/distros will not have it yet.
This also fixes an issue where the --dns-forward IPs were not used with
the bridge network mode. This is only relevant when not using aardvark-dns,
as aardvark-dns already used the proper IPs from the rootless netns
resolv.conf file.
Fixes #19213
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When sorting by repository, a user most likely wants the tags sorted as
well, at the very least to get stable output, since the order could
otherwise be changed by podman tag/pull even when the same tag name is
kept.
Fixes #23803
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The kube generate command can now generate a yaml for
the Job kind and the kube play command can create a pod
and containers with podman when passed in a Job yaml.
Add relevant tests and docs for this.
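A sketch of the round trip, assuming the Job kind is selected through the
existing --type flag:

    podman kube generate --type job my-pod > job.yaml
    podman kube play job.yaml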
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Use safename, and add ci:parallel tags to all tests. (One
test was running "podman wait -l", which cannot work in
parallel. I chose to change it to "wait $cname", and
lose the -l testing.)
Signed-off-by: Ed Santiago <santiago@redhat.com>
Where possible, use safename and add ci:parallel tags.
One test runs "podman kill -a", which would be unwise to run
in parallel with other tests.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add 'ci:parallel' tags to a few easy places. And, two
small easily-reviewed safename or random-port additions.
These have been working fine in #23275. I want to stop
carrying them there so I can work on simplifying my PR.
Signed-off-by: Ed Santiago <santiago@redhat.com>
- replace random_string with safename in container/network names
- add ci:parallel tags where possible.
- where not possible, add explanations
- fix a userns leak
Signed-off-by: Ed Santiago <santiago@redhat.com>
Workaround (NOT A FIX) for pasta issue #23482, wherein
podman logs includes a waitpid: ESRCH warning. Consensus
seems to be that this is a bug in socat.
Signed-off-by: Ed Santiago <santiago@redhat.com>
- fix a few missing safenames
- eliminate 'container rm -a'
- when running ps, do substring match, not exact
- where possible, add ci:parallel tags
- when not possible, explain
Also, fix a completely broken inspect test
Signed-off-by: Ed Santiago <santiago@redhat.com>
This started off as an attempt to make `podman stop` on a
container started with `--rm` actually remove the container,
instead of just cleaning it up and waiting for the cleanup
process to finish the removal.
In the process, I realized that `podman run --rmi` was rather
broken. It was only done as part of the Podman CLI, not the
cleanup process (meaning it only worked with attached containers)
and the way it was wired meant that I was fairly confident that
it wouldn't work if I did a `podman stop` on an attached
container run with `--rmi`. I rewired it to use the same
mechanism that `podman run --rm` uses, so it should be a lot more
durable now, and I also wired it into `podman inspect` so you can
tell that a container will remove its image.
Tests have been added for the changes to `podman run --rmi`. No
tests for `stop` on a `run --rm` container as that would be racy.
Fixes #22852
Fixes RHEL-39513
Signed-off-by: Matt Heon <mheon@redhat.com>
By default, wait only waits for the exit of a container; there is really
no way to make it wait for the removal too when the container was
created with --rm. I thought I had found a clever way in 8a943311db, but
it is not race free. While it works most of the time, any other
parallel process might call syncContainer() before the cleanup process
takes the lock it holds until removal. As such the wait hack to only
update the state and not sync the exit file did not work, so we can drop
that.
However, the test wants to wait for the removal done by the cleanup
process, and we can already say --condition=removing to do this. That
would throw an error if the ctr was already removed instead of counting
it as success, so fix that as well.
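A sketch of the intended usage once the fix is in (names hypothetical):

    podman run -d --rm --name "$cname" $IMAGE true
    # succeeds whether the ctr is mid-removal or already gone
    podman wait --condition=removing "$cname"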
Fixes #23640
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The usual, safename instead of hardcoded names or random_string.
And remove some rmi statements: we no longer clean up pause_image.
Been working great in #23275 all week.
Signed-off-by: Ed Santiago <santiago@redhat.com>
...by using a crude port lock-and-reserve mechanism. This is
a small cherrypick from code that has been working in #23275
over dozens of CI runs. Am separating out into a small PR
because it's stable, harmless to serial runs, and will
simplify the eventual review of #23275.
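A sketch of the mechanism (lock path hypothetical); the noclobber
redirect makes claiming a port atomic across parallel jobs:

    while :; do
        port=$(( 20000 + RANDOM % 4000 ))
        if (set -o noclobber; echo $$ > "/tmp/bats-port-$port.lock") 2>/dev/null; then
            break   # port reserved until the lockfile is removed
        fi
    done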
Closes: #23488
Signed-off-by: Ed Santiago <santiago@redhat.com>
Use safename instead of hardcoded object names. Requires moving
a test table down, into the function itself instead of global,
because the table needs to know object names.
Also: sneak in a workaround for dealing with quay flakes (in
image search). The local registry is allowing almost all tests
to pass even when quay is down, but this one test still needs
to hit quay.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Now that on-failure exits right away, the test is racy: RestartCount is
not at the value we expect because the container is still restarting in
the background. As such, add a timer-based approach.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The current code did several complicated state checks that simply do not
work properly on a fast-restarting container. It used a special case for
--restart=always but forgot to take care of --restart=on-failure, which
always hung for 20s until it ran into the timeout.
The old logic also used to call CheckConmonRunning() but synced the
state beforehand, which means it may have checked a new conmon every time
and thus missed exits.
To fix this, the new code is much simpler: check the conmon pid; if it is
no longer running, check the exit file and get the exit code.
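The same logic, sketched in shell terms (variable names hypothetical):

    if ! kill -0 "$conmon_pid" 2>/dev/null; then
        exit_code=$(cat "$exit_file")   # conmon is gone, exit file is final
    fi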
This is related to #23473, but I am not sure if this fixes it because we
cannot reproduce it.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When will I learn not to dismiss something as "easy"?
Anyhow, this doesn't actually change anything parallel-wise
but it does reduce a race condition seen on heavily-loaded
slow systems, wherein a container goes into unhealthy before
we want it to. This version isn't perfect; I don't think
there's an ideal fix for this.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Only one test can be parallelized. Do so, and add a comment
to the other one explaining why it can't be.
Also, add some missing error-message checks.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Very few changes needed, all of them simple.
It is impossible to parallelize this entire file, because "stop -a".
Add tags to tests that can be parallelized, and comments to those
that can't.
Signed-off-by: Ed Santiago <santiago@redhat.com>
When the cidfile does not exist and --ignore is set, the CLI parser skips
the file without error and we call into the backend code without any
names at all. This should logically be a NOP, but on remote it caused all
containers to be returned, which made podman stop stop everything in
this case.
Fixes #23554
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Do not rely on an arbitrary delay in order to ensure the port was bound
in the container. Instead this approach checks if the port is bound in
the netns and only then starts the client. This speeds up the entire
test file by 50% but more importantly in parallel testing it solves
hangs as the timeout there was unreliable.
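One way to probe, sketched with hypothetical names; /proc/net/tcp lists
listening ports in uppercase hex:

    hexport=$(printf '%04X' "$port")
    for i in $(seq 1 50); do
        podman exec "$cname" grep -q ":$hexport " /proc/net/tcp && break
        sleep 0.1
    done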
Fixes #23471
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
- Add support for the ServiceName key for all unit types
- Extend the PodInfo struct into UnitInfo to consolidate all prepopulated data into a single map
- Use the NodesInfo map instead of the resourceName
- Update the UnitInfo in the convert function instead of returning it
- No need to replace the extension anymore, just remove it
- Move all e2e tests with dependencies on other Quadlet files to a separate section
- Add the capability of overriding the service name in the test
- Add e2e tests for the new functionality
- Adjust integration tests
- Update the MAN page
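A sketch of the new key in a .container quadlet (unit contents assumed):

    cat > ~/.config/containers/systemd/demo.container <<'EOF'
    [Container]
    Image=quay.io/libpod/testimage:20240123
    ServiceName=demo-svc
    EOF
    systemctl --user daemon-reload
    systemctl --user start demo-svc.service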
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
Use safename for containers, volumes, images.
Build a temporary scratch image for podman image mount, so
we can safely mount/umount it (instead of $IMAGE) without
risk of other parallel tests umounting it.
Fixed some oopsies ("$vol1" is empty string, so, NOP test)
And... an experiment. I'm leaving in my 'ci:parallel' tags
and notes, so I don't have to carry them in #23275. This
is harmless, basically just noisy comments. The drawback
is, if for some reason #23275 does not pan out, I'll have
to go back and remove those tags. Right now I'm feeling
pretty comfortable about this parallelization approach tho.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Use safename instead of hardcoded "test"
Start registry once, in setup_file(), instead of requiring
individual tests to do so.
Add explicit --authfile arg to a bunch of places that now need it
Minor cleanup and improvements in test descriptions. I may have
gotten a little carried away here, but if this test ever fails
these additions will make someone's life much easier.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Strictly speaking we don't need the path yet, but having it exist
prevents a lot of strangeness in our path-checking logic to
validate the current Podman configuration, as it was the only
path that might not exist this early in init.
Fixes #23515
Signed-off-by: Matt Heon <mheon@redhat.com>
If idmap is specified for a volume, reverse the mappings when copying
up from the container, so that the original permissions are maintained.
Closes: https://github.com/containers/podman/issues/23467
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
BATS teardown logs are unreadable, making it almost impossible
to see tiny "Leaked this-or-that" messages.
Solution: new _run_podman_quiet() helper, replaces run_podman
in a small number of cases within teardown. Clunky, and
duplicative, sorry.
New helper for leak_check, basically spits out warnings (and
bumps error count) if it sees any output whatsoever from
individual "podman XXX ls" commands.
Signed-off-by: Ed Santiago <santiago@redhat.com>
We bind ports to ensure there are no conflicts, and we leak them into
conmon to keep them open. However, we bound the ports after the network
was set up, so it was possible for a second network setup to overwrite
the firewall configs of a previous container, as binding the port only
failed later. As such we must ensure we bind before the network
is set up.
This is not so simple, because we still have to take care of the
PostConfigureNetNS bool, in which case the network setup happens after
we launch conmon. Thus we end up with two different conditions.
It is also possible that we "leak" the ports that are set on the
container until the garbage collector closes them. This is not
perfect, but the alternative is adding special error handling on each
function exit after prepare until we start conmon, which is a lot of work
to do correctly.
Fixes https://issues.redhat.com/browse/RHEL-50746
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
I broke the kube external storage test in the course of my
safename PR: _write_test_yaml() with no command generated
a pod that did not trigger the conditions required for
this test.
Solution: run a container (top). Add new checks to prevent
this gap from happening again.
Signed-off-by: Ed Santiago <santiago@redhat.com>
These test steps check the automount feature with multiple images, for
the following items:
1. Multiple images can be automounted with a YAML file.
2. If the same path exists in several images, the last one wins.
3. The volume is mounted read-only in the container.
4. The volumes are only mounted in the specific container, not in
the whole pod.
Signed-off-by: Yiqiao Pu <ypu@redhat.com>
Validate that a "podman generate" and "podman play" cycle restores the
specified user namespace.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The tests didn't actually check anything, because default_ifname requires
an IP version argument to work; thus pasta_iface was empty. Add new
checks to prevent this kind of error again.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The test assumes that if there is more than one IP on the host, we should
be able to set host.containers.internal. This, however, is not how the
logic works in the code. What it actually does is check all IPs in the
rootless-netns, knowing that it cannot use any of those IPs.
This includes any podman bridge IPs.
You can reproduce the error when you have only one IPv4 address on the
host: run a container as root in the background, then run the test:
hack/bats --rootless 505:host.containers.internal
So the failure here was that a podman container was already
running as root on the default bridge, thus the test saw two IPs; but
the rootless run also uses the same subnet for its bridge, so the code
knew that IP would not work either. I could have added another special
condition to the test, but the better way to work around it is to create
a new network. A new network ensures there are no conflicting
subnets assigned, so the test will pass.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Continuing efforts on making system tests parallel-safe by
using unique names for containers and pods.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Two tests are failing in gating but never in CI; add some debug
instrumentation to make it possible to find out what
is going on.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Using the scanner is just unnecessarily complicated and buggy, as it will
not read the final line with its newline. There is also the problem that
it runs in a separate goroutine, so it could lose output if we read
the array before the scanner was done.
The API accepts a Writer, so we can just directly use a bytes.Buffer,
which captures all output in memory without the need for another
goroutine.
This also means that we now always include the final newline in the
output. I checked with docker and they do the same, so this is good.
Fixes #23332
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When the file is empty it is possible our code panics, as bar.ProxyReader
returns nil when the bar is finished, which is the case for 0 size since
it doesn't have to read anything from there. However, as this happens on
different goroutines, it is a race and most of the time it still works.
To fix this, simply skip the progress bar setup for empty files.
While at it, fix the deprecated argument in the tests.
Fixes #23281
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
nmap-ncat has been downgraded on Fedora, to 7.92.
nc -l -p PORT requires 7.95. Switch to nc -l ADDR PORT.
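For reference:

    # old form, requires nmap-ncat >= 7.95:
    #   nc -l -p $port
    # portable form, works on 7.92 as well:
    nc -l 127.0.0.1 $port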
Signed-off-by: Ed Santiago <santiago@redhat.com>
The end goal is making this test file parallel-safe, by:
1) Having all tests use unique names for all objects; and
2) Not doing "rm -a" or "expect ps to be empty".
This commit is not enough to make tests parallel-safe. The
rest of the changes are not relevant for now. This set of
changes is _necessary_ for parallelizing, and is _meaningful_
(good practice) for current linear-testing podman without
introducing any unnecessary cruft.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Make safename() invocations consistent within the same
test. This puts the onus on the caller to add a unique
element when calling multiple times, e.g. "ctr1-$(safename)".
This is not too much of a burden. Major benefit is making
it easy for a reader to associate containers, pods, volumes,
images within a given test.
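The convention in practice:

    ctr1="ctr1-$(safename)"
    ctr2="ctr2-$(safename)"
    vol="vol-$(safename)"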
And, use dashes, not underscores. "podman generate kube"
removes underscores, making it very difficult to do
things like "podman inspect $podname" (because we need
to generate "$podname_with_underscores_removed")
Signed-off-by: Ed Santiago <santiago@redhat.com>
Many instances. Simplify by having _write_test_yaml() define
the variable TESTYAML and make it available to callers.
Global replace, with care taken to undo any instances
where _write_test_yaml() is not invoked first.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Get rid of the last two instances of the clunky $testYaml
writing, by adding a 'volume=' arg to _write_test_yaml()
Signed-off-by: Ed Santiago <santiago@redhat.com>
Remnant from the very early days of this test file. There's
a boilerplate $testYaml string used in many tests; each
use requires three clunky lines of prep. Most of those
were not needed; we can (and now do) use _write_test_yaml()
instead.
There are still two instances that could not be fixed in
this commit. I will do those next. This commit is kept
relatively simple for ease of review.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Many system tests use hardcoded names for containers, images,
and everything. This has worked because system tests run
serially. It will not work if we ever run in parallel.
Create a new safename() helper, and use it as follows:
myctr=c_$(safename)
myvol1=v1_$(safename)
...
Find current instances of hardcoded names, and replace
with safe ones.
Whether or not we ever end up parallelizing system tests,
this is simply good practice.
There are far too many instances to fix in one (reviewable) PR.
This is commit 1 of N.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Shut down the containers store so that the home directory mount is not
leaked when "podman system service" exits.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
netavark can use iptables or nftables as its firewall driver; thus, when
we try to flush rules, make sure we try both, to keep the test working
when we switch the default to nftables.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This commit gets tests working under the new local-registry system:
* amend a few image names, mostly just sticking to a consistent
list of those images in our registry cache. Mostly minor
tag updates.
* trickier: pull_test: change some error messages, and remove
a test that's now a NOP. Basically, with a local (unprotected)
registry we always get "404 manifest unknown"; with a real
registry we'll get "403 I can't tell you".
* trickiest: seccomp_test: build our own images at run time,
with our desired labels. Until now we've been pulling
prebuilt images, but those will not copy to the local
cache registry. Something about v1? Anyhow, I gave up
trying to cache them, and the workaround is straightforward.
Also took the liberty of strengthening a few error-message checks
Signed-off-by: Ed Santiago <santiago@redhat.com>
Run root e2e & system tests using composefs on rawhide.
Write magic settings to storage.conf. That part is easy.
e2e tests, however, ignore storage.conf. They require everything
to be specified on the command line. And "everything", in the
case of composefs, includes a long complicated --pull-options
string which in turn requires containers-storage PR 1966
which, as of this writing, is finally vendored into podman.
Signed-off-by: Ed Santiago <santiago@redhat.com>
When a system has one IPv4 and one IPv6 address, hostname -I will show
both, causing a failure in the case where there should be only one
address.
To fix this, stop using hostname -I; use ip -4 to list only v4
addresses, then use jq to filter the output accordingly.
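A sketch of the replacement (exact jq filter assumed):

    ip -4 -j addr show scope global | jq -r '.[].addr_info[].local'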
Fixes #23227
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The code tried to avoid joining the userns from conmon directly; instead
it joined only to read the pid file and then sent it back to us so we
could join the userns. From the comment, this was done because we could
not read the pid file. However, this is no longer true as of commit
49eb5af301: the file is now always owned by the real user.
This means we can just remove this special logic and join the namespace
directly there. A test has been added to check the rejoin logic with a
custom uidmapping.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
- fix test name to reflect that it's not pasta-only
(followup from #21563)
- in one podman-update test run in OpenQA, defer assertion
failures so we can gather better data on regressions.
This would've been helpful in diagnosing bz2281805.
- add an error-message check to one test that needed it
(found by accident)
- add distro-integration test tag to a handful of new tests,
so they run in OpenQA. Found via 'git diff 33891e8 test/system'
and scanning for '^\+@test '. I only added tests that IMO
have some risk of interacting poorly with kernel or systemd
updates, e.g. quadlet, modules, tmpfs+noswap.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The test checked that the default volume is not on tmpfs; however, what
it should really check is that the volume is on our container storage fs.
It is possible that users run the storage on top of tmpfs, so this test
always failed there.
The better check is to compare the fs of the graphroot and the volume.
Unfortunately, for unknown reasons, stat -f -c %T returns UNKNOWN and not
the actual fs. I have no idea why; to work around it, we now parse
/proc/mounts manually for the fs. Not nice, but at least it works
correctly.
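A sketch of the /proc/mounts approach (helper name hypothetical, prefix
match approximate):

    fs_of() {
        awk -v p="$(readlink -f "$1")" \
            'index(p, $2) == 1 && length($2) > length(m) { m=$2; t=$3 }
             END { print t }' /proc/mounts
    }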
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add some test steps to quadlet - ContainerName. These steps ensure
that the default configuration for quadlet-generated service files
sends stdout/stderr/syslog to journald.
Signed-off-by: Yiqiao Pu <ypu@redhat.com>
The restore code path never called completeNetworkSetup(), which means
that the hosts/resolv.conf files were not populated. The fix is simply to
call this function. There is a big catch here: technically it is
supposed to be called after the container is created but before it is
started. There is no such thing for restore; the container runs right
away. This means that if we do the call afterwards, there is a short
interval where the file is still empty. Thus I decided to call it
before, which means it does not work with PostConfigureNetNS (userns);
but as that does not work today anyway, I don't see it as a problem.
Fixes #22901
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
With (esp. Debian) CI VM images built by
https://github.com/containers/automation_images/pull/338, CI no longer
tests with runc or cgroups v1. Add logic to fail under these
conditions. Prune back high-level YAML/script envars and logic formerly
required to support these things.
Signed-off-by: Chris Evich <cevich@redhat.com>
When a user specifies an invalid connection in CONTAINER_CONNECTION,
podman should return a proper error saying so. Currently it ignored the
error, and rootFlags() just exited early without defining any flags. This
then caused a panic when trying to use the flags later.
To address this, first store the connection error in the
PodmanConfig struct and do not abort right away during flag setup. This
is important, as the user might have specified a flag with a valid remote
connection. As such we check all flags, and only when none were given do
we return the connection error.
Also, while at it, I noticed that the default connection reported via
podman --help was wrong, as it only used the old containers.conf field
and did not consider the podman-connections.json default.
New regression tests have been added to make sure it behaves correctly.
This fixes the problem reported in PR #22997.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
PR #22821 (CI speedup) was overly aggressive in one kube test.
It's flaking. Bump the timeout up from 3s to 4s.
Signed-off-by: Ed Santiago <santiago@redhat.com>
At the end of all tests always check for leaks. That should make us more
robust against adding tests at the end that would leak stuff otherwise.
TODO: something seems wrong with bats when returning an error in
teardown_suite(); it prints a warning:
bats warning: Executed <NUM+1> instead of expected <NUM> tests
And the output is also formatted weirdly in this case, where the podman
args are split over multiple lines.
But the test fails as expected, so I don't think it is a problem.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
While these are not really slow, they still take about 100-250ms each
when I time them locally. Given they are run for every test, this adds up
quickly. Looking at CI logs I can see the timings for skipped
tests are all in the 600ms range. So I think it is safe to assume that
these functions need to get faster.
We have over 670 test cases currently, so we are talking about over 400s
spent in these functions in CI. This allows for big gains.
Now, overall this is a tricky trade-off: while all tests should clean up
after themselves, there is no guarantee of that, and as such errors can
leak into other tests, making debugging much harder. To work at least a
bit against this, teardown checks whether the test was successful and
only skips the podman commands based on that. Without this, a single
flake could cause all following tests to fail.
As such this commit does the proper setup once on suite start, then only
after a test failed.
In order for this to work at all we have to fix all leaks first, see
previous commits. And then, for the future, keep a very strong eye on
this during reviews.
Also add a PODMAN_BATS_LEAK_CHECK option
By default tests must clean up after themselves, and to speed up CI we no
longer do any cleanup in teardown by default. However, there are still
many cases where we might have to debug a leak, so add a new
PODMAN_BATS_LEAK_CHECK env option that can be set and should cause
teardown to fail if the test did not clean up properly.
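A sketch of the resulting guard; BATS_TEST_COMPLETED is set by bats on
success, and the cleanup helper name is hypothetical:

    teardown() {
        if [[ -n "$PODMAN_BATS_LEAK_CHECK" || "$BATS_TEST_COMPLETED" != "1" ]]; then
            full_cleanup_and_leak_check
        fi
    }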
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Remove leaking containers and remove unnecessary push/pull args. For push
it tried to push an image as the argument, which makes no sense; and for
pull we tried to pull the argument as an image, which is also wrong.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This is the same as what --squash-all is doing, and we already support
--squash with --layers=true since this is the default.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If a container was stopped and we tried to start it before cleanup was
called, it tried to reuse the network, which caused a panic, as the pasta
code cannot deal with that. It is also never correct, as the netns must
be created by the runtime when custom user namespaces are used. As
such the proper thing is to clean the netns up first.
Also change an e2e test to report better errors. It is not directly
related to this change, but it failed on v1 of this patch, so we noticed
the ugly error message it produced. Thanks to Ed for the fix.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This fixes a regression added in commit 4fd84190b8: because the name was
overwritten by the createTimer() call, removeTransientFiles()
removed the new timer and not the startup healthcheck one. And then,
when the container was stopped, we leaked it, as the wrong unit name was
in the state.
A new test has been added to ensure the logic works and we never leak
the systemd timers.
Fixes #22884
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The buildah build kill trick is bad, as we have to sleep and wait to avoid
flakes, which takes time. Instead it is possible to redo this build part
manually with buildah commands. It is not trivial and is harder to
understand, but it saves 2-3s, so I think it is worth it.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
First, as root, don't wait 5s for the timeout; 1s is enough. Also switch
to the curl --max-time option instead; that way we know we do not
kill curl before it possibly had a chance to do anything.
Second, combine the podman inspect commands into one. This makes the test
faster by over one second, as we save a bunch of podman commands.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Another case of a container that does not exit on SIGTERM, so we waste
10s. Now, because our container reacts to SIGTERM and exits 0, the
systemd unit status changes to inactive instead of failed.
And, most importantly, add Notify=yes, because the socat call always
failed as the default is to not leak the notify socket into the container.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>