Quick overview of podman system tests. The idea is to use BATS, but with a framework for making it easy to add new tests and to debug failures.
Quick Start
Look at 000-TEMPLATE for a simple starting point. This introduces the basic set of helper functions (a short sketch using a few of them follows the list):
- setup (implicit) - establishes a test environment.
- parse_table - you can define tables of inputs and expected results, then read those in a while loop. This makes it easy to add new tests. Because bash is not a programming language, the caller of parse_table sometimes needs to massage the returned values; 030-run.bats offers examples of how to deal with the more typical such issues.
- run_podman - runs the command defined in $PODMAN (default: 'podman', but could also be './bin/podman' or 'podman-remote'), with a timeout. Checks its exit status.
- assert - compares actual vs expected output. Emits a useful diagnostic on failure.
- die - outputs a properly-formatted message to stderr, and fails the test.
- skip_if_rootless - if rootless, skip this test with a helpful message.
- skip_if_remote - like the above, but skip if testing podman-remote.
- safename - generates a pseudorandom lower-case string suitable for use in names for containers, images, volumes, or any other object. The string includes the BATS test number, making it possible to identify the source of leaks (failures to clean up) at the end of tests.
- random_string - returns a pseudorandom alphanumeric string suitable for verifying I/O.
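A minimal sketch of how a few of these helpers fit together in a test is shown below. It assumes $IMAGE (the test image variable set up by the suite's helpers) and uses a made-up container name and payload; 000-TEMPLATE and 030-run.bats remain the authoritative references.

# Illustrative sketch only -- assumes $IMAGE is provided by helpers.bash, as in the real tests.
@test "sketch - run_podman + safename + assert" {
    # safename gives a per-test, leak-traceable unique name
    cname="c-$(safename)"

    # run_podman logs the command, its output, and its exit status
    run_podman run --name $cname $IMAGE echo hello

    # assert prints actual vs expected on failure
    assert "$output" == "hello" "container output"

    run_podman rm $cname
}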
Test files are of the form NNN-name.bats where NNN is a three-digit
number. Please preserve this convention; it simplifies viewing the
directory and understanding test order. In particular, 00x tests
should be reserved for a first-pass, fail-fast subset of tests:
bats test/system/00*.bats || exit 1
bats test/system
...the goal being to provide quick feedback on catastrophic failures without having to wait for the entire test suite.
Running tests
To run the tests locally in your sandbox, using hack/bats is recommended; check hack/bats --help for information about usage.
To run the entire suite, use make localsystem or, for podman-remote testing, make remotesystem.
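For reference, a typical local workflow using only the entry points named above (run from the top of the podman source tree) might look like this:

hack/bats --help     # usage for the sandbox test runner
make localsystem     # full suite against the locally built podman
make remotesystem    # full suite against podman-remote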
Analyzing test failures
The top priority for this scheme is to make it easy to diagnose
what went wrong. To that end, run_podman always logs all invoked
commands, their output, and their exit codes. In a normal run you will
never see this, but BATS will display it on failure. The goal here is
to give you everything you need to diagnose without having to rerun tests.
The assert comparison function is designed to emit useful diagnostics,
in particular, the actual and expected strings. Please do not use
the horrible BATS standard of [ x = y ]; that's nearly useless
for tracking down failures.
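For illustration, the fragment below contrasts the two styles; the exact set of comparison operators accepted by assert is documented in helpers.bash, and the command and expectation here are only an assumed example:

run_podman info --format '{{.Host.OCIRuntime.Name}}'

# preferred: on failure, assert shows both the actual and the expected value
assert "$output" != "" "OCI runtime name from podman info"

# discouraged: a bare test reveals nothing about what $output actually was
# [ -n "$output" ]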
If the above are not enough to help you track down a failure:
Debugging tests
Some functions have dprint statements. To see the output of these,
set PODMAN_TEST_DEBUG="funcname" where funcname is the name of
the function or perhaps just a substring.
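For example (the function name here is only illustrative; pick one from helpers.bash that actually contains dprint calls):

# show dprint output from any helper whose name matches "wait_for"
PODMAN_TEST_DEBUG="wait_for" bats test/system/030-run.bats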
Requirements
- bats
- jq
- skopeo
- nmap-ncat
- httpd-tools
- openssl
- socat
- buildah
- gnupg
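On Fedora-like systems these correspond closely to package names, so an install sketch might be (package names can differ on other distributions, and gnupg in particular may be packaged as gnupg2):

sudo dnf install bats jq skopeo nmap-ncat httpd-tools openssl socat buildah gnupg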
Further Details
TBD. For now, look in helpers.bash; each helper function
has (what are intended to be) helpful header comments. For even more
examples, see and/or run helpers.t; that's a regression test
and provides a thorough set of examples of how the helpers work.