When no containers could be started we need to make sure the unit status
reflects this. This means we should not send the READY=1 message and
should not keep the service container running when we were unable to
start any container.
There is the question of what should happen when only a subset of the
containers was started. For systemd we can only report either running
or failed, and since podman kube play also keeps partially started pods
running, I opted to let systemd keep considering this a success.
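For reference, a minimal sketch of the gating this implies, using the
sd_notify helper from github.com/coreos/go-systemd (the helper name and
counter are illustrative, not podman's actual code):

    package main

    import (
        "errors"

        "github.com/coreos/go-systemd/v22/daemon"
    )

    // notifyIfStarted only sends READY=1 when at least one container
    // came up, so systemd marks the unit failed otherwise.
    func notifyIfStarted(started int) error {
        if started == 0 {
            return errors.New("no containers were started")
        }
        _, err := daemon.SdNotify(false, daemon.SdNotifyReady)
        return err
    }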
Fixes #20667
Fixes https://issues.redhat.com/browse/RHEL-80471
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When starting a container, consider healthcheck errors fatal. That way
users know when systemd-run failed to set up the timer to run the
healthcheck, and we don't get into a state where the container is
running but the healthcheck is not.
This also fixes the broken error reporting from the systemd-run exec:
if the binary could not be run, the output was just empty, leaving
users with no idea what failed.
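A rough sketch of the error propagation described above, assuming a
plain exec of systemd-run (the helper is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runSystemdRun includes systemd-run's combined output in the
    // returned error so a setup failure is no longer silent; callers
    // treat it as fatal for the container start.
    func runSystemdRun(args ...string) error {
        cmd := exec.Command("systemd-run", args...)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("systemd-run failed: %w: %s", err, string(out))
        }
        return nil
    }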
Fixes #25034
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This commit adds the "secret" Event type and emits "create" and
"remove" events for this Event type when a secret is created or
removed.
This can be used, for example, by Podman interfaces to view and manage
secrets.
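The events can be consumed like any other (e.g. podman events
--filter type=secret); an illustrative sketch of their JSON shape,
modeled on podman's existing event output (exact fields may differ):

    package main

    // Illustrative event shape; field names follow podman's JSON
    // event output but are not authoritative.
    type Event struct {
        Type   string `json:"Type"`   // now may be "secret"
        Status string `json:"Status"` // "create" or "remove"
        ID     string `json:"ID"`     // secret ID
        Name   string `json:"Name"`   // secret name
    }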
Fixes: #24030
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
This resolves an ordering issue that prevented quotas from being
applied. XFS quotas are applied recursively, but only for
subdirectories created after the quota is applied; if we create
`_data` before the quota, and then use `_data` for all data in
the volume, the quota will never be used by the volume.
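A condensed sketch of the corrected ordering (names are illustrative,
not the actual volume code):

    package main

    import (
        "os"
        "path/filepath"
    )

    // setupVolume applies the XFS project quota to the volume
    // directory before creating _data, since the quota only
    // propagates to subdirectories created afterwards.
    func setupVolume(volDir string, applyQuota func(string) error) error {
        if err := os.MkdirAll(volDir, 0o700); err != nil {
            return err
        }
        // Quota first: it is inherited only by directories created
        // after it is set.
        if err := applyQuota(volDir); err != nil {
            return err
        }
        // Now _data (and all volume data under it) falls under the
        // quota.
        return os.Mkdir(filepath.Join(volDir, "_data"), 0o755)
    }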
Also, add a system test verifying that volume quotas work as designed,
using an XFS-formatted loop device. This should prevent further
regressions in basic quota functionality, such as quotas being shared
between volumes.
Fixes #25368
Signed-off-by: Matt Heon <mheon@redhat.com>
Add the ability to remove all artifacts with a --all|-a option in podman
artifact rm.
Fixes: https://issues.redhat.com/browse/RUN-2512
Signed-off-by: Brent Baude <bbaude@redhat.com>
In a different PR review, it was noted that defined error types for
artifacts were lacking. We have these for most other commands, and they
help with error differentiation. The changes here define the errors,
implement them in the library, and adapt test verifications to match.
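A minimal sketch of the sentinel-error pattern this describes (the
names are illustrative, not necessarily the ones added):

    package main

    import (
        "errors"
        "fmt"
    )

    // Sentinel errors let callers and tests branch with errors.Is
    // instead of matching message strings.
    var (
        ErrArtifactNotExist      = errors.New("artifact does not exist")
        ErrArtifactAlreadyExists = errors.New("artifact already exists")
    )

    func getArtifact(name string) error {
        // ... lookup elided ...
        return fmt.Errorf("%s: %w", name, ErrArtifactNotExist)
    }

Tests can then assert errors.Is(err, ErrArtifactNotExist) rather than
comparing error text.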
Signed-off-by: Brent Baude <bbaude@redhat.com>
Fix a bug in the artifact code where --retry-delay was being
discarded.
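Presumably the fix just forwards the parsed value; a hedged sketch
using the retry helper from containers/common/pkg/retry (assuming its
Options/IfNecessary API):

    package main

    import (
        "context"
        "time"

        "github.com/containers/common/pkg/retry"
    )

    // pullWithRetry forwards the --retry-delay value instead of
    // dropping it (illustrative, not the actual artifact code).
    func pullWithRetry(ctx context.Context, attempts int, delay time.Duration, pull func() error) error {
        opts := retry.Options{
            MaxRetry: attempts,
            Delay:    delay, // previously discarded
        }
        return retry.IfNecessary(ctx, pull, &opts)
    }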
Fixes: https://issues.redhat.com/browse/RUN-2511
Signed-off-by: Brent Baude <bbaude@redhat.com>
Buildah bats tests have been made (mostly) parallel-safe in the past
few months. One test is flaking, but it's not a test that needs to be
run under podman: that functionality is almost entirely
buildah-manifest-push, so it uses the buildah binary and doesn't
exercise anything under podman.
Therefore:
1) run bud tests with -j$(nproc) on fastvm (was: standardvm)
2) desperate scramble to parallelize podman system service.
May not be quite 100% perfect, but I think this is in good
enough shape for someone to adopt and push through.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Since commit 708fe0af in buildah the tests can run in parallel; enable
that here to get the same speedup.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This is valid, and the upstream linter allows it, but somehow with
golangci-lint it produces an error:
Success matcher only support a single error value, or function with Gomega as its first parameter
I reported a bug upstream[1] but for now let's just ignore it so we can
update the linter.
[1] https://github.com/golangci/golangci-lint/issues/5398
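For context, the two forms the quoted message says Succeed() supports
look roughly like this (hedged sketch; doSomething is a placeholder):

    package example

    import (
        . "github.com/onsi/gomega"
    )

    func doSomething() error { return nil } // placeholder

    func example() {
        // A function returning a single error value:
        Eventually(func() error { return doSomething() }).Should(Succeed())
        // A function taking Gomega as its first parameter:
        Eventually(func(g Gomega) {
            g.Expect(doSomething()).To(Succeed())
        }).Should(Succeed())
    }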
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add a new command to extract the blob content of the artifact store to a
local path.
Fixes https://issues.redhat.com/browse/RUN-2445
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As part of our database init, we perform a check of the current
values for a few fields (graph driver, graph root, static dir,
and a few more) to validate that Libpod is being started with a
sane & sensible config, and the user's containers can actually be
expected to work. Basically, we take the current runtime config
and compare against values cached in the database from the first
time Podman was run.
We've had some issues with this logic earlier this year around
symlink resolution, but this is a new edge case. Somehow, the
database is being loaded with the empty string for some fields
(at least graph driver) which is causing comparisons to fail
because we will never compare against "" for those fields - we
insert the default value instead, assuming we have one.
Having a value of "" in the database largely invalidates the
check so arguably we could just drop it, but what BoltDB did -
and what SQLite does after this patch - is to use the default
value for comparison instead of "". This should still catch some
edge cases, and shouldn't be too harmful.
What this does not do is identify or solve the reason that we are
seeing the empty string in the database at all. From my read on
the logic, it must mean that the graph driver is explicitly set
to "" in the c/storage config at the time Podman is first run and
I'm not precisely sure how that happens.
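A condensed sketch of the comparison after this patch (illustrative,
not the actual SQLite state code):

    package main

    // validateField mirrors what BoltDB did: when the database cached
    // an empty string, compare against the known default instead.
    func validateField(dbValue, currentValue, defaultValue string) bool {
        if dbValue == "" {
            dbValue = defaultValue
        }
        return dbValue == currentValue
    }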
Fixes #24738
Signed-off-by: Matt Heon <mheon@redhat.com>
In our CI env we use a special registries.conf file
(test/registries.conf) to redirect some registries, but it also
defines:
[[registry]]
location="localhost:5000"
insecure=true
That means port 5000 is trusted by default, so the
/v1.40/images/localhost:5000/myrepo/push?tag=mytag test in
12-imagesMore fails when the test registry uses port 5000.
Example failure:
not ok 360 [12-imagesMore] POST /v1.40/images/localhost:5000/myrepo/push?tag=mytag [-d {}] : status
# expected: 500
# actual: 200
# response: {"status":"The push refers to repository [localhost:5000/myrepo:mytag]"}
{"status":"mytag: digest: sha256:d40f8191d6dae366339e318d1004258022f56bd8c649720a72060fad20019c9d size: 758"}
To avoid using port 5000, simply start at 5001.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
First, refactor our existing graph traversal code to improve code
sharing. There still isn't much sharing between inward traversal
(stop, remove) and outward traversal (start) but stop and remove
are sharing most of their code, which seems a positive.
Second, add a new graph-traversal function to stop containers.
We already had start and remove; stop uses the newly-refactored
inward-traversal code which it shares with removal.
Third, rework the shared stop/removal inward-traversal code to
add locking. This allows parallel execution of stop and removal,
which should improve the performance of `podman pod rm` and
retain the performance of `podman pod stop` at about what it is
right now.
Fourth and finally, use the new graph-based stop when possible
to solve unordered stop problems with pods - specifically, the
infra container stopping before application containers, leaving
those containers without a working network.
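A very rough sketch of the inward-traversal shape this describes
(illustrative types; the real code also handles errors, timeouts, and
per-container locking):

    package main

    import "sync"

    // node is an illustrative container node: a container stops only
    // after everything that depends on it has stopped, so the infra
    // container goes down last.
    type node struct {
        mu         sync.Mutex
        stopped    bool
        dependents []*node // containers depending on this one
        stop       func() error
    }

    func stopInward(n *node) error {
        var wg sync.WaitGroup
        for _, dep := range n.dependents {
            wg.Add(1)
            go func(d *node) {
                defer wg.Done()
                _ = stopInward(d)
            }(dep)
        }
        wg.Wait()

        n.mu.Lock()
        defer n.mu.Unlock()
        if n.stopped { // a node may be reached via several parents
            return nil
        }
        n.stopped = true
        return n.stop()
    }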
Fixes https://issues.redhat.com/browse/RHEL-76827
Signed-off-by: Matt Heon <mheon@redhat.com>
... because (podman system reset) will delete all of it,
interfering with the test storing other data in the directory.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Checking the healthcheck service and timer with `systemctl status`
caused an unexpected exit code `3`. The status check can run after the
HealthCheck has already exited; at that point the transient service is
in a dead state, so `systemctl status` terminates with exit code `3`.
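For reference, exit status `3` from `systemctl status` follows the LSB
convention for "program is not running", so the check has to tolerate
it; a hedged sketch:

    package main

    import (
        "errors"
        "os/exec"
    )

    // unitActive treats systemctl status exit code 3 ("unit is not
    // active") as a normal state rather than a failure, since the
    // transient healthcheck service may already have exited.
    func unitActive(unit string) (bool, error) {
        err := exec.Command("systemctl", "status", unit).Run()
        if err == nil {
            return true, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 3 {
            return false, nil // dead/inactive, not an error
        }
        return false, err
    }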
Fixes: https://github.com/containers/podman/issues/25204
Signed-off-by: Jan Rodák <hony.com@seznam.cz>