since the effect would be to lower the rlimits when their definition
is higher than the default value.
The test doesn't fail on the previous version, unless the system is
configured with a nofile ulimit higher than the default value.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2317721
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When we are activated by systemd, the code assumed that we had a valid
URL, which was not the case, so it failed to parse the URL, causing the
info call to fail every time.
This fixes two problems: first, add the scheme to the systemd-activated
listener URL so it can be parsed correctly; second, do not parse it as
a URL at all, since all the info call cares about is whether the
listener is unix and the file path exists.
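A minimal sketch of that second check, with hypothetical names (not the actual podman source):

```go
package system

import (
	"os"
	"strings"
)

// listenerUsable reports whether the listener URI points at an existing
// unix socket, without running it through a full URL parser.
func listenerUsable(uri string) bool {
	path, ok := strings.CutPrefix(uri, "unix://")
	if !ok {
		return false // not a unix listener
	}
	_, err := os.Stat(path)
	return err == nil
}
```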
Fixes #24152
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This commit was automatically cherry-picked
by buildah-vendor-treadmill v0.3
from the buildah vendor treadmill PR, #13808
* Fix conflict caused by Ed's local-registry PR in buildah
* Wire in the "new" --retry and --retry-delay options; these have
existed for a while but were non-functional.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
These flags can affect the output of the HealthCheck log. Currently, when a container is configured with a HealthCheck, the output from the HealthCheck command is only logged to the container status file, which is accessible via `podman inspect`.
It is also limited to the last five executions and the first 500 characters per execution.
This makes debugging past problems very difficult, since the only information available about a failure of the HealthCheck command is the generic `healthcheck service failed` record. The new flags below address this; an example invocation follows the list.
- The `--health-log-destination` flag sets the destination of the HealthCheck log.
  - `none`: (default behavior) `HealthCheckResults` are stored in the overlay container. (For example: `$runroot/healthcheck.log`)
  - `directory`: creates a log file named `<container-ID>-healthcheck.log` with JSON `HealthCheckResults` in the specified directory.
  - `events_logger`: the log will be written with the logging mechanism set by `events_logger`. It also saves the log to a default directory, for performance on a system with a large number of logs.
- The `--health-max-log-count` flag sets the maximum number of attempts in the HealthCheck log file.
  - A value of `0` indicates an infinite number of attempts in the log file.
  - The default value is `5` attempts in the log file.
- The `--health-max-log-size` flag sets the maximum length of the log stored.
  - A value of `0` indicates an infinite log length.
  - The default value is `500` log characters.
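For example, a hypothetical invocation combining the three flags (the image and health command are placeholders): `podman run --health-cmd 'curl -f http://localhost || exit 1' --health-log-destination /var/log/container-health --health-max-log-count 0 --health-max-log-size 1000 <image>` would keep an unlimited number of entries, each truncated to 1000 characters, in `/var/log/container-health/<container-ID>-healthcheck.log`.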
Add --health-max-log-count flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-max-log-size flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-log-destination flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Modify `RemoveConnections` to verify the new default system connection's
rootful state matches the rootful-ness of the podman machine it is associated
with.
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
Moves the `DefaultMachineName` constant out of `pkg/machine` and into
`pkg/machine/define`.
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
* podman manifest remove doesn't accept references as descriptions of
what to remove from a list or index; only use digests in the man page
* podman manifest remove only removes one thing at a time; correct the
man page examples
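In other words, the corrected usage per the man page is `podman manifest remove <list-or-index> <digest>`, one digest per invocation.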
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When sorting by repository, a user most likely wants the tags sorted as
well, at the very least to get stable output, as the order could
otherwise be changed by podman tag/pull even if they keep using the same
tag name.
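The fix boils down to a two-key comparison; a minimal sketch of the pattern (illustrative, not the actual podman code):

```go
package main

import (
	"fmt"
	"sort"
)

type image struct{ Repository, Tag string }

func main() {
	images := []image{
		{"quay.io/podman/stable", "v5"},
		{"quay.io/podman/stable", "latest"},
		{"docker.io/library/alpine", "3.20"},
	}
	// Sort by repository first, then by tag, so the output stays
	// stable even after images are re-tagged or re-pulled.
	sort.Slice(images, func(i, j int) bool {
		if images[i].Repository != images[j].Repository {
			return images[i].Repository < images[j].Repository
		}
		return images[i].Tag < images[j].Tag
	})
	fmt.Println(images)
}
```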
Fixes #23803
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The kube generate command can now generate a yaml for
the Job kind, and the kube play command can create a pod
and containers with podman when passed a Job yaml.
Add relevant tests and docs for this.
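For example (assuming the kind is selected via the existing --type flag, as for the other supported kinds): `podman kube generate --type job mycontainer > job.yaml`, then `podman kube play job.yaml`.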
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Podman machine list now supports a new option, --all-providers, which lists all machines from all providers.
Signed-off-by: Ashley Cui <acui@redhat.com>
The podman container cleanup process runs asynchronously, and by the
time it gets the lock it is possible that another podman process has
already done the cleanup and then performed a new init() to start the
container again. If the cleanup process then proceeds after getting the
lock, it will cause very weird things.
This can be observed in the remote start API as CI flakes.
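The standard fix for this kind of race is to re-check the container state once the lock is held; a simplified sketch of the pattern (not the actual libpod code):

```go
package cleanup

import "sync"

type State int

const (
	Running State = iota
	Stopped
)

type container struct {
	mu    sync.Mutex
	state State
}

func (c *container) cleanup() {
	c.mu.Lock()
	defer c.mu.Unlock()
	// Re-check after acquiring the lock: another podman process may
	// have already cleaned up and re-init()ed the container while we
	// were waiting.
	if c.state != Stopped {
		return // the container was started again, nothing to do
	}
	// ... perform the actual cleanup ...
}
```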
Fixes #23754
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When we use passthrough logging, the --rmi option should not try to
delete the image right away. Simply speaking, passthrough only means we
do not print the container id; we should never try to delete the image
here, as this will be done in the cleanup process.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Since commit 458ba5a8af the cleanup process now removes the image as
well; thus the removal is racy and it will cause an error here.
The code tried to ignore the error with errors.Is(), but this never
works across the remote API. However, the API already has an ignore
option, so just use that, and fix the error message so that we can
easily find the root cause and I do not have to guess where the log was
written.
Fixes #23719
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This started off as an attempt to make `podman stop` on a
container started with `--rm` actually remove the container,
instead of just cleaning it up and waiting for the cleanup
process to finish the removal.
In the process, I realized that `podman run --rmi` was rather
broken. It was only done as part of the Podman CLI, not the
cleanup process (meaning it only worked with attached containers)
and the way it was wired meant that I was fairly confident that
it wouldn't work if I did a `podman stop` on an attached
container run with `--rmi`. I rewired it to use the same
mechanism that `podman run --rm` uses, so it should be a lot more
durable now, and I also wired it into `podman inspect` so you can
tell that a container will remove its image.
Tests have been added for the changes to `podman run --rmi`. No
tests for `stop` on a `run --rm` container as that would be racy.
Fixes #22852
Fixes RHEL-39513
Signed-off-by: Matt Heon <mheon@redhat.com>
The new golangci-lint version 1.60.1 has problems with typecheck when
linting remote files. We have certain packages that should never be
included in remote builds, but typecheck tries to compile all of them
anyway; this never works, and it seems to ignore the exclude files we
gave it.
To fix this the proper way is to mark all packages we only use locally
with !remote tags. This is a bit ugly but more correct. I also moved the
DecodeChanges() code around, as it is called from the client, so the
handlers package, which should only be remote, doesn't really fit anyway.
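For reference, the constraint at the top of each local-only file looks like this (package name illustrative):

```go
//go:build !remote

// This file is excluded whenever the "remote" build tag is set, so
// remote builds (and typecheck runs over them) never compile it.
package libpodutil
```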
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fixes: e62c928642 ("Make podman-compose refer to podman-compose(1) when using an external provider")
- test: add coverage for PODMAN_COMPOSE_WARNING_LOGS
Signed-off-by: Petter Mikkelsen <43xhyr9m@anonaddy.me>
Using the ExactArgs(1) function is better because we have less
duplication of the error text, and the ValidArgsFunction uses that to
suggest shell completion. Before this commit, the command would suggest
connection names even if there was already one arg set on the cli.
However, because there is the --all option, we still must exclude that
first.
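A sketch of the resulting pattern with cobra (illustrative names, not the exact podman source):

```go
package connection

import "github.com/spf13/cobra"

func rmCommand(connectionNames func() []string) *cobra.Command {
	cmd := &cobra.Command{
		Use: "remove [options] NAME",
		// Handle --all before ExactArgs: with --all no NAME is allowed.
		Args: func(cmd *cobra.Command, args []string) error {
			if all, _ := cmd.Flags().GetBool("all"); all {
				return cobra.NoArgs(cmd, args)
			}
			return cobra.ExactArgs(1)(cmd, args)
		},
		ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			// Only suggest connection names for the first argument.
			if len(args) > 0 {
				return nil, cobra.ShellCompDirectiveNoFileComp
			}
			return connectionNames(), cobra.ShellCompDirectiveNoFileComp
		},
		RunE: func(cmd *cobra.Command, args []string) error { return nil },
	}
	cmd.Flags().BoolP("all", "a", false, "Remove all connections")
	return cmd
}
```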
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Change the warning message at runtime to refer to the man page of podman-compose instead of "the documentation"
Add instructions in the man page on how to disable the warning emitted by podman-compose when using an external compose provider
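For example, per the new man-page instructions, setting `PODMAN_COMPOSE_WARNING_LOGS=false` in the environment disables the warning (the variable name is the one covered by the test in the related commit above).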
Signed-off-by: marinmo <bugzilla@marinmo.org>
The events code makes use of two channels, one for the events and one
for the resulting error. Then in the main file we have a loop reading
from both channels that should exit on the first error it gets.
However, in case the event channel is closed before the error channel
contains the error, it could cause an early exit because it looked like
all events were done. Commit c46884aa93 fixed that somewhat by checking
for an error in the error channel before exiting. This however was
still racy, as it added a default case in the select, which makes the
channel check non-blocking; thus the error might not yet have been sent
into the channel.
To fix this we should make it a blocking read to wait for the error in
the channel. Also, the err != nil check can be removed, as we either
return err or nil anyway.
And as a last step, make sure the error channel is closed; that
prevents us from blocking forever in case the main select already
processed the nil error.
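A condensed sketch of the fixed pattern (simplified from the real events code; the producer is expected to close the event channel when it is done):

```go
package events

type Event struct{ /* ... */ }

func stream(produce func(chan<- Event) error, handle func(Event)) error {
	eventCh := make(chan Event)
	errCh := make(chan error, 1)

	go func() {
		defer close(errCh)        // never leave the reader blocked forever
		errCh <- produce(eventCh) // final result, possibly nil
	}()

	for {
		select {
		case ev, ok := <-eventCh:
			if !ok {
				// Events are done; block until the producer reports
				// its result instead of assuming success.
				return <-errCh
			}
			handle(ev)
		case err := <-errCh:
			return err // no err != nil check needed, nil is fine too
		}
	}
}
```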
Fixes #23165
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The driver is now hardcoded again, and there can only be
one type of mount at a time (which one changes over time).
Revert "Make it possible to select the volume driver"
This reverts commit 6630e5cf66.
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
Close a loophole that would allow you to assign more memory to a podman
machine than the system has.
Fixes: #18206
Signed-off-by: Brent Baude <bbaude@redhat.com>
Podman machine reset now removes and resets machines from all providers available on the platform.
On Windows, if the user does not have admin privileges, machine will only reset WSL, but will emit a warning that it is unable to remove Hyper-V machines without elevated privileges.
Signed-off-by: Ashley Cui <acui@redhat.com>
When the user specifies a Containerfile or Dockerfile with the -f flag in podman build, and the file does not exist, the error should be intuitive to the user.
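A minimal sketch of the kind of up-front check (names are illustrative):

```go
package build

import (
	"fmt"
	"os"
)

// validateContainerfile returns a clear error when the -f path does not
// exist, instead of a harder-to-read failure later in the build.
func validateContainerfile(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("the specified Containerfile or Dockerfile does not exist: %w", err)
	}
	return nil
}
```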
Fixed: #22940
Signed-off-by: Kevin Cui <bh@bugs.cc>
The pod was set after we checked the namespace, and the namespace code
only checked the --pod flag but didn't consider the --pod-id-file
option. As such, fix the check to first set the pod option on the spec,
then use that for the namespace. Also make sure we always use an empty
default, as otherwise it would be impossible for the backend to know
whether a user requested a specific userns or not; i.e. even with
PODMAN_USERNS set in the environment, a container should still get the
userns from the pod and not use the variable in this case. Therefore,
unset it from the default cli value.
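A simplified sketch of the corrected ordering, with hypothetical names:

```go
package specgen

import (
	"os"
	"strings"
)

type spec struct {
	Pod    string
	UserNS string
}

// resolve sets the pod first (from --pod or --pod-id-file) so that the
// namespace logic can consult it afterwards.
func resolve(flagPod, flagPodIDFile, flagUserNS string) (*spec, error) {
	s := &spec{Pod: flagPod}
	if s.Pod == "" && flagPodIDFile != "" {
		b, err := os.ReadFile(flagPodIDFile)
		if err != nil {
			return nil, err
		}
		s.Pod = strings.TrimSpace(string(b))
	}
	// Leave UserNS empty unless the user explicitly requested one; an
	// empty default lets the backend inherit the pod's userns rather
	// than falling back to PODMAN_USERNS.
	if flagUserNS != "" {
		s.UserNS = flagUserNS
	}
	return s, nil
}
```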
There are more issues here around --pod-id-file and cli validation that
does not consider the option as conflicting with --userns like --pod
does, but I decided to fix the bug at hand and not try to fix the
entire mess, which would most likely take days.
Fixes #22931
Signed-off-by: Paul Holzinger <pholzing@redhat.com>