They no longer work in the latest image update; it is not clear why, and
I do not have the time to debug that right now. I opened #24230 to track it.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
In Debian, EST and MST7MDT are gone by default and have moved to a special
package[1]. Instead of also installing that package in the images, let's use
different timezones in the test.
[1] 42c0008f86
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Run pasta with --trace and a log file to see if the hangs are caused by
pasta not correctly closing connections as assumed in #24219.
As the log is super verbose, we do not want to log it by default, so I added some
extra logic to make sure it is only logged when the test fails.
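A rough sketch of that "log only on failure" logic, in bats style (the
$PASTA_LOGFILE name is illustrative, not the actual variable used):

    function teardown() {
        # BATS_TEST_COMPLETED is only set to 1 when the test passed
        if [[ "$BATS_TEST_COMPLETED" != "1" ]]; then
            echo "# pasta trace log:"
            cat "$PASTA_LOGFILE" || true
        fi
        rm -f "$PASTA_LOGFILE"
        basic_teardown
    }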
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This command sequence causes SizeRootFs to change on foo:
podman tag foo newimagename
podman save ... newimagename
podman load ...
Solution: get foo completely out of the picture. Use an
airgapped image: new image, new digest, new everything.
Fixes: #23756
Signed-off-by: Ed Santiago <santiago@redhat.com>
Quadlet inserts network-online.target Wants/After dependencies to ensure pulling works.
Those systemd statements cannot be subsequently reset.
In the cases where those dependencies are not wanted, we add a new
configuration item called `DefaultDependencies=` in a new section called
[Quadlet]. This section is shared between different unit types.
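A minimal sketch of a unit using the new option (contents are illustrative,
not taken from this change set):

    [Quadlet]
    # drop the implicit network-online.target Wants/After dependencies
    DefaultDependencies=false

    [Container]
    Image=quay.io/libpod/testimage:latest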
Fixes #24193
Signed-off-by: Farya L. Maerten <me@ltow.me>
There's an important reason why the healthcheck container in the 055-rm
test uses 'sleep infinity' and not 'top'. Document it.
Also, the test itself wasn't actually working as intended. Make
it safer by confirming that the container actually enters
the "stopping" state.
Signed-off-by: Ed Santiago <santiago@redhat.com>
When we are activated by systemd, the code assumed that we had a valid
URL, which was not the case, so it failed to parse the URL and the
info call failed every time.
This fixes two problems: first, add the scheme to the systemd-activated
listener URL so it can be parsed correctly; second, simply do not
parse it as a URL at all, as all we care about in the info call is whether
it is a unix socket and whether the file path exists.
Fixes #24152
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Undoing some of my own work here from #24090 now that we have the
ExposedPorts field implemented in inspect. I considered a revert
of that patch, but it's still needed: without it we'd be
including exposed ports when --net=container is used, which is not
correct.
Basically, exposed ports for a container should always go in the
new ExposedPorts field we added. They sometimes also go in the Ports
field in NetworkSettings, but only when the container is not
net=host and not net=container. We were always including exposed
ports there, which was not correct, but it is an easy logical fix.
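An illustrative check of that rule (the exact inspect field paths here are
assumptions, not taken from this change):

    podman create --name expose-demo --expose 80/tcp --net=host \
        quay.io/libpod/testimage:latest true
    # Ports in NetworkSettings stays empty for --net=host...
    podman inspect expose-demo --format '{{json .NetworkSettings.Ports}}'
    # ...while the exposed port still shows up in the new ExposedPorts field
    podman inspect expose-demo --format json | grep -A1 ExposedPorts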
Also required is a test change to correct the expected behavior
as we were testing for incorrect behavior.
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
the kernel checks that both the uid and the gid are mapped inside the
user namespace, not only the uid:
/**
 * privileged_wrt_inode_uidgid - Do capabilities in the namespace work over the inode?
 * @ns: The user namespace in question
 * @idmap: idmap of the mount @inode was found from
 * @inode: The inode in question
 *
 * Return true if the inode uid and gid are within the namespace.
 */
bool privileged_wrt_inode_uidgid(struct user_namespace *ns,
                                 struct mnt_idmap *idmap,
                                 const struct inode *inode)
{
        return vfsuid_has_mapping(ns, i_uid_into_vfsuid(idmap, inode)) &&
               vfsgid_has_mapping(ns, i_gid_into_vfsgid(idmap, inode));
}
For this reason, improve the hasCurrentUserMapped check to verify
that the gid is also mapped, and if it is not, use an intermediate
mount for the container rootfs.
Closes: https://github.com/containers/podman/issues/24159
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Similar to github.com/containers/buildah/pull/5761, but not
security-critical, as Podman does not have an expectation that
mounts are scoped (the ability to write a --mount option is
already the ability to mount arbitrary content into the container,
so sneaking arbitrary options into the mount doesn't have
security implications). Still, it's bad practice to let users inject
anything into the mount command line, so let's not do that.
Signed-off-by: Matt Heon <mheon@redhat.com>
This commit was automatically cherry-picked
by buildah-vendor-treadmill v0.3
from the buildah vendor treadmill PR, #13808
* Fix conflict caused by Ed's local-registry PR in buildah
* Wire in the "new" --retry and --retry-delay options; these have existed
  for a while but were non-functional.
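For reference, example usage of the flags (values are illustrative):

    podman pull --retry 5 --retry-delay 10s quay.io/libpod/testimage:latest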
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Potential race between starting socat (which creates a socket
file) and processes accessing said socket. Or maybe not. I
dunno, I'm grasping at straws. This is an elusive flake.
Fixes: #23798 (I hope)
Signed-off-by: Ed Santiago <santiago@redhat.com>
Although podman has moved on from CNI, RHEL has not. Make
sure that builds on RHEL test the desired network backend(s).
Effective immediately, gating.yaml on all RHEL branches
must set CI_DESIRED_NETWORK (=cni or =netavark)
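A sketch of how a gating run can assert it got the backend it asked for
(not the actual CI code):

    export CI_DESIRED_NETWORK=netavark
    actual=$(podman info --format '{{.Host.NetworkBackend}}')
    if [[ "$actual" != "$CI_DESIRED_NETWORK" ]]; then
        echo "FATAL: network backend is $actual, expected $CI_DESIRED_NETWORK"
        exit 1
    fi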
Signed-off-by: Ed Santiago <santiago@redhat.com>
A field we missed versus Docker. Matches the format of our
existing Ports list in the NetworkConfig, but only includes
exposed ports (and maps these to struct{}, as they never go to
real ports on the host).
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
There is no reason to validate the args here. First, podman may change
the syntax, so this is just duplication that may hurt us long term. It
also added special handling of some options that just does not make sense,
e.g. removing 0.0.0.0; podman should really be the only parser here. And
more importantly, the validation prevents variables from being used.
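For example, a unit along these lines should now work, since Quadlet passes
the value through unmodified and lets systemd/podman handle it (unit contents
and the variable name are illustrative):

    [Service]
    EnvironmentFile=/etc/sysconfig/mycontainer

    [Container]
    Image=quay.io/libpod/testimage:latest
    PublishPort=${HOST_PORT}:80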
Fixes #24081
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Previously, we didn't bother including exposed ports in the
container config when creating a container with --net=host. Per
Docker this isn't really correct; host-net containers are still
considered to have exposed ports, even though that specific
container can be guaranteed to never use them.
We could just fix this for host containers, but we might as well
make it generic. This patch unconditionally adds exposed ports to
the container config - it was previously conditional on a network
namespace being configured. The behavior of `podman inspect` with
exposed ports when using `--net=container:` has also been
corrected. Previously, we used exposed ports from the container
sharing its network namespace, which was not correct. Now, we use
regular port bindings from the namespace container, but exposed
ports from our own container.
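Sketch of the corrected --net=container: behavior (commands and field names
are illustrative assumptions, not taken from this change):

    podman create --name netns-owner -p 8080:80 quay.io/libpod/testimage:latest top
    podman create --name joined --net=container:netns-owner --expose 443 \
        quay.io/libpod/testimage:latest top
    # "joined" should report netns-owner's port bindings (8080->80) but its
    # own exposed port (443/tcp) in the new ExposedPorts field
    podman inspect joined --format json | grep -A3 -E '"Ports"|ExposedPorts'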
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
Change getUnitDirs to maintain a slice in addition to the map and return the slice
Add helper functions to make the code more readable
Adjust unit tests
Restore system test
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
Yield to reality: if $XDG_RUNTIME_DIR is unset, assume a
reasonable default (rootless only). This clears up a
common failure in Fedora gating tests, and will probably
prevent future time wasters.
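In shell terms, the assumed fallback is simply the standard per-user runtime
directory:

    # rootless only: use the conventional default when the variable is unset
    XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}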
Signed-off-by: Ed Santiago <santiago@redhat.com>
Primary motivator: 'curl -v' format changes in f42
Drive-bys:
* 127.0.0.1, not localhost
* use wait_for_port, not sleep
* show curl commands and their output, to ease debugging failures
* better failure assertions
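The flavor of the change, as a sketch (wait_for_port is the existing
system-test helper; exact usage here is illustrative):

    wait_for_port 127.0.0.1 $HOST_PORT
    echo "\$ curl -sS http://127.0.0.1:$HOST_PORT/index.txt"
    curl -sS http://127.0.0.1:$HOST_PORT/index.txt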
Signed-off-by: Ed Santiago <santiago@redhat.com>
These flags can affect the output of the HealthCheck log. Currently, when a container is configured with a HealthCheck, the output from the HealthCheck command is only logged to the container status file, which is accessible via `podman inspect`.
It is also limited to the last five executions and the first 500 characters per execution.
This makes debugging past problems very difficult, since the only information available about the failure of the HealthCheck command is the generic `healthcheck service failed` record. The new flags below address this; a combined usage example follows the list.
- The `--health-log-destination` flag sets the destination of the HealthCheck log.
- `none`: (default behavior) `HealthCheckResults` are stored in overlay containers. (For example: `$runroot/healthcheck.log`)
- `directory`: creates a log file named `<container-ID>-healthcheck.log` with JSON `HealthCheckResults` in the specified directory.
- `events_logger`: The log will be written with the logging mechanism set by `events_logger`. It also saves the log to a default directory, for performance on a system with a large number of logs.
- The `--health-max-log-count` flag sets the maximum number of attempts in the HealthCheck log file.
- A value of `0` indicates an infinite number of attempts in the log file.
- The default value is `5` attempts in the log file.
- The `--health-max-log-size` flag sets the maximum length of the log stored.
- A value of `0` indicates an infinite log length.
- The default value is `500` log characters.
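Example combining the three new flags (values and image are illustrative):

    podman run -d --name hc-demo \
        --health-cmd 'ls / || exit 1' \
        --health-log-destination /var/log/container-health \
        --health-max-log-count 10 \
        --health-max-log-size 1024 \
        quay.io/libpod/testimage:latest sleep infinity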
Add --health-max-log-count flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-max-log-size flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-log-destination flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
The various pasta port forwarding tests run a socat server inside a
container, then connect to it from a socat client on the host. Currently
we have the server bind to the same specific address within the container
as we connect to on the host.
That's not quite what we want. For "tap" tests where the traffic goes over
pasta's L2 link to the container it's fine, though unnecessary. For
"loopback" tests where traffic is forwarded by pasta at the L4 socket
level, however, it's not quite right. In this case the address used is
either 127.0.0.1 or ::. That's correct and as needed for the host side
address we're connecting to. However on the container side, this only
works because of an odd and arguably undesirable behaviour of pasta: we use
the fact that we have an L4 socket within the container to make such
"spliced" L4 connections appear as if they come from loopback within the
container. A container will generally expect its loopback address to be
accessible only from within the container, and this odd behaviour may be
changed in pasta in the future.
In any case, the binding of the container side server is unnecessary, so
simply remove it.
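Purely as an illustration (not the actual test commands), the change is on
the server side inside the container; the host-side client is unchanged:

    socat TCP4-LISTEN:5000,bind=127.0.0.1,fork EXEC:cat   # old: bound to a specific address
    socat TCP4-LISTEN:5000,fork EXEC:cat                  # new: no bind, listen on any address
    socat - TCP4:127.0.0.1:5000                           # host-side client, unchanged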
Link: https://github.com/containers/podman/issues/24045
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mostly just switch to safename. Rewrite setup() to guarantee
unique service file names, atomically created.
* IMPORTANT NOTE: enabling parallelization on these tests
triggers #24010 ("fragment file" flake), but only on my
f40 laptop. I have never seen the flake in Cirrus despite
many many runs in #23275. I am submitting this for review
and merging because even though _something_ is broken,
this breakage is unlikely to affect our CI.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Any test that uses --events-backend=file cannot be run in parallel
due to #23750. This seems to be a hard block, unfixable.
For all other tests, enable ci:parallel.
Also, bring in the timing fixes from #23600. Thanks, @Honny1!
Signed-off-by: Ed Santiago <santiago@redhat.com>