The v5 API made a breaking change for podman inspect, which means that
an old client could no longer parse the result from the new 5.X server.
The other way around, a new client against an old server, already worked.
As it turned out, several users ran into this; one way to hit it is using
an old 4.X podman machine which now pulls a newer coreos with podman 5.0,
but there are also other users running into it.
In order to keep the API working, we now have a version check and return
the old v4-compatible payload so the old remote client can still work
against a newer server, thus removing any major breaking change for an
old client.
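A minimal sketch of the idea, with purely illustrative type and field names rather than Podman's real API structs: the handler checks the client's negotiated major version and serializes the older wire format for pre-5.0 clients.
```
package compat

import (
	"encoding/json"
	"io"
)

// inspectV5 and inspectV4 are illustrative stand-ins for the new and old
// container-inspect wire formats; the real structs differ.
type inspectV5 struct {
	Data string `json:"Data"`
}

type inspectV4 struct {
	LegacyData string `json:"LegacyData"`
}

// writeInspect picks the payload shape based on the client's major API version.
func writeInspect(w io.Writer, clientMajor int, data inspectV5) error {
	if clientMajor < 5 {
		// Old remote client: send the v4-compatible shape it can still parse.
		return json.NewEncoder(w).Encode(inspectV4{LegacyData: data.Data})
	}
	return json.NewEncoder(w).Encode(data)
}
```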
Fixes #22657
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fix the following issues:
- create container API handler ignores Annotations from HostConfig
- inspect container API handler does not provide Annotations as
part of HostConfig
Signed-off-by: diplane <diplane3d@gmail.com>
When inspecting a container that does not define any health check, the health field should return nil. This matches docker behavior.
Signed-off-by: Ashley Cui <acui@redhat.com>
Add --rdt-class=COS to the create and run commands to enable the
assignment of a container to a Class of Service (COS). The COS
represents a part of the cache, based on the Cache Allocation Technology
(CAT) feature that is part of Intel's Resource Director Technology
(Intel RDT) feature set. By assigning a container to a COS, all PIDs of
the container only have access to the cache space defined for this COS.
The COS has to be pre-configured via the resctrl kernel driver.
The cat_l2 and cat_l3 flags in /proc/cpuinfo indicate CAT support for
cache levels 2 and 3, respectively.
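A hedged sketch (not Podman code) of probing those cpuinfo flags:
```
package rdt

import (
	"os"
	"strings"
)

// hasCATSupport reports whether /proc/cpuinfo advertises the cat_l2/cat_l3
// CPU flags mentioned above. Helper for illustration only.
func hasCATSupport() (l2, l3 bool, err error) {
	data, err := os.ReadFile("/proc/cpuinfo")
	if err != nil {
		return false, false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasPrefix(line, "flags") {
			continue
		}
		for _, flag := range strings.Fields(line) {
			l2 = l2 || flag == "cat_l2"
			l3 = l3 || flag == "cat_l3"
		}
	}
	return l2, l3, nil
}
```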
Signed-off-by: Wolfgang Pross <wolfgang.pross@intel.com>
Also add a new `StoppedByUser` field to the container-inspect state
which can be useful during debugging and is now also used in the
regression test. Note that I moved the `false` check one test up so
that we can compare against the previous Podman version, which should
just be stuck in the `wait $ctr` command since it will continue restarting.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Being able to easily identify what lock has been allocated to a
given Libpod object is only somewhat useful for debugging lock
issues, but it's trivial to expose and I don't see any harm in
doing so.
Signed-off-by: Matt Heon <mheon@redhat.com>
Implement means for reflecting failed containers (i.e., those having
exited non-zero) to better integrate `kube play` with systemd. The
idea is to have the main PID of `kube play` exit non-zero in a
configurable way such that systemd's restart policies can kick in.
When using the default sdnotify-notify policy, the service container
acts as the main PID to further reduce the resource footprint. In that
case, before stopping the service container, Podman will look up the
exit codes of all non-infra containers. The service will then behave
according to the following three exit-code policies (see the sketch
after this list):
- `none`: exit 0 and ignore containers (default)
- `any`: exit non-zero if _any_ container did
- `all`: exit non-zero if _all_ containers did
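A minimal sketch of how the three policies map the containers' exit codes to the service exit code (illustrative helper, not the actual implementation):
```
package kube

// exitCodeFor maps the containers' exit codes to a single service exit code
// under the three propagation policies described above.
func exitCodeFor(policy string, containerExitCodes []int) int {
	failed := 0
	for _, code := range containerExitCodes {
		if code != 0 {
			failed++
		}
	}
	switch policy {
	case "any": // exit non-zero if any container did
		if failed > 0 {
			return 1
		}
	case "all": // exit non-zero only if all containers did
		if len(containerExitCodes) > 0 && failed == len(containerExitCodes) {
			return 1
		}
	}
	// "none" (default): ignore the containers' exit codes.
	return 0
}
```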
These values can be passed via a hidden `kube play
--service-exit-code-propagation` flag which can be used by tests and
later on by Quadlet.
In case Podman acts as the main PID (i.e., when at least one container
runs with an sdnotify-policy other than "ignore"), Podman will continue
to wait for the service container to exit and reflect its exit code.
Note that this commit also fixes a long-standing annoyance of the
service container exiting non-zero. The underlying issue was that the
service container had been stopped with SIGKILL instead of SIGTERM and
hence exited non-zero. Fixing that was a prerequisite for the exit-code
propagation to work but also improves the integration of `kube play`
with systemd and hence Quadlet with systemd.
Jira: issues.redhat.com/browse/RUN-1776
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
We should have done this much earlier; most of the time CNI networks
just mean networks, so I changed this and also fixed some function
names. This should make it clearer what actually refers to CNI and
what is just general network backend stuff.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Motivated to have a working `make lint` on Fedora 37 (beta).
Most changes come from the new `gofmt` standards.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Include the digest of the image in `podman container inspect`. The image
digest is a key piece of information for auditing as it defines the
identity of an image. This way, it can be determined whether a container
used an image with a given CVE etc.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
For systems that have extreme robustness requirements (edge devices,
particularly those in difficult to access environments), it is important
that applications continue running in all circumstances. When the
application fails, Podman must restart it automatically to provide this
robustness. Otherwise, these devices may require customer IT to
physically gain access to restart, which can be prohibitively difficult.
Add a new `--on-failure` flag that supports four actions:
- **none**: Take no action.
- **kill**: Kill the container.
- **restart**: Restart the container. Do not combine the `restart`
action with the `--restart` flag. When running inside of
a systemd unit, consider using the `kill` or `stop`
action instead to make use of systemd's restart policy.
- **stop**: Stop the container.
To remain backwards compatible, **none** is the default action.
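A rough sketch of the dispatch over the four actions; the callbacks are hypothetical stand-ins for Podman's real container operations:
```
package health

import "fmt"

// onFailureAction dispatches on the four supported actions.
func onFailureAction(action string, kill, restart, stop func() error) error {
	switch action {
	case "none":
		return nil
	case "kill":
		return kill()
	case "restart":
		return restart()
	case "stop":
		return stop()
	default:
		return fmt.Errorf("unsupported --on-failure action %q", action)
	}
}
```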
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The notify socket can now either be specified via an environment
variable or programmatically (in which case the env is ignored). The
notify mode and the socket are now also displayed in `container inspect`,
which comes in handy for debugging and allows for proper testing.
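A minimal sketch of the lookup order (illustrative helper, not Podman's actual code): a programmatically provided socket wins, otherwise systemd's NOTIFY_SOCKET environment variable is used.
```
package notify

import "os"

// socketPath returns the explicitly provided socket if set; otherwise it
// falls back to the NOTIFY_SOCKET environment variable.
func socketPath(explicit string) string {
	if explicit != "" {
		return explicit
	}
	return os.Getenv("NOTIFY_SOCKET")
}
```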
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* Correct spelling and typos.
* Improve language.
Co-authored-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
The nolintlint linter does not forbid the use of `//nolint`.
Instead, it allows us to enforce a common nolint style:
- force that a linter name must be specified
- do not add a space between `//` and `nolint`
- make sure nolint is only used when there is actually a problem
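An example of the enforced style, suppressing a real errcheck finding:
```
package example

import "os"

// Style enforced by nolintlint: a linter name after the colon, no space
// between "//" and "nolint", and a directive only where a real finding
// is being suppressed (here: errcheck on the ignored os.Remove error).
func cleanup(path string) {
	os.Remove(path) //nolint:errcheck // best-effort cleanup
}
```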
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add the notion of a "service container" to play kube. A service
container is started before the pods in play kube and is (reverse)
linked to them. The service container is stopped/removed *after*
all pods it is associated with are stopped/removed.
In other words, a service container tracks the entire life cycle
of a service started via `podman play kube`. This is required to
enable `play kube` in a systemd unit file.
The service container is only used when the `--service-container`
flag is set on the CLI. This flag has been marked as hidden as it
is not meant to be used outside the context of `play kube`. It is
further not supported on the remote client.
The wiring with systemd will be done in a later commit.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The linter ensures a common code style.
- use switch/case instead of else if
- use if instead of switch/case for single case statement
- add space between comment and text
- detect the use of defer with os.Exit()
- use short form var += "..." instead of var = var + "..."
- detect problems with append()
```
newSlice := append(orgSlice, val)
```
This could lead to nasty bugs because orgSlice will be changed in
place if it has enough capacity to hold the new elements. Thus
newSlice might not be a copy.
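One safe pattern (illustrative, assuming a `[]string`) is to copy before appending:
```
// Copy first so the original backing array is never mutated in place.
newSlice := append([]string(nil), orgSlice...)
newSlice = append(newSlice, val)
```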
Of course most of the changes are just cosmetic and do not cause any
logic errors but I think it is a good idea to enforce a common style.
This should help maintainability.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
It seems this breaks users of older `podman-remote` versions, hence it
looks like this patch would be a better candidate for podman `5.0`.
Problem:
* A client with `4.0` cannot interact with a server of `4.1`.
Plan this patch for podman `5.0`.
This reverts commit 0cebd158b6.
Signed-off-by: Aditya R <arajan@redhat.com>
Convert the container entrypoint from a string to an array in order to
make sure there is parity between `podman inspect` and `docker inspect`.
Signed-off-by: Aditya R <arajan@redhat.com>
Added support for a new flag, --passwd, which, when false, prohibits podman from creating entries in
/etc/passwd and /etc/group, allowing users to modify those files in the container entrypoint.
Resolves #11805
Signed-off-by: cdoern <cdoern@redhat.com>
This adds the following information to the output of 'podman inspect':
* CheckpointedAt - time the container was checkpointed
Only set if the container has been checkpointed
* RestoredAt - time the container was restored
Only set if the container has been restored
* CheckpointLog - path to the checkpoint log file (CRIU's dump.log)
Only set if the log file exists (--keep)
* RestoreLog - path to the restore log file (CRIU's restore.log)
Only set if the log file exists (--keep)
* CheckpointPath - path to the actual (CRIU) checkpoint files
Only set if the checkpoint files exist (--keep)
* Restored - set to true if the container has been restored
Only set if the container has been restored
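A sketch of the added state fields; the exact types and JSON tags in Podman may differ:
```
package define

import "time"

// inspectContainerState shows the new checkpoint/restore fields listed above.
type inspectContainerState struct {
	CheckpointedAt *time.Time `json:"CheckpointedAt,omitempty"`
	RestoredAt     *time.Time `json:"RestoredAt,omitempty"`
	CheckpointLog  string     `json:"CheckpointLog,omitempty"`
	RestoreLog     string     `json:"RestoreLog,omitempty"`
	CheckpointPath string     `json:"CheckpointPath,omitempty"`
	Restored       bool       `json:"Restored,omitempty"`
}
```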
Signed-off-by: Adrian Reber <areber@redhat.com>
There is a problem with creating and storing the exit command when the
container was created: it only contains the options the container was
created with but NOT the options the container is started with. One
example would be a CNI network config. If I start a container once, then
change the cni config dir with `--cni-config-dir` and start it a second
time, it will start successfully. However, the exit command still contains
the wrong `--cni-config-dir` because it was not updated.
To fix this we do not want to store the exit command at all. Instead we
create it every time the conmon process for the container is started.
This guarantees that the container cleanup process is started with
the correct settings.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Resolves a discrepancy between the types used in inspect for docker and podman.
This causes a panic when using the docker client against podman when the
secondary IP fields in the `NetworkSettings` inspect field are populated.
Fixes containers#12165
Signed-off-by: Federico Gimenez <fgimenez@redhat.com>
podman inspect shows the healthcheck status in `.State.Healthcheck`,
docker uses `.State.Health`. To make sure docker scripts work we
should add the `Health` key. Because we do not want to display both keys
by default we only use the new `Health` key. This is a breaking change
for podman users but matches what docker does. To provide some form of
compatibility, users can still use `--format {{.State.Healthcheck}}`. It
is just not shown by default.
Fixes #11645
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When inspecting a container, we now report whether the container
was stopped by a `podman checkpoint` operation via a new bool,
`Checkpointed`, in the State portion of the inspect output.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Support UID, GID, Mode options for mount type secrets. Also, change the
default secret permissions to 444 so all users can read the secret.
Signed-off-by: Ashley Cui <acui@redhat.com>
This option allows users to specify the maximum amount of time to run
before conmon sends the kill signal to the container.
Fixes: https://github.com/containers/podman/issues/6412
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When doing a container inspect on a container with unlimited ulimits,
the value should be -1. But because the OCI spec requires the ulimit
value to be uint64, we were displaying the inspect values as a uint64 as
well. Simple change to display as an int64.
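For illustration, the same -1 sentinel printed through both types:
```
package main

import "fmt"

func main() {
	var unlimited int64 = -1
	// Reinterpreting the -1 sentinel as uint64 is what made inspect print a
	// huge number instead of -1 for unlimited ulimits.
	fmt.Println(uint64(unlimited)) // 18446744073709551615
	fmt.Println(unlimited)         // -1
}
```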
Fixes: #9303
Signed-off-by: baude <bbaude@redhat.com>
Implement podman secret create, inspect, ls, rm
Implement podman run/create --secret
Secrets are blobs of data that are sensitive.
Currently, the only secret driver supported is filedriver, which means creating a secret stores it in base64 unencrypted in a file.
After creating a secret, a user can use the --secret flag to expose the secret inside the container at /run/secrets/[secretname]
This secret will not be committed to an image on a podman commit.
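A hedged sketch of the filedriver behavior described above; paths, names, and modes are illustrative only, not Podman's actual layout:
```
package secrets

import (
	"encoding/base64"
	"os"
	"path/filepath"
)

// storeSecret writes the secret base64-encoded (not encrypted) to a file.
func storeSecret(dir, name string, data []byte) error {
	encoded := base64.StdEncoding.EncodeToString(data)
	return os.WriteFile(filepath.Join(dir, name), []byte(encoded), 0o600)
}
```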
Signed-off-by: Ashley Cui <acui@redhat.com>
The libpod/define code should not import any large dependencies,
as it is intended to be structures and definitions only. It
included the libpod/driver package for information on the storage
driver, though, which brought in all of c/storage. Split the
driver package so that define has the struct, and thus does not
need to import Driver. And simplify the driver code while we're
at it.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Convert the existing network aliases set/remove code to network
connect and disconnect. We can no longer modify aliases for an
existing network, but we can add and remove entire networks. As
part of this, we need to add a new function to retrieve current
aliases the container is connected to (we had a table for this
as of the first aliases PR, but it was not externally exposed).
At the same time, remove all deconflicting logic for aliases.
Docker does absolutely no checks of this nature, and allows two
containers to have the same aliases, aliases that conflict with
container names, etc - it's just left to DNS to return all the
IP addresses, and presumably we round-robin from there? Most
tests for the existing code had to be removed because of this.
Convert all uses of the old container config.Networks field,
which previously included all networks in the container, to use
the new DB table. This ensures we actually get an up-to-date list
of in-use networks. Also, add network aliases to the output of
`podman inspect`.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When we create a container, we assign a cgroup parent based on
the current cgroup manager in use. This parent is only usable
with the cgroup manager the container is created with, so if the
default cgroup manager is later changed or overridden, the
container will not be able to start.
To solve this, store the cgroup manager that created the
container in container configuration, so we can guarantee a
container with a systemd cgroup parent will always be started
with systemd cgroups.
Unfortunately, this is very difficult to test in CI, due to the
fact that we hard-code cgroup manager on all invocations of
Podman in CI.
Fixes #7830
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
It allows manually tweaking the configuration for cgroup v2.
We will expose some of the options in the future as individual
options (e.g. the new memory knobs), but for now add the more generic
--cgroup-conf mechanism for maximum control over the cgroup
configuration.
OCI specs change: https://github.com/opencontainers/runtime-spec/pull/1040
Requires: https://github.com/containers/crun/pull/459
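For illustration, assuming the spec change above lands in the `Unified` map of `LinuxResources` in the runtime-spec Go bindings, a --cgroup-conf entry ends up as a raw cgroup v2 key/value pair like this:
```
package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	res := spec.LinuxResources{
		// memory.high here is just an example knob.
		Unified: map[string]string{"memory.high": "1073741824"},
	}
	fmt.Println(res.Unified["memory.high"])
}
```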
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>