Compare commits

...

327 Commits

Author SHA1 Message Date
openshift-merge-bot[bot] 0fd4e55fd5
Merge pull request #581 from inknos/5.6.0
Bump release to 5.6.0
2025-09-05 09:38:28 +00:00
Nicola Sella a3a8b7ca23
Bump release to 5.6.0
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-09-04 15:30:35 +02:00
openshift-merge-bot[bot] 85bc9560a1
Merge pull request #580 from containers/inknos-update-owners
Add Honn1 to OWNERS
2025-09-04 13:01:48 +00:00
openshift-merge-bot[bot] 77fea64100
Merge pull request #569 from inknos/update-ruff-0-12-8
Update Ruff to 0.12.8
2025-09-04 12:30:34 +00:00
Nicola Sella 24379a4cf5
Add Honn1 to OWNERS
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-09-04 14:26:27 +02:00
Nicola Sella c3d53a443e
Fix and enable a bunch of ruff checks
Fix E501: line too long
Fix E741: ambiguous var name
Fix F401: module imported but unused
Fix F541: f-string is missing placeholders
Fix F821: undefined name
Fix F841: local variable assigned but never used

Enable B: bugbear
Enable UP: pyupgrade

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-09-04 12:22:27 +02:00
Nicola Sella 3a8c5f9fb6
Lint pep-naming N80
This linting found the tearDown function missing and mistakenly named
tearUp. This commit renames the function to tearDown. We might expect
some related issues due to this fix.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-09-04 12:18:20 +02:00
Nicola Sella c97384262b
Update Ruff to 0.12.8
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-09-04 12:18:19 +02:00
openshift-merge-bot[bot] 3007fdba71
Merge pull request #579 from containers/renovate/actions-setup-python-6.x
[skip-ci] Update actions/setup-python action to v6
2025-09-04 10:07:44 +00:00
renovate[bot] d34c718e26
[skip-ci] Update actions/setup-python action to v6
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-04 09:16:18 +00:00
openshift-merge-bot[bot] 5d4ff56c0c
Merge pull request #540 from inknos/list-sparse
Implement sparse keyword for containers.list()
2025-09-04 09:15:55 +00:00
Nicola Sella 010949a925 Implement sparse keyword for containers.list()
Defaults to True for Libpod calls and False for Docker Compat calls

This ensures:
1. Docker API compatibility
2. No breaking changes with Libpod

It also provides:
1. Possibility to inspect containers on demand for list calls
2. Safer behavior if container hangs
3. Fewer expensive calls to the API by default

Note: Requests need to pass compat explicitly to reload containers. A
unit test has been added.

Fixes: https://github.com/containers/podman-py/issues/459
Fixes: https://github.com/containers/podman-py/issues/446

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-09-03 17:47:02 +02:00
openshift-merge-bot[bot] 1368c96bae
Merge pull request #575 from ricardobranco777/timezone_aware
tests: Fix deprecation warning for utcfromtimestamp()
2025-09-03 15:45:04 +00:00
Ricardo Branco f38fe91d46
tests: Fix deprecation warning for utcfromtimestamp()
Fix DeprecationWarning for datetime.datetime.utcfromtimestamp()

The warning suggests using timezone-aware objects to represent datetimes
in UTC via datetime.UTC, but datetime.timezone.utc is backward compatible.

Signed-off-by: Ricardo Branco <rbranco@suse.de>
2025-09-03 17:18:21 +02:00
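A minimal sketch of the replacement described in this commit — the deprecated `datetime.utcfromtimestamp()` swapped for the timezone-aware, backward-compatible form (the helper name `from_unix` is illustrative, not from the codebase):

```python
from datetime import datetime, timezone

def from_unix(ts: float) -> datetime:
    # datetime.utcfromtimestamp(ts) is deprecated since Python 3.12;
    # fromtimestamp(ts, tz=timezone.utc) returns an equivalent but
    # timezone-aware object, and works on older Pythons too.
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(from_unix(0).isoformat())  # 1970-01-01T00:00:00+00:00
```

The aware object compares and formats identically to the naive one, but carries an explicit UTC offset.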
openshift-merge-bot[bot] c4c86486cc
Merge pull request #566 from containers/renovate/major-github-artifact-actions
[skip-ci] Update actions/download-artifact action to v5
2025-09-03 14:52:59 +00:00
renovate[bot] f92036c156
[skip-ci] Update actions/download-artifact action to v5
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-03 14:27:30 +00:00
openshift-merge-bot[bot] ca381f217a
Merge pull request #568 from containers/renovate/actions-checkout-5.x
[skip-ci] Update actions/checkout action to v5
2025-09-03 14:27:10 +00:00
renovate[bot] f044c7e10c
[skip-ci] Update actions/checkout action to v5
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-03 14:04:44 +00:00
openshift-merge-bot[bot] 058de18c20
Merge pull request #578 from inknos/issue-571
Skip tests conditionally based on version
2025-09-03 14:04:17 +00:00
Nicola Sella 203eea8d5d
Skip tests conditionally based on version
Tuple-based sorting does not account for versions like x.y.z-dev or
x.y.z-rc1, which are treated as plain x.y.z.

Fixes: https://github.com/containers/podman-py/issues/571

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-09-02 17:19:45 +02:00
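A sketch of the tuple-based comparison this commit describes, where pre-release suffixes such as `-dev` or `-rc1` are simply dropped (the helper `version_tuple` is hypothetical, mirroring the behavior stated in the message, not the project's actual code):

```python
def version_tuple(version: str) -> tuple[int, ...]:
    # Drop any pre-release suffix ("-dev", "-rc1", ...) before
    # splitting into integer components, so "5.6.0-dev" compares
    # exactly like "5.6.0".
    core = version.split("-", 1)[0]
    return tuple(int(part) for part in core.split("."))

assert version_tuple("5.6.0-rc1") == version_tuple("5.6.0")
```

This is why a test gated on `>= 5.6.0` also runs against a `5.6.0-dev` podman build.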
openshift-merge-bot[bot] 9b99805024
Merge pull request #567 from inknos/test-os-release
Run tests conditionally based on os_release
2025-08-20 13:25:14 +00:00
Nicola Sella 365534ebfe
Run tests conditionally based on OS_RELEASE
Add a fallback freedesktop_os_release() function for systems running
Python versions where the `platform` module does not implement it
(Python < 3.10)

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-08-13 16:27:17 +02:00
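A sketch of such a fallback, under the assumption that it parses the standard `/etc/os-release` KEY=value format when `platform.freedesktop_os_release()` (added in Python 3.10) is unavailable; the function names here are illustrative:

```python
import platform

def parse_os_release(text: str) -> dict:
    # Parse os-release KEY=value lines, skipping comments and
    # stripping surrounding double quotes from values.
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info

def os_release() -> dict:
    # Prefer the stdlib helper; fall back to parsing the file
    # directly on Python < 3.10.
    try:
        return platform.freedesktop_os_release()
    except (AttributeError, OSError):
        with open("/etc/os-release", encoding="utf-8") as fh:
            return parse_os_release(fh.read())
```

With the parsed dict, tests can be skipped conditionally on fields like `ID` or `VERSION_ID`.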
Daniel J Walsh 3c624222bc
Merge pull request #565 from inknos/add-pull-policy
Implement policy option for image.pull
2025-08-11 16:24:45 -04:00
Nicola Sella 2f4b14f8ee
Implement policy option for images.pull
Also, pass options from containers.run to images.pull:
    - policy is passed with a default of "missing" to mirror the
      podman-run --pull-policy behavior
    - auth_config is passed to images.pull

Fixes: https://github.com/containers/podman-py/issues/564

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-08-07 16:03:37 +02:00
openshift-merge-bot[bot] 7e834d3cbe
Merge pull request #563 from Luap99/cirrus-rm
remove cirrus files
2025-07-01 08:07:34 +00:00
Paul Holzinger 56ebf6c1ea
remove cirrus files
Cirrus was already disabled and is no longer in use; however, the
Cirrus-specific scripts were left around, so remove them as well.

Fixes: cec8a83ecb ("Remove Cirrus testing")

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-27 17:42:59 +02:00
openshift-merge-bot[bot] 7e88fed72c
Merge pull request #562 from dklimpel/patch-1
fix: broken configuration for readthedocs
2025-06-26 16:02:28 +00:00
openshift-merge-bot[bot] 371ecb8ae6
Merge pull request #513 from Mr-Sunglasses/fix/#489
Fix/#489
2025-06-24 10:06:19 +00:00
Kanishk Pachauri fd8bfcdadd
Merge branch 'main' into fix/#489 2025-06-24 01:48:53 +05:30
Kanishk Pachauri e472ae020d
fix: remove unused variable
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-06-24 01:31:36 +05:30
Kanishk Pachauri a6ca81cec2
fix: failing test due to type error
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-06-24 01:27:17 +05:30
Kanishk Pachauri fac45dd5ba
chore: fix formatting errors
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-06-24 01:11:59 +05:30
Kanishk Pachauri c150b07f29
feat: Add exception for edge cases and add unit tests
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-06-24 01:10:50 +05:30
openshift-merge-bot[bot] bbbe75813f
Merge pull request #561 from containers/renovate/sigstore-gh-action-sigstore-python-3.x
[skip-ci] Update sigstore/gh-action-sigstore-python action to v3.0.1
2025-06-23 10:11:26 +00:00
Dirk Klimpel 61f7725152
fix: broken configuration for readthedocs
Signed-off-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
2025-06-22 22:06:04 +02:00
renovate[bot] f917e0a36c
[skip-ci] Update sigstore/gh-action-sigstore-python action to v3.0.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-06-20 19:55:56 +00:00
Kanishk Pachauri 527971f55e
Merge branch 'main' into fix/#489 2025-06-21 00:21:14 +05:30
Nicola Sella bfc70e666f
Bump release to 5.5.0 (pyproject.toml fix)
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-06-18 12:49:23 +02:00
openshift-merge-bot[bot] 57a91849d8
Merge pull request #557 from inknos/coverage-pyproject
pyproject: exclude unittest.main() from coverage
2025-06-17 16:00:43 +00:00
openshift-merge-bot[bot] 7235308e04
Merge pull request #546 from inknos/fix-readthedocs-requirements
Use extra requirements to build readthedocs
2025-06-17 15:57:56 +00:00
openshift-merge-bot[bot] daa48374b1
Merge pull request #549 from inknos/5.5.0
Bump release to 5.5.0 and enable updates-testing repos on Fedora
2025-06-17 15:55:13 +00:00
Nicola Sella e8849831d2
pyproject: exclude unittest.main() from coverage
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-06-17 14:56:41 +02:00
openshift-merge-bot[bot] 7054b46daf
Merge pull request #555 from Alex-Izquierdo/fix-digest
fix: move digest label to the repository instead of tag
2025-06-16 15:49:44 +00:00
Alex 34a7f0f385 fix: move digest label to the repository instead of tag
Signed-off-by: Alex <aizquier@redhat.com>
2025-06-13 15:30:22 +02:00
openshift-merge-bot[bot] 108d9f3ad3
Merge pull request #552 from aseering/main
Add support for custom contexts
2025-05-21 09:56:30 +00:00
Adam Seering b536f24818 Add support for custom contexts
Fixes: https://github.com/containers/podman-py/issues/551
Signed-off-by: Adam Seering <aseering@gmail.com>
2025-05-20 21:20:18 -04:00
Nicola Sella ca08bb1e74
Enable Updates Testing repositories
While we try to avoid upstream changes, it's better to fetch the latest
podman version asap, so we want to enable Fedora updates-testing
repositories.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-05-14 18:06:37 +02:00
Nicola Sella 6ae1b9d55a
Bump release to 5.5.0
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-05-14 17:54:42 +02:00
Nicola Sella e75a2d3a54
Use extra requirements to build readthedocs
Fixes: https://github.com/containers/podman-py/issues/532

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-05-06 14:48:38 +02:00
openshift-merge-bot[bot] 9f56d1c8ae
Merge pull request #548 from inknos/testing-podman-pnext
Improve testing against distro and podman-next
2025-05-06 11:07:03 +00:00
Nicola Sella cec8a83ecb
Remove Cirrus testing
Cirrus is now redundant with tmt. With the recent change, Cirrus would
also need to be updated and reworked to work both with and without
podman-next.

Also, replace badge in the README

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-05-06 10:49:43 +02:00
Nicola Sella f3b8f1d982
Improve testing against distro and podman-next
1. Split upstream into distro/pnext test runs

A new CI run targets tests that are exclusive to podman-next.
/upstream plans are now split into /distro and /pnext to be explicit
about which podman version is used in the tests. The Python versions
used are now called base_python and all_python, to be more
self-explanatory.

2. Add pytest marker `pnext` to filter tests

Tests that should run against podman-next are now skipped by default in
tox runs. They can be enabled with --pnext and filtered in using
`-m pnext`. These two options should be used together, since `pnext` and
`non-pnext` tests are considered incompatible.

3. Split `test_container_mounts` to test a breaking change and update
   docs

The scenario `test_container_mounts` was split to demonstrate the usage
of the marker against an upstream breaking change. CONTRIBUTING.md now
reflects the new feature and explains how to use tox locally to
leverage this feature.

4. Add manual trigger on packit

Since the tests cover a corner case and serve the purpose of testing
unreleased features of libpod (which is slightly out of the scope of
podman-py) the testing jobs are optional and should run for certain PRs
only. The command to run the pnext tests is the following.

/packit test --labels {pnext,podman_next}

Fixes: https://github.com/containers/podman-py/issues/547

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-05-05 15:48:40 +02:00
openshift-merge-bot[bot] c4aad1b75e
Merge pull request #520 from inknos/debug-twine-push-to-test-pypi
Set verbose: true to debug twine uploads to PyPI
2025-04-28 22:25:09 +00:00
openshift-merge-bot[bot] 9d48125c8e
Merge pull request #536 from dklimpel/fix_push_manifest
Set X-Registry-Auth header on manifest push and bump to new API
2025-04-28 22:16:40 +00:00
openshift-merge-bot[bot] e46c204450
Merge pull request #543 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20250422
2025-04-23 13:13:42 +00:00
renovate[bot] 356fd1fffa
chore(deps): update dependency containers/automation_images to v20250422
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-23 12:33:31 +00:00
openshift-merge-bot[bot] c59c8dd581
Merge pull request #541 from dklimpel/use_mypy
misc: add mypy for type checking
2025-04-20 16:19:54 +00:00
dklimpel adb33a4306 fix pre commit hook
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-15 10:44:52 +02:00
dklimpel b4abd3ebfc misc: add mypy for type checking
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-15 10:18:53 +02:00
openshift-merge-bot[bot] f8c799c213
Merge pull request #538 from dklimpel/fix_push_stream
fix: push image does not stream
2025-04-14 11:37:18 +00:00
openshift-merge-bot[bot] 97f1f0ab32
Merge pull request #535 from rkozlo/main
Add inspect volume
2025-04-14 10:06:10 +00:00
dklimpel d08229681f move get vars up
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-12 14:24:55 +02:00
dklimpel e0b5208767 fix: push image does not stream
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-12 13:21:44 +02:00
dklimpel d02e7e5ff5 revert wrong change
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-12 12:25:21 +02:00
dklimpel 16257d564e add test for push manifest
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-12 12:21:57 +02:00
dklimpel 44abffd4fe add test for encode_auth_header
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-12 12:21:57 +02:00
dklimpel c3faa3e042 Set X-Registry-Auth header on manifest push and bump to new API
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-04-12 12:21:57 +02:00
rkozlo 8209b3e0c1 Add inspect volume
Fixes: #533

Signed-off-by: rkozlo <rafalkozlowski07@gmail.com>
2025-04-12 09:58:13 +02:00
openshift-merge-bot[bot] 02d20ceadc
Merge pull request #531 from inknos/unittest-upstream
Enable Unit test coverage upstream
2025-04-10 18:30:09 +00:00
openshift-merge-bot[bot] d99aca43af
Merge pull request #537 from inknos/stop-float-int
Fix: fix timeout type as int
2025-04-10 16:56:10 +00:00
Nicola Sella e95f7ed7e2
Enable Unit test coverage upstream
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-04-09 18:03:06 +02:00
Nicola Sella 137a409756
Fix: fix timeout type as int
Documentation and Annotations require timeout to be a float, but the API
requires an integer.

Fixes: https://github.com/containers/podman-py/issues/534

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-04-09 17:10:38 +02:00
openshift-merge-bot[bot] a75e8c2b84
Merge pull request #525 from josegomezr/main
feat: expose `*ns` keys to consumers
2025-03-27 16:40:58 +00:00
openshift-merge-bot[bot] db1d6ed410
Merge pull request #529 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20250324
2025-03-27 16:38:12 +00:00
renovate[bot] 8ec81ffede
chore(deps): update dependency containers/automation_images to v20250324
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-03-24 15:51:29 +00:00
Jose D. Gomez R 1510ab7921
feat: expose `*ns` keys to consumers
`cgroupns`, `ipcns`, `pidns`, `userns`, `utsns` now support setting the
`value` attribute.

This makes it possible to mimic the behavior of the `--userns`[0] CLI arg via this API.

[0]: https://docs.podman.io/en/stable/markdown/podman-create.1.html#userns-mode

Signed-off-by: Jose D. Gomez R <1josegomezr@gmail.com>
2025-03-23 10:12:00 +01:00
openshift-merge-bot[bot] cbd660df67
Merge pull request #524 from eighthave/skip-tests-when-on-32bit
skip tests when the test image is not available (e.g. 32-bit)
2025-03-12 10:59:41 +00:00
Hans-Christoph Steiner e08da695c5
skip tests when the test image is not available (e.g. 32-bit)
Signed-off-by: Hans-Christoph Steiner <hans@eds.org>
2025-03-12 11:19:08 +01:00
openshift-merge-bot[bot] 5dacf8b1c9
Merge pull request #521 from inknos/implement-update-healthcheck
Implement container.update()
2025-03-10 01:51:24 +00:00
openshift-merge-bot[bot] 7e307b2a2d
Merge pull request #522 from inknos/tmt-enablement
Enable tmt downstream
2025-03-10 01:48:37 +00:00
Nicola Sella 5cb5f3796c
Enable tmt downstream
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-03-05 17:07:42 +01:00
Nicola Sella 8db8d12e9c
Use podman from podman-next
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-28 13:44:38 +01:00
Nicola Sella 3556188cd3
Increase time for testing against all-pythons
Tests for all Python versions appear to require more time to run. This
can be improved in the future by splitting each Python run into a
separate test, so they can run in parallel

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-27 14:09:31 +01:00
Nicola Sella 02352b0772
Implement container.update()
Introduce the container.update() API implementation. The supported
options are a direct port of Go's structures; therefore, some of the
options, like blkio, need to follow a specific syntax and use
dictionaries with Go case style.

See for example:

    blockio = {
        "leafWeight": 0,
        "throttleReadBpsDevice": [{
            "major": 0,
            "minor": 0,
            "rate": 0
        }],
        ...
    }

Other common options, like health_cmd, health_interval, ... follow
pythonic implementations and Python-like type calls.

Fixes: https://issues.redhat.com/browse/RUN-2427

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-27 14:04:49 +01:00
Nicola Sella 913eaa1189
Set verbose: true to debug twine uploads to PyPI
Sometimes the push to Test PyPI fails with a 404. Verbose output is
needed to see what is happening.

https://github.com/containers/podman-py/actions/runs/13407882925/job/37451651001

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-24 10:06:05 +01:00
openshift-merge-bot[bot] 12ef0a82f0
Merge pull request #519 from dklimpel/fix_type_load
fix: set return type for `ImagesManager.load()` to `Image`
2025-02-24 09:01:15 +00:00
openshift-merge-bot[bot] d406f55264
Merge pull request #518 from dklimpel/fix_docs_login
fix(doc): correction of the example for the registry at the login
2025-02-24 08:58:30 +00:00
dklimpel 672397c2d9 fix: set return type for `ImagesManager.load()` to `Image`
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-02-24 08:13:23 +01:00
dklimpel 091f6f9fc4 fix(doc): correction of the example for the registry at the login
Signed-off-by: dklimpel <5740567+dklimpel@users.noreply.github.com>
2025-02-24 07:57:16 +01:00
openshift-merge-bot[bot] 2a29132efa
Merge pull request #517 from inknos/5.4.0.1
Bump release to 5.4.0.1
2025-02-19 16:47:06 +00:00
Nicola Sella 623a539c7c
Bump release to 5.4.0.1
The podman pyproject.toml was updated to fix a packaging issue that
prevented `import podman.api` from working.

This caused an issue in the package versioning. Python-podman cannot be
bumped to version 5.4.1, since it would try to use the libpod/v5.4.1/
endpoint, which is not released yet. Python-podman uses the same
versioning scheme, so the package needs to add a fourth digit to the
version scheme to be released on PyPI. This can be avoided in
distribution packages with the option of a new release or a patch
downstream, but on GitHub and PyPI this is the workaround.

Source for packaging Python software
https://packaging.python.org/en/latest/specifications/version-specifiers/#version-specifiers

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-19 14:48:51 +01:00
openshift-merge-bot[bot] 49d8827a42
Merge pull request #515 from jyejare/podman_toml_miss
pyproject toml fixed for podman submodules invisibility
2025-02-19 12:36:11 +00:00
jyejare 0b9dcf3fbf pyproject toml fixed for podman submodules invisibility
Signed-off-by: jyejare <jyejare@redhat.com>
2025-02-19 17:42:25 +05:30
openshift-merge-bot[bot] 371bec7dc3
Merge pull request #510 from inknos/5.4.0
Bump release to 5.4.0
2025-02-18 18:42:41 +00:00
Nicola Sella 315ad3cafe
Bump release to 5.4.0
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-18 16:52:43 +01:00
Kanishk Pachauri ee13b44943
chore: remove unhelpful comments
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-02-18 00:19:10 +05:30
Kanishk Pachauri 068e23330f
fix: broken tests
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-02-18 00:19:10 +05:30
Kanishk Pachauri 23a0845b5e
test: Add tests for the environment variables
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-02-18 00:19:10 +05:30
Kanishk Pachauri 4f843ad11c
fix: Correctly handle environment variables in container creation
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2025-02-18 00:19:10 +05:30
openshift-merge-bot[bot] 3ec3122449
Merge pull request #509 from inknos/add-upstream-ci-to-tmt
Add upstream tests to tmt
2025-02-10 09:11:29 +00:00
Nicola Sella 8b1d2f4a87
Add upstream tests to tmt
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-07 17:18:33 +01:00
openshift-merge-bot[bot] aba801a328
Merge pull request #480 from inknos/use-pyproject-toml
Use pyproject toml and enable workflow for publishing on PyPI
2025-02-07 14:28:49 +00:00
Nicola Sella 945693b84f
Add PyPI GH workflow
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-05 15:05:21 +01:00
Nicola Sella bb4167bdb4
Migrate to pyproject.toml
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-05 15:05:12 +01:00
openshift-merge-bot[bot] 74e94abb5b
Merge pull request #507 from inknos/RHEL-70162
Add **kwargs to Network.connect call
2025-02-04 18:42:53 +00:00
openshift-merge-bot[bot] 98f098d858
Merge pull request #508 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20250131
2025-02-04 14:43:22 +00:00
renovate[bot] 6ac04d1f39
Update dependency containers/automation_images to v20250131
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-03 18:18:30 +00:00
Nicola Sella 14d40f0e15
Add **kwargs to Network.connect call
This and other methods don't pass **kwargs to requests, and therefore
it is not possible to enable the compatible endpoint.

Fixes: https://issues.redhat.com/browse/RHEL-70162

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-02-03 15:08:32 +01:00
Nicola Sella 8e448f7fdf
Update ruff with safe checks and checks to enable
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-30 16:26:39 +01:00
Nicola Sella 61bcd5490a
Fix Code based on ruff 0.3->0.8.1
Fix errors in type annotations for list, dict, type and tuple

Example:
UP006: Use `list` instead of `List` for type annotation

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-30 16:19:08 +01:00
openshift-merge-bot[bot] 1d37d84c39
Merge pull request #503 from inknos/drop-cirrus-in-favor-of-tmt
Onboard TMT
2025-01-30 15:17:51 +00:00
openshift-merge-bot[bot] 84e3ce539d
Merge pull request #476 from inknos/enable-many-ruff-checks
Enable many ruff checks
2025-01-29 19:45:17 +00:00
Nicola Sella 91d96aa080
Onboard TMT
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-29 16:10:30 +01:00
Nicola Sella 4eab05d084
Remove api.Literal
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:59 +01:00
Nicola Sella 67c5392b1d
Disambiguate shadowed builtins without API break
This suppresses A003 for the `list` builtin.

`typing.List`, `typing.Dict`, `typing.Tuple` and `typing.Type` are
deprecated. Removing these annotations breaks the references to `list`
when they are made within the same class scope, which makes them
ambiguous.

Typed returns of `list` resolve to the function `list` defined in the
class, shadowing the builtin. This change is not great, but a proper
fix would require renaming the class function `list` and breaking the
API.

Example of where it breaks:

podman/domains/images_manager.py

class ImagesManager(...):

    def list(...):
        ...

    def pull(
        self,
        ...
        ) -> Image | list[Image]:
        ...

Here, the type annotation of `pull` would resolve `list` to the `list`
method, rather than the builtin.

For the sake of readability, all builtin `list` references in the class
are replaced with `builtins.list`.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:16 +01:00
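A minimal, self-contained reproduction of the shadowing described above (the class and method bodies are illustrative, not the project's real code):

```python
import builtins

class ImagesManager:
    def list(self) -> builtins.list[str]:
        return ["fedora", "alpine"]

    # Inside the class body, a bare `list` now resolves to the method
    # defined above, not the builtin -- which is why annotations in
    # this scope spell out builtins.list explicitly.
    shadowed = list is not builtins.list

    def pull(self, name: str) -> builtins.list[str]:
        return [name]

assert ImagesManager.shadowed is True
```

Renaming the `list` method would avoid the prefix entirely, but, as the commit notes, that would break the public API.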
Nicola Sella 29d122c1f9
Remove pylint bare-except comment
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:16 +01:00
Nicola Sella b5718c7545
Remove code that is never executed
The minimum supported Python version already excludes the versions this
code guards for, so it will never run

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella feaf5fc8bf
Fix Code based on ruff 0.3->0.8.1
Fix errors in type annotations for list, dict, type and tuple

Example:
UP006: Use `list` instead of `List` for type annotation

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella eff688ec3a
Update Ruff version
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella 83e3abdf83
Enable Flake Bandit checks
Bandit provides security checks and good practices suggestions for the
codebase.

https://pypi.org/project/flake8-bandit/

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella 031a277974
Suppress Bandit S603
S603: subprocess-without-shell-equals-true

This could be an exception or a false positive, and since it is used in
a single piece of code it is OK to ignore for now.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella 7e20dc7cc5
Suppress Bandit S108: hardcoded-temp-file
This could be an exception and should be checked in the future, but it
is suppressed for the moment.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella adf01fe148
Enable Bugbear Checks
Bugbear checks usually check for design problems within the code.

https://pypi.org/project/flake8-bugbear/

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella 11944f9af4
Silence Bugbear B024
B024: abstract-base-class-without-abstract-method
Usually, abstract classes with no abstract methods are flagged
as incorrectly implemented because @abstractmethod might have been
forgotten.

PodmanResource should be kept an abstract class to prevent it from
being instantiated directly
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella 896fc53c97
Fix Bugbear B028: no-explicit-stacklevel
Using a stacklevel of 2 or more is recommended to give the caller more
context about the warning

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
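A sketch of the B028 fix (the function name `old_api` is illustrative): with `stacklevel=2`, the warning is attributed to the caller's line rather than the `warnings.warn()` call itself.

```python
import warnings

def old_api():
    # stacklevel=2 points the reported location at old_api()'s caller,
    # so users see which line of *their* code triggered the warning.
    warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_api()

assert caught[0].category is DeprecationWarning
```

Without an explicit stacklevel the default is 1, which reports the line inside the library and gives the caller no useful context.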
Nicola Sella 4847d28d7c
Fix Pyupgrades
Fix sys.version_info comparisons and drop code for unsupported Python
versions.

This also addresses UP008: super-call-with-parameters.

super() is no longer called with parameters when the first argument is
__class__ and the second argument is equivalent to the first argument
of the enclosing method

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
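The UP008 rewrite in a nutshell (toy classes for illustration): in Python 3, the two-argument `super(Child, self)` inside a method is redundant and can be replaced by the zero-argument form.

```python
class Base:
    def greet(self) -> str:
        return "base"

class Child(Base):
    def greet(self) -> str:
        # UP008: super(Child, self).greet() would behave identically;
        # the zero-argument form is the idiomatic Python 3 spelling.
        return super().greet() + "+child"

assert Child().greet() == "base+child"
```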
Nicola Sella baf076f3ad
Fix Pycodestyle E501: line-too-long
This is a quality of life improvement and it should be backward
compatible with our previous set line-length

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella eac30c657e
Add comments in ruff.toml
Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella 961d5e0254
Fix Pylint PLW2901: redefined-loop-name
More on why it is bad here:
https://docs.astral.sh/ruff/rules/redefined-loop-name/

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella ca9fdd6d1d
Fix Pyflakes F841: unused-variable
Check for unused variables. Unused variables should be prefixed with '_'

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella e0524b1bee
Fix Pycodestyle E402, Pyflakes F401 and Bandit S101
F401: unused-import

E402: module-import-not-at-top-of-file

S101: assert

Assertions are removed when Python is run with optimization requested
(i.e., when the -O flag is present), which is a common practice in
production environments. As such, assertions should not be used for
runtime validation of user input or to enforce interface constraints.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
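The S101 rationale can be demonstrated directly: under `-O`, assert statements are compiled out, so an assert-based runtime check silently disappears.

```python
import subprocess
import sys

# The same failing assertion: stripped under -O (exit 0), fatal without it.
optimized = subprocess.run(
    [sys.executable, "-O", "-c", "assert False, 'stripped under -O'"]
)
normal = subprocess.run(
    [sys.executable, "-c", "assert False"], stderr=subprocess.DEVNULL
)
assert optimized.returncode == 0  # assert compiled away
assert normal.returncode != 0     # AssertionError raised
```

This is why input validation should raise explicit exceptions rather than rely on `assert`.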
Nicola Sella 2e15b45649
Fix Pycodestyle E741: ambiguous-variable-name
Checks for the use of the characters 'l', 'O', or 'I' as variable names.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:15 +01:00
Nicola Sella 06b5dd9d33
Fix Pycodestyle E722: bare-except
Catching BaseException can make it hard to interrupt the program
(e.g., with Ctrl-C) and can disguise other problems.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-28 15:42:14 +01:00
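The shape of the E722 fix, with a toy function for illustration: catch the specific exception rather than a bare `except:`, which would also swallow KeyboardInterrupt and SystemExit.

```python
def parse_port(value):
    # E722 fix: catching ValueError specifically leaves
    # KeyboardInterrupt and SystemExit free to propagate.
    try:
        return int(value)
    except ValueError:
        return None

assert parse_port("8080") == 8080
assert parse_port("http") is None
```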
openshift-merge-bot[bot] bd69280d1d
Merge pull request #504 from inknos/RUN-2313
Honor port numbers in urls for image.pull
2025-01-23 19:20:46 +00:00
Nicola Sella cb5a3f6170
Honor port numbers in urls for image.pull
Fixes: https://issues.redhat.com/browse/RUN-2313

Signed-off-by: Nicola Sella <nsella@redhat.com>
2025-01-22 18:00:33 +01:00
openshift-merge-bot[bot] b20360f9bb
Merge pull request #500 from D3vil0p3r/patch-1
Add compatMode raw JSON output and fix tls_verify init on pull()
2025-01-16 15:04:02 +00:00
openshift-merge-bot[bot] c6db9c2b12
Merge pull request #498 from vmsh0/container-removal-flags
Clarify documentation of container removal flags
2025-01-16 14:55:59 +00:00
Antonio 4bd2d33e10 Add compatMode raw JSON output and fix tls_verify init on pull()
Signed-off-by: Antonio <vozaanthony@gmail.com>
2025-01-15 23:28:51 +01:00
openshift-merge-bot[bot] 8cab9d83bd
Merge pull request #497 from vmsh0/container-start-command
fix: accept a string for the `command` argument of Container.start
2025-01-15 14:54:40 +00:00
openshift-merge-bot[bot] c792857880
Merge pull request #491 from vmsh0/container-init
Add support for container initialization
2025-01-15 14:46:27 +00:00
openshift-merge-bot[bot] c33cc29732
Merge pull request #484 from D3vil0p3r/patch-1
Implement "decode" parameter in pull()
2025-01-15 14:32:42 +00:00
Riccardo Paolo Bestetti 7eaad537bc
Update documentation for the `remove` flag in Container.run()
This commit specifies in more depth the semantics of the `remove` flag
of the run() operation:
- it describes its interaction with detach=True
- it clarifies that it is a client-initiated operation
- it notes that a similar daemon-side flag also exists under the name
  `auto_remove`

Signed-off-by: Riccardo Paolo Bestetti <pbl@bestov.io>
2025-01-12 13:43:15 +01:00
Riccardo Paolo Bestetti c79b7a9d47
Remove the `remove` flag from docstring of Container.create()
This commit removes the `remove` flag from the docstring entirely, as
the create() operation doesn't support it. It is tolerated as an input
because run() supports it and internally calls the start() operation,
relaying its own kwargs.

Signed-off-by: Riccardo Paolo Bestetti <pbl@bestov.io>
2025-01-12 13:43:15 +01:00
Riccardo Paolo Bestetti 10d9c0a2e8
fix: accept a string for the `command` argument of Container.start
This makes its interface consistent with its own documentation, and with
Container.run.

Signed-off-by: Riccardo Paolo Bestetti <pbl@bestov.io>
2025-01-12 13:42:58 +01:00
Riccardo Paolo Bestetti 99a7296f08
Add support for container initialization
This commit contributes support for container initialization (i.e., the
operation performed by `podman container init`).

Alongside that, it introduces:
- unit test ContainersTestCase::test_init
- integration subtest `Create-Init-Start Container` in
  ContainersIntegrationTest::test_container_crud

A small fix to the docstring of Container.status has also been
contributed to reflect the existence of the `created` and `initialized`
states.

Signed-off-by: Riccardo Paolo Bestetti <pbl@bestov.io>
2025-01-12 13:41:56 +01:00
Antonio 9a0fb3b31f Implement "decode" parameter in pull()
Implement `decode (bool)` parameter in `pull()`. Decode the JSON data from the server into dicts. Only applies with `stream=True`.

Signed-off-by: Antonio <vozaanthony@gmail.com>
2025-01-08 14:02:12 +01:00
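The `decode=True` behaviour described in the commit above — turning the server's streamed JSON lines into dicts — can be sketched in plain Python (an illustrative sketch, not the library's actual implementation):

```python
import json

def decode_stream(lines):
    """Decode streamed JSON progress lines into dicts, as pull(decode=True)
    is described to do when stream=True (illustrative sketch only)."""
    for raw in lines:
        if raw:  # skip keep-alive blank lines
            yield json.loads(raw)

# Simulated server stream for an image pull
stream = [b'{"status": "Pulling fs layer"}', b'{"status": "Download complete"}']
events = list(decode_stream(stream))
print(events[0]["status"])  # Pulling fs layer
```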
openshift-merge-bot[bot] 79eb8c4281
Merge pull request #494 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20250107
2025-01-08 10:36:13 +00:00
renovate[bot] 11101c2cbc
chore(deps): update dependency containers/automation_images to v20250107
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-01-07 20:29:20 +00:00
openshift-merge-bot[bot] e79f4af138
Merge pull request #487 from D3vil0p3r/patch-2
Support uppercase mount attributes
2025-01-07 16:44:25 +00:00
Antonio 27b4be6200 Support uppercase mount attributes
Signed-off-by: Antonio <vozaanthony@gmail.com>
2025-01-07 15:54:06 +01:00
Jhon Honce 77d66f8295
Merge pull request #481 from jwhonce/wip/reviews
Add edward5hen as reviewer
2025-01-06 20:17:52 -07:00
openshift-merge-bot[bot] d3dd154359
Merge pull request #482 from Luap99/new-images
New CI Images
2024-12-13 16:04:26 +00:00
Paul Holzinger f8324c2e0a
cirrus: replace dnf command
The new images updated to f41 and thus contain dnf5, dnf erase is no
longer valid so just call dnf remove to remove the package.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-13 16:43:32 +01:00
Paul Holzinger e876b07a40
update CI images
build from https://github.com/containers/automation_images/pull/396

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-13 16:40:58 +01:00
Jhon Honce d2cdfc7016 Add edward5hen as reviewer
Signed-off-by: Jhon Honce <jhonce@redhat.com>
2024-12-12 10:47:16 -07:00
openshift-merge-bot[bot] 442c7a5be1
Merge pull request #479 from containers/renovate/pre-commit-action-3.x
[skip-ci] Update pre-commit/action action to v3.0.1
2024-12-10 15:34:09 +00:00
renovate[bot] 11a606967e
[skip-ci] Update pre-commit/action action to v3.0.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-12-09 23:31:03 +00:00
openshift-merge-bot[bot] 16c0107e47
Merge pull request #473 from inknos/pre-commit-ruff
Add pre-commit workflow
2024-12-09 23:30:41 +00:00
openshift-merge-bot[bot] f8684b7622
Merge pull request #413 from inknos/add-all-external-prune
Add all, external, and label to Image.prune()
2024-12-09 23:27:56 +00:00
Nicola Sella ebf9ce6b9d Update docs
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-12-09 16:26:20 +01:00
Nicola Sella 59985eaf97 Fix files to comply with pre-commit
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-12-03 16:13:54 +01:00
Nicola Sella 3a29d248ee Update lint, format checks in tox and cirrus files
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-12-03 16:10:12 +01:00
Nicola Sella 74e449fa4c Add pre-commit-workflow
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-12-02 15:02:43 +01:00
openshift-merge-bot[bot] 2fbd32fe61
Merge pull request #478 from eighthave/fix-run-user
fix: /run/user/ is based on UID not username
2024-12-02 10:40:35 +00:00
Hans-Christoph Steiner 61cb204f92
fix: /run/user/ is based on UID not username
Signed-off-by: Hans-Christoph Steiner <hans@eds.org>
2024-12-02 09:28:34 +01:00
openshift-merge-bot[bot] 367bce6401
Merge pull request #475 from eighthave/patch-1
/run/user/$UID as fallback if XDG_RUNTIME_DIR is not set
2024-11-29 19:19:14 +00:00
Hans-Christoph Steiner 3968fc201b
/run/user/$UID as fallback if XDG_RUNTIME_DIR is not set
XDG mentions `/run/user/$UID` as the value for `XDG_RUNTIME_DIR`:
https://www.freedesktop.org/software/systemd/man/latest/pam_systemd.html
https://serverfault.com/questions/388840/good-default-for-xdg-runtime-dir/727994#727994

Archlinux, Debian, RedHat, Ubuntu, etc all use `/run/user/$UID`
because they follow XDG:
https://wiki.archlinux.org/title/XDG_Base_Directory

Signed-off-by: Hans-Christoph Steiner <hans@eds.org>
2024-11-29 18:31:39 +01:00
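The fallback described above amounts to one lookup; a minimal sketch (the library's actual resolution logic may differ):

```python
import os

def runtime_dir():
    """Return XDG_RUNTIME_DIR, falling back to /run/user/$UID per the XDG
    convention the commit cites (sketch of the fallback, not library code)."""
    return os.environ.get("XDG_RUNTIME_DIR") or f"/run/user/{os.getuid()}"

os.environ["XDG_RUNTIME_DIR"] = "/run/user/1000"
print(runtime_dir())  # /run/user/1000

del os.environ["XDG_RUNTIME_DIR"]
print(runtime_dir())  # falls back to /run/user/<your uid>
```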
openshift-merge-bot[bot] d4668d51ec
Merge pull request #469 from inknos/drop-python-38
Bump release to 5.3.0 and drop python<3.8
2024-11-25 15:27:04 +00:00
openshift-merge-bot[bot] 4c1490f4b8
Merge pull request #468 from Mr-Sunglasses/fix/457
fix: name filter in images.list()
2024-11-25 15:21:46 +00:00
openshift-merge-bot[bot] 67aedd4e29
Merge pull request #471 from Mr-Sunglasses/docs/installation
docs: Add Installation and docs in README.md
2024-11-22 18:28:46 +00:00
Kanishk Pachauri f9dfcae67c
chore: Remove $ to improve copy.
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-22 23:34:10 +05:30
Kanishk Pachauri ddd31cdf60
docs: Add Installation and docs in README.md
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-22 23:32:33 +05:30
Nicola Sella 7e649973c8 Bump release to 5.3.0
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-11-22 16:21:04 +01:00
Nicola Sella 02e5829e7d Drop python<3.8 and enable testing up to py3.13
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-11-22 14:51:49 +01:00
Kanishk Pachauri 986ba477e1
fix: broken tests
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-20 02:10:18 +05:30
Kanishk Pachauri 1fb6c1ce98
chore: format with black
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-20 02:10:17 +05:30
Kanishk Pachauri 58587879fa
tests: Add test for name filter
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-20 02:10:17 +05:30
Kanishk Pachauri e9967daeaa
fix: name filter in images.list()
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-20 02:10:17 +05:30
Kanishk Pachauri c9b3d671a9
fix[docs]: Unindented example code on the index page
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-20 02:10:17 +05:30
openshift-merge-bot[bot] 5d3747e223
Merge pull request #460 from MattBelle/main
Added support for mounting directories through the volume keyword.
2024-11-18 15:28:17 +00:00
openshift-merge-bot[bot] 1aab4bbc1b
Merge pull request #467 from Mr-Sunglasses/main
fix[docs]: Unindented example code on the index page
2024-11-17 11:02:30 +00:00
Kanishk Pachauri 7e08da7dbe
fix[docs]: Unindented example code on the index page
Signed-off-by: Kanishk Pachauri <itskanishkp.py@gmail.com>
2024-11-15 23:54:54 +05:30
openshift-merge-bot[bot] e671a4d554
Merge pull request #464 from kianmeng/fix-typos
Fix typos
2024-11-13 03:22:27 +00:00
Kian-Meng Ang 5d58a20dd4 Fix typos
Found via `codespell -H` and `typos --hidden --format brief`

Signed-off-by: Kian-Meng Ang <kianmeng@cpan.org>
2024-11-13 01:47:12 +08:00
openshift-merge-bot[bot] d57618255c
Merge pull request #462 from MattBelle/Issue-461_NullLabel
Container.labels now returns an empty dict instead of None.
2024-11-06 12:35:35 +00:00
openshift-merge-bot[bot] 8cfc8c28e7
Merge pull request #447 from krrhodes/patch-1
Accept integer ports in containers_create.create
2024-11-05 11:43:24 +00:00
Kenai Rhodes 85b3220e79 Add Tests
Signed-off-by: Kenai Rhodes <kenai.rhodes@gmail.com>
2024-11-03 17:42:34 -06:00
Matt Belle 06d60c3ead Removed accidental modification.
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-31 15:11:05 -04:00
Matt Belle a11d82b2cd Container.labels now returns an empty dict instead of None.
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-31 15:07:08 -04:00
Matt Belle a13b25354c Added support for mounting directories through the volume keyword.
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-30 11:36:57 -04:00
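A rough picture of the directory-mount support added above: a docker-py-style `volumes` mapping normalized into mount specs. The helper and its field names are hypothetical — the library's internal representation may differ:

```python
def volumes_to_mounts(volumes):
    """Convert a docker-py-style volumes mapping into mount dicts
    (hypothetical helper sketching what the volume keyword enables)."""
    mounts = []
    for source, spec in volumes.items():
        mounts.append({
            "source": source,                          # host directory
            "target": spec["bind"],                    # path inside the container
            "read_only": spec.get("mode", "rw") == "ro",
        })
    return mounts

mounts = volumes_to_mounts({"/srv/data": {"bind": "/data", "mode": "ro"}})
print(mounts)
```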
openshift-merge-bot[bot] 7a674914ed
Merge pull request #454 from MattBelle/main
Added stream support to Container.exec_run().
2024-10-29 16:03:47 +00:00
Matt Belle 4f5cbdbf1b Fixed behavior if detach is set. Flattened test logic.
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-29 11:33:52 -04:00
Matt Belle 524087d804 Added integration tests.
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-23 11:56:08 -04:00
Matt Belle e948cfac5a Formatted code and cleaned up docstrings to pass the automated checks.
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-22 20:19:01 -04:00
Matt Belle b9c9d0e69a
Merge branch 'containers:main' into main 2024-10-22 19:43:20 -04:00
Matt Belle 03b87c0037 Added stream support to Container.exec_run().
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-22 19:41:52 -04:00
openshift-merge-bot[bot] 8a4cbf3c8a
Merge pull request #453 from lsm5/rpm-clean-changelog
[skip-ci] RPM: remove conditionals from changelog
2024-10-22 12:18:27 +00:00
Lokesh Mandvekar 3ee66fbbc3
[skip-ci] RPM: remove conditionals from changelog
All active Fedora and CentOS Stream environments support rpmautospec. So
changelog conditionals can be removed.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2024-10-22 17:13:56 +05:30
openshift-merge-bot[bot] 3e3170147d
Merge pull request #452 from MattBelle/main
Fix default stderr value of container.logs() to match documentation.
2024-10-17 15:52:35 +00:00
Matt Belle f743cbde15 Fix default stderr value of container.logs() to match documentation.
Signed-off-by: Matt Belle <matthew.belle@ibm.com>
2024-10-17 10:51:01 -04:00
Kenai River Rhodes 5694225816
Merge branch 'containers:main' into patch-1 2024-10-16 11:55:46 -05:00
openshift-merge-bot[bot] 5d986e6854
Merge pull request #442 from lsm5/packit-downstream-constraint
Packit: constrain koji and bodhi jobs to fedora package
2024-10-16 13:10:23 +00:00
Lokesh Mandvekar 1a35e46418
Packit: constrain koji and bodhi jobs to fedora package
This helps to avoid dup packit jobs during downstream updates.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2024-10-16 18:12:52 +05:30
openshift-merge-bot[bot] b91dfb0b36
Merge pull request #450 from Honny1/fix-circular-import
Fix cyclic-import
2024-10-15 15:32:29 +00:00
Jan Rodák 09bd51774f
Fix pylint cyclic-import
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
2024-10-15 16:55:45 +02:00
openshift-merge-bot[bot] 244d403981
Merge pull request #430 from inknos/update-ci-vm-images-2
Update CI VM images
2024-10-15 14:20:56 +00:00
Nicola Sella fd601b2cce Update CI VM images
https://github.com/containers/automation_images/pull/376

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-10-15 13:27:16 +02:00
Kenai River Rhodes 41a73c37d7
Accept integer ports in containers_create.create
Currently in containers_create.create(), the ports dict parameter is described as
    
> The keys of the dictionary are the ports to bind inside the container, either as an integer or a string in the form port/protocol
    
however, if the dict passed as this parameter has an integer as a key, it will result in a TypeError from the code on L625 `if "/" in container`.
    
Fix this by casting any integer keys to be strings beforehand.

Signed-off-by: Kenai River Rhodes <kenai.rhodes@gmail.com>
2024-10-14 14:26:57 -05:00
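The casting fix described above can be sketched in isolation — integer keys become strings before any `"/" in key` membership test runs (sketch of the approach, not the library's exact code):

```python
def normalize_port_keys(ports):
    """Cast integer keys to strings so later string operations such as
    '"/" in key' do not raise TypeError (sketch of the fix's approach)."""
    return {str(key): value for key, value in ports.items()}

ports = normalize_port_keys({8080: 8080, "53/udp": 53})
print(ports)  # {'8080': 8080, '53/udp': 53}
```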
openshift-merge-bot[bot] 8335410d2d
Merge pull request #435 from milanbalazs/main
Remove the container in case of detach mode
2024-10-07 13:55:02 +00:00
openshift-merge-bot[bot] 0863df618e
Merge pull request #436 from containers/renovate/ubuntu-24.x
Update dependency ubuntu to v24
2024-10-07 08:29:31 +00:00
renovate[bot] db45b4aeea
Update dependency ubuntu to v24
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-10-06 10:35:56 +00:00
openshift-merge-bot[bot] a8b9a24651
Merge pull request #441 from baude/OWNERS.update
Audit and Update OWNERS file
2024-10-06 10:35:42 +00:00
Brent Baude 3c76a82885 Audit and Update OWNERS file
Remove duplicate reviewers (from approvers) and promote Nicola to
approver.  Minor changes to both reviewers and approvers

Signed-off-by: Brent Baude <bbaude@redhat.com>
2024-10-04 10:46:57 -05:00
openshift-merge-bot[bot] db584e38e8
Merge pull request #440 from lsm5/packit-c9s
Packit: enable c9s downstream update
2024-10-04 12:11:26 +00:00
Lokesh Mandvekar 6f0aa8daf8
Packit: enable c9s downstream update
rpmautospec is now enabled on c9s envs so we should be ok to use the
spec file maintained upstream for c9s as well.

Fixes: RUN-2123

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2024-10-04 17:32:16 +05:30
openshift-merge-bot[bot] 2ba278db0c
Merge pull request #437 from cevich/not_me
Renovate: Update default assignment
2024-09-30 14:29:36 +00:00
Chris Evich 68c1bac566
Renovate: Update default assignment
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-09-27 11:31:54 -04:00
openshift-merge-bot[bot] da8fca9b93
Merge pull request #431 from aparcar/user
Don't use `root` as default user for exec_run
2024-09-27 10:50:53 +00:00
Milan Balazs c8fa31a69f Remove the container in case of detach mode
Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-09-26 20:54:28 +02:00
openshift-merge-bot[bot] 1c8c5bce72
Merge pull request #434 from milanbalazs/squash-pr-upstream-426
Extend the parameters of 'images.load' and 'login' methods
2024-09-26 18:40:11 +00:00
Milan Balazs 61cb8453db Add 'file_path' option for images.load method.
- Add tests for images.load
- Add validation of arguments via exceptions
- Split the load function so it can raise but can also keep the return
  type without yielding

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-09-25 17:16:07 +02:00
Milan Balazs 46d321f357 Add missing parameters for login method
- Handle the URL scheme based on TLS
- Since the 'auth' argument is not well documented in the podman
  swagger it will follow the same here until the swagger is better
  documented

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-09-25 17:16:07 +02:00
Paul Spooren 4175adf9b3 Don't use `root` as default user for exec_run
Instead use whatever user the container uses by default, which might be
`root` but may also be something else. This avoids manually figuring out
the default user in case some files inside the container are
specifically owned by a non-root user.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-09-25 15:59:13 +02:00
openshift-merge-bot[bot] 1767eea372
Merge pull request #432 from inknos/fix-pylint-R0917
Fix/Disable Pylint R0917
2024-09-24 18:01:15 +00:00
Nicola Sella 210c9dcb1d Fix/Disable Pylint R0917
Two ways of fixing this:
- Ignore R0917 for internal functions such as __init__
- Fix the positional arguments with *

Only exception is for run(image, command=None) for which it is
convenient to run with two positional args.
Example:
    run('fedora:latest', 'ls -la /')

Testing against Pylint stable.
Source of Pylint 3.3.1 docs
https://pylint.readthedocs.io/en/stable/user_guide/messages/refactor/too-many-positional-arguments.html

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-09-24 17:28:01 +02:00
openshift-merge-bot[bot] e998990967
Merge pull request #429 from jtluka/fix-ipam-driver
domain/networks_manager.py: use specified driver in IPAMConfig
2024-09-20 17:43:38 +00:00
openshift-merge-bot[bot] 0181e62d10
Merge pull request #423 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20240821
2024-09-18 20:02:59 +00:00
Jan Tluka 71e53c7ade domain/ipam.py: fix default value of ipam driver
Signed-off-by: Jan Tluka <jtluka@redhat.com>
2024-09-18 09:31:24 +02:00
Jan Tluka 86fbf2936d domain/networks_manager.py: use specified driver in IPAMConfig
When a user specifies an IPAM driver through IPAMConfig, the driver value was
ignored and not translated to json used in network create command. This
patch fixes the issue by translating the IPAM driver configuration in
network_create json data.

Signed-off-by: Jan Tluka <jtluka@redhat.com>
2024-09-18 08:35:12 +02:00
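The translation the fix above restores — carrying the user's IPAM driver into the network-create JSON instead of dropping it — might look like this. Field names here are assumptions for illustration:

```python
def build_ipam_payload(ipam_config):
    """Carry the user-specified IPAM driver through to the network-create
    payload instead of dropping it (sketch; key names are assumptions)."""
    payload = {}
    if "Driver" in ipam_config:
        payload["ipam_options"] = {"driver": ipam_config["Driver"]}
    if "Config" in ipam_config:
        payload["subnets"] = ipam_config["Config"]
    return payload

payload = build_ipam_payload({"Driver": "dhcp", "Config": []})
print(payload)
```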
renovate[bot] 972dfcb692
Update dependency containers/automation_images to v20240821
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-09-17 15:56:49 +00:00
openshift-merge-bot[bot] c68abf5204
Merge pull request #428 from inknos/issue-425
Remove wait condition in run()
2024-09-17 15:56:32 +00:00
Nicola Sella 75430faf90 Remove wait condition in run()
Fixes: https://github.com/containers/podman-py/issues/425

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-09-16 18:46:44 +02:00
openshift-merge-bot[bot] 0aa59150fd
Merge pull request #418 from cevich/upd_imgs
Fix podman search flake + update CI VM images
2024-08-12 20:00:18 +00:00
Chris Evich 3d3298e637
Remove duplicate linting
Code linting has a dedicated make target, as does testing.  However
previously, the testing target also ran a linting pass.  This is
unnecessary and slows down development, remove it from testing.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 13:57:27 -04:00
Chris Evich 559db70598
Simplify podman search test
This is an infrequently used feature, that tends to flake a lot.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 13:57:08 -04:00
renovate[bot] a57270e967
Update dependency containers/automation_images to v20240529
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-12 13:29:33 +00:00
openshift-merge-bot[bot] 4ebd99aaaa
Merge pull request #417 from Honny1/test_dns_option
Add test of container create with DNS option
2024-08-12 13:28:35 +00:00
Jan Rodák 464dfd8441
Add test of container create with DNS option
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
2024-08-09 15:08:49 +02:00
openshift-merge-bot[bot] f6c3d255cf
Merge pull request #415 from milanbalazs/main
Make "images.push" method support "format" parameter
2024-08-05 14:07:08 +00:00
openshift-merge-bot[bot] 060e8a66a5
Merge pull request #416 from lsm5/packit-package-name
[skip-ci] Packit: downstream_package_name for each package key
2024-08-02 15:41:25 +00:00
Lokesh Mandvekar 3eabadd4af
[skip-ci] Packit: downstream_package_name for each package key
When the fedora package name doesn't match with the upstream repo, the
`downstream_package_name` key needs to be mentioned for each package
config (fedora, centos).

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2024-08-02 10:04:16 -04:00
Milan Balazs 102ea67a6c Make "images.push" method support "format" parameter
Some repositories (AWS lambdas in ECR) have issues with
the oci manifest format and require the manifest be pushed
in the docker/v2s2 format.

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-08-02 15:29:08 +02:00
openshift-merge-bot[bot] 376c866053
Merge pull request #414 from inknos/5.2.0
Bump version to 5.2.0
2024-08-02 13:18:25 +00:00
Nicola Sella ffdf599f97 Bump version to 5.2.0
Also adding myself to Authors in setup.cfg

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-08-02 15:10:42 +02:00
Nicola Sella d99076a160 Add all, external, and label to Image.prune()
Param all is now supported by prune.
Param external is now supported by prune.
Filter by label, which was already supported, is now
documented and tested.

Resolves: https://github.com/containers/podman-py/issues/312

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-08-01 17:27:37 +02:00
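Assembling the new prune options into query parameters could be sketched like this; the exact wire format (flag names, JSON-encoded filters) is an assumption based on the commit description:

```python
import json

def prune_params(all=False, external=False, filters=None):
    """Assemble query parameters for the image-prune endpoint with the new
    all/external flags and label filters (sketch; wire format assumed)."""
    params = {"all": all, "external": external}
    if filters:
        # filters such as {"label": ["env=test"]} are JSON-encoded for the query string
        params["filters"] = json.dumps(filters)
    return params

params = prune_params(all=True, filters={"label": ["env=test"]})
print(params)
```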
openshift-merge-bot[bot] c5bde0473f
Merge pull request #412 from milanbalazs/main
Fix the TypeError exception in the images.prune method
2024-08-01 15:26:32 +00:00
Milan Balazs e70a3c17ac Fix the TypeError exception in the images.prune method
If the prune doesn't remove images,
the API returns "null" (with 200) and it's interpreted as
None (NoneType) so the for loop throws:
"TypeError: 'NoneType' object is not iterable".

My fix handles the case described above: client.images.prune()
returns a valid Dict with zero values, which is correct because
Podman didn't remove anything.

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-08-01 17:07:31 +02:00
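The null-body handling described above reduces to treating `None` as an empty list before iterating (sketch of the fix's approach; the result keys mirror the response format the commit describes):

```python
def summarize_prune(response_body):
    """Fold a prune response into a summary dict, treating a null body
    (nothing was removed) as an empty list so iteration cannot raise
    "TypeError: 'NoneType' object is not iterable" (sketch of the fix)."""
    deleted, reclaimed = [], 0
    for entry in response_body or []:  # response_body is None when nothing was pruned
        deleted.append(entry.get("Id"))
        reclaimed += entry.get("Size", 0)
    return {"ImagesDeleted": deleted, "SpaceReclaimed": reclaimed}

print(summarize_prune(None))  # {'ImagesDeleted': [], 'SpaceReclaimed': 0}
```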
openshift-merge-bot[bot] 7dbc101737
Merge pull request #409 from inknos/5.1.0
Bump version to 5.1.0
2024-07-27 09:50:22 +00:00
openshift-merge-bot[bot] ba977559c5
Merge pull request #410 from inknos/demux-stdout-stderr
Enable demux option in `exec_run`
2024-07-27 09:47:47 +00:00
Nicola Sella 05d1e7cb89 Enable demux option in `exec_run`
`exec_run` now returns a tuple of bytes if demux is True the first
element being the stdout and the second the stderr of the exec_run call.

Implementation is courtesy of:
60a52941f2/broker/binds/containers.py (L8-L48)

Resolves: https://github.com/containers/podman-py/issues/322

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-07-26 17:49:48 +02:00
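The demultiplexing that gives `exec_run` its `(stdout, stderr)` tuple follows the attach stream's 8-byte frame headers (stream type in byte 0 — 1 for stdout, 2 for stderr — and a big-endian payload length in bytes 4–8). A self-contained sketch, assuming that frame format:

```python
import struct

def demux(raw):
    """Split a multiplexed attach stream into (stdout, stderr) by walking
    8-byte frame headers (sketch of the demux behaviour exec_run gains)."""
    stdout, stderr = b"", b""
    offset = 0
    while offset + 8 <= len(raw):
        stream_type = raw[offset]
        (length,) = struct.unpack(">I", raw[offset + 4:offset + 8])
        payload = raw[offset + 8:offset + 8 + length]
        if stream_type == 1:
            stdout += payload
        elif stream_type == 2:
            stderr += payload
        offset += 8 + length
    return stdout, stderr

frames = b"\x01\x00\x00\x00\x00\x00\x00\x03out" + b"\x02\x00\x00\x00\x00\x00\x00\x03err"
print(demux(frames))  # (b'out', b'err')
```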
Nicola Sella a10f50b406 Bump version to 5.1.0
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-07-26 11:14:20 +02:00
openshift-merge-bot[bot] 35777adf8f
Merge pull request #408 from milanbalazs/main
Fix the locally non-existent image fails with AttributeError
2024-07-24 21:30:20 +00:00
Milan Balazs 281508bccb Fix the Black formatting issues
Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-07-17 16:23:35 +02:00
Milan Balazs c7d29b2f97 Fix the locally non-existent image fails with AttributeError
The code for running a new container with an image that is not
present locally tries to access the ImagesManager
via self.client.images.
As self.client is an APIClient instance and not an instance of
PodmanClient, an AttributeError occurs:

AttributeError: 'APIClient' object has no attribute 'images'

With this fix the 'PodmanClient' object is used instead of
'APIClient'.

Ticket:
https://github.com/containers/podman-py/issues/378

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-07-17 15:56:14 +02:00
openshift-merge-bot[bot] 12c877da1e
Merge pull request #406 from milanbalazs/main
Implementing the functionality of the 'named' argument of the 'Image.save' method
2024-07-10 20:48:40 +00:00
Milan Balazs 3cd2e214c7 Implementing the functionality of the 'named' argument
In the Image.save method, the named argument was ignored.
The self.id argument was always used, resulting in an empty repositories
file in the tarball.
As a result, if the tarball was loaded into Podman,
the image was not given a name (only None).
This limitation can cause problems in some cases,
since the name of the imported image is not known.

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-07-10 18:31:10 +02:00
openshift-merge-bot[bot] de94d9e4b1
Merge pull request #405 from jwoehr/patch-1
Update index.rst Client -> PodmanClient
2024-07-01 10:10:44 +00:00
Jack J. Woehr 6e929b8cdf
Update index.rst Client -> PodmanClient
Signed-off-by: Jack J. Woehr <4604036+jwoehr@users.noreply.github.com>
2024-06-29 19:02:36 -06:00
openshift-merge-bot[bot] e882700ee7
Merge pull request #401 from inknos/remove-pythonxdg
Add python 3.12 support and remove python xdg
2024-06-24 18:17:24 +00:00
Nicola Sella 330cf0b501 Add python3.12 to tox testing
Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-06-24 19:27:06 +02:00
Nicola Sella 8d1020ecc8 Remove python xdg dependency
Python xdg is used in a few parts of the code and its lack of support
could introduce vulnerabilities in the codebase.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-06-24 19:27:06 +02:00
openshift-merge-bot[bot] 17e48466c1
Merge pull request #396 from cevich/remove_ci_vm_names
Remove Fedora release number from task names
2024-06-03 14:49:44 +00:00
Chris Evich cd6e10cc92
Remove Fedora release number from task names
With Renovate managing CI VM updates, when a major Fedora release
happens, additional manual work is required to update the CI task names
so they match the release number.  This is extra burdensome on
maintainers, stop it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-31 10:02:43 -04:00
openshift-merge-bot[bot] 8267ed2ad7
Merge pull request #395 from inknos/fix-readme
Fix README TypeError when one container is running
2024-05-30 20:35:44 +00:00
Nicola Sella b8c0c85469 Fix README TypeError when one container is running
closes #379

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-05-30 16:09:46 +02:00
openshift-merge-bot[bot] 2262bd3cd7
Merge pull request #394 from containers/jwhonce-patch-1
Update OWNERS
2024-05-28 14:55:46 +00:00
Jhon Honce e32e3324b2
Update OWNERS
Updates reflect team changes

Signed-off-by: Jhon Honce <jhonce@redhat.com>
2024-05-23 11:57:05 -07:00
openshift-merge-bot[bot] d825ccbf39
Merge pull request #393 from lsm5/packit-update-release
[skip-ci] Packit: use default `update_release` behavior
2024-05-23 15:45:23 +00:00
Lokesh Mandvekar 933226a2c7
[skip-ci] Packit: use default `update_release` behavior
The release needs to be updated if you're building for a persistent
copr otherwise you'll end up with multiple builds with no change in the
release tag.
See recent builds at: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/package/python-podman/
Ref Build IDs: 7478065, 7455330, 7446571

Packit by default updates the Release in spec file for copr builds which
is what we want.

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2024-05-23 10:58:36 -04:00
openshift-merge-bot[bot] 80513e89d6
Merge pull request #392 from inknos/fix-pylint-E0606
Fix Pylint E0606 for undefined variable after else
2024-05-22 21:50:41 +00:00
Nicola Sella 0f2f714c7e Fix Pylint E0606 for undefined variable after else
Description is trivial: `old_toml_file` is undefined in the
else block.

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-05-22 15:02:28 +02:00
openshift-merge-bot[bot] b8d2c876da
Merge pull request #391 from eighthave/issue-390
ignore proxies from the env vars when using UNIX Domain Sockets
2024-05-17 19:58:14 +00:00
Hans-Christoph Steiner 20eeddc86e ignore proxies from the env vars when using UNIX Domain Sockets
closes #390

Signed-off-by: Hans-Christoph Steiner <hans@eds.org>
2024-05-16 22:09:16 +02:00
openshift-merge-bot[bot] 4d67ca24df
Merge pull request #388 from lsm5/packit-centos
[skip-ci] Packit: enable c10s downstream sync
2024-05-15 13:49:02 +00:00
Lokesh Mandvekar 7a63c8a134
[skip-ci] Packit: enable c10s downstream sync
This commit will enable downstream syncing to CentOS Stream 10. The
centos maintainer will need to manually run `packit propose-downstream`
and `centpkg build` until better centos integration is in place.

This commit also builds both rhel and centos stream copr rpms so we can check
for things like differences in python toolchain.

EL8 jobs have also been deleted. CentOS Stream 8 will go EOL soon.
We won't be shipping anything from main into EL8.

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2024-05-14 09:20:36 -04:00
openshift-merge-bot[bot] 1aa8d90673
Merge pull request #381 from jonded94/include-py-typed-marker
Include py.typed marker file.
2024-04-08 13:36:32 +00:00
Jonas Dedden 5f31809a82
Include py.typed marker file for mypy & co.
Signed-off-by: Jonas Dedden <university@jonas-dedden.de>
2024-04-05 21:35:15 +02:00
openshift-merge-bot[bot] 9886c620dc
Merge pull request #385 from apozsuse/auth_header_encoding_fix
Fixes encoding of `X-Registry-auth` HTTP Header value from Base64 to `url_safe` Base64
2024-04-04 18:38:36 +00:00
Andres Pozo 6db1724dc6
Merge branch 'containers:main' into auth_header_encoding_fix 2024-04-04 18:52:54 +02:00
openshift-merge-bot[bot] ca024cc2a1
Merge pull request #386 from robbmanes/fix_dns_option_typo
Fix dns_option typo
2024-04-04 13:41:34 +00:00
Robb Manes 8b2a77dec4 Fix dns_option typo
Signed-off-by: Robb Manes <robbmanes@protonmail.com>
2024-04-02 10:44:18 -04:00
Andres Pozo Munoz 219b2c6d17
Fixes encoding of `X-Registry-auth` HTTP Header value from Base64 to
url_safe Base64

Signed-off-by: Andres Pozo Munoz <andres.munoz@suse.com>
2024-03-26 18:35:58 +01:00
openshift-merge-bot[bot] 2a6ded6221
Merge pull request #382 from umohnani8/5.0
Bump version to 5.0.0
2024-03-22 21:46:27 +00:00
Urvashi Mohnani 628b182f1c Bump version to 5.0.0
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-03-22 17:36:34 -04:00
openshift-merge-bot[bot] 6016b7d7c0
Merge pull request #383 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20240320
2024-03-22 21:24:27 +00:00
Urvashi Mohnani 276329b6bf Fix pylint issue with yield
Use yield from instead of iterating through the list
and doing a yield.

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-03-21 13:38:03 -04:00
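The pylint fix above is the standard loop-to-delegation rewrite; both generators below are equivalent:

```python
def iterate_old(items):
    for item in items:   # the pattern pylint flags
        yield item

def iterate_new(items):
    yield from items     # equivalent delegation, what the fix switches to

print(list(iterate_new([1, 2, 3])))  # [1, 2, 3]
```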
renovate[bot] 5b57a3add4
chore(deps): update dependency containers/automation_images to v20240320
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-03-21 12:13:08 +00:00
openshift-merge-bot[bot] 53b238b75f
Merge pull request #377 from containers/renovate/tim-actions-get-pr-commits-1.x
[skip-ci] Update tim-actions/get-pr-commits action to v1.3.1
2024-02-28 15:36:44 +00:00
renovate[bot] 550964f4af
[skip-ci] Update tim-actions/get-pr-commits action to v1.3.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-02-27 18:54:40 +00:00
openshift-merge-bot[bot] 6f5d07ebaa
Merge pull request #376 from umohnani8/vol
Use volumes param for container rm
2024-02-27 18:54:24 +00:00
openshift-merge-bot[bot] 4002fd1722
Merge pull request #375 from umohnani8/5.0-dev
Remove deprecated max_pools_size arg
2024-02-26 20:23:24 +00:00
Urvashi Mohnani 3305b4fa7c Use volumes param for container rm
Use the volumes param instead of v to ensure that
the libpod endpoint actually removes anonymous volumes
when a container is deleted.

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-02-22 11:43:40 -05:00
Urvashi Mohnani 0741c5eece Remove deprecated max_pools_size arg
Completely remove the deprecated max_pools_size arg.

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-02-21 13:34:14 -05:00
openshift-merge-bot[bot] 68f9296e6c
Merge pull request #374 from umohnani8/5.0-dev
Use new json connections file
2024-02-21 17:24:41 +00:00
Urvashi Mohnani c3413735e8 Fix black format issues
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-02-20 10:58:41 -05:00
Urvashi Mohnani b190fc928d Use new json connections file
Podman has updated where it will store its
system connection information to a new json
format file.
Add support to podman-py to read from both the
new json file and old toml file giving preference
to the new json file.

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-02-20 10:58:33 -05:00
openshift-merge-bot[bot] 1585d918d0
Merge pull request #373 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20240125
2024-02-12 19:40:39 +00:00
renovate[bot] 3b8b8f2ac6
chore(deps): update dependency containers/automation_images to v20240125
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-02-12 17:27:12 +00:00
openshift-merge-bot[bot] 42e572c956
Merge pull request #372 from kajinamit/default-base-url
from_env: Use default base_url if no environment is given
2024-02-12 17:26:52 +00:00
Takashi Kajinami 13cb09a0dc Run black format
... to resolve the lint failure in CI.

Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
2024-02-12 17:12:09 +09:00
Takashi Kajinami 4a139fafbe from_env: Use default base_url if no environment is given
The PodmanClient class has its own mechanism to detect the default
value for base_url if no specific value is provided. However this
mechanism can't be used when creating an instance by from_env, because
from_env requires (DOCKER|CONTAINER)_HOST environment.

This drops the validation from from_env, so that users can use from_env
without explicitly setting the HOST environment.

Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
2024-02-12 17:12:09 +09:00
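The relaxed `from_env` behaviour described above boils down to one fallback chain — honour `CONTAINER_HOST`/`DOCKER_HOST` when set, otherwise use a default instead of raising. A sketch, where the default socket path is an assumption:

```python
import os

def resolve_base_url(default="http+unix:///run/podman/podman.sock"):
    """Pick the connection URL the way the relaxed from_env is described to
    (sketch; the default path here is an assumption, not library code)."""
    return os.environ.get("CONTAINER_HOST") or os.environ.get("DOCKER_HOST") or default

os.environ.pop("CONTAINER_HOST", None)
os.environ.pop("DOCKER_HOST", None)
print(resolve_base_url())  # http+unix:///run/podman/podman.sock
```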
openshift-merge-bot[bot] ce63c3d7b6
Merge pull request #368 from umohnani8/workdir
Add `workdir` as alias for `working_dir`
2024-01-23 17:34:57 +00:00
openshift-merge-bot[bot] d556cade6d
Merge pull request #369 from umohnani8/5.0-dev
Bump main to 5.0.0-dev
2024-01-23 12:32:37 +00:00
Urvashi Mohnani 22dcee5e21 Bump main to 5.0.0-dev
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-01-22 12:30:06 -05:00
Paul Spooren d85c51f281 Add `workdir` as alias for `working_dir`
Currently `podman-py` uses all three versions of `work_dir`,
`working_dir` and `workdir` (not to mention `WorkingDir`).

This commit tries to unify the parameter usage by allowing `workdir` for
container `create` or `run`. For backwards compatibility `working_dir`
still works.

Since upstream Podman uses a variety of *workdir* versions the
`podman-py` codebase can't be simplified further.

Fix: #330

Signed-off-by: Paul Spooren <mail@aparcar.org>
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-01-22 12:03:27 -05:00
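The aliasing described above amounts to a small normalization step. A sketch, under the assumption that keyword arguments are collected into a dict before the API payload is built (`normalize_workdir` is an illustrative name, not the actual helper):

```python
def normalize_workdir(kwargs):
    # Accept both spellings and collapse them onto the single key the
    # payload builder expects; an explicit 'working_dir' wins if both
    # are given, preserving backwards compatibility.
    if "workdir" in kwargs and "working_dir" not in kwargs:
        kwargs["working_dir"] = kwargs.pop("workdir")
    else:
        kwargs.pop("workdir", None)
    return kwargs

print(normalize_workdir({"workdir": "/app"}))  # {'working_dir': '/app'}
```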
openshift-merge-bot[bot] a6630ad43e
Merge pull request #367 from umohnani8/release-4.9
Bump version to v4.9
2024-01-22 16:13:30 +00:00
Urvashi Mohnani 0c9d0df17a Bump version to v4.9
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-01-22 08:39:14 -05:00
openshift-merge-bot[bot] 1d99b26155
Merge pull request #366 from milanbalazs/main
Fix the 'max_pool_size' parameter passing for Adapters
2024-01-18 17:56:09 +00:00
Milan Balazs 226498938a Use custom warning (ParameterDeprecationWarning)
With this custom warning class, only the specific warning
will be shown to the user (warnings.simplefilter).

It doesn't affect other warnings in other modules.

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-01-18 14:26:55 +01:00
Milan Balazs a26194ddf0 Fix the Python Black formatting.
Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-01-18 14:26:55 +01:00
Milan Balazs cd11e38269 Keep backward compatibility
Fix the parameter passing for HttpAdapter

Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-01-18 14:26:32 +01:00
Milan Balazs cc8e5252b7 Fix the 'max_pool_size' parameter passing
Signed-off-by: Milan Balazs <milanbalazs01@gmail.com>
2024-01-18 11:28:19 +01:00
openshift-merge-bot[bot] 51463b609b
Merge pull request #365 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20240102
2024-01-17 19:12:09 +00:00
openshift-merge-bot[bot] c3d227ad9c
Merge pull request #364 from dcasier/netns-user-defined
Enable user defined netns
2024-01-17 17:55:07 +00:00
Urvashi Mohnani 764048f7ba
Update .cirrus.yml
Bump fedora name to fedora-39

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2024-01-17 12:10:25 -05:00
renovate[bot] 2b64418686
Update dependency containers/automation_images to v20240102
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-16 19:54:08 +00:00
David Casier 65940fc3c1 Enable user defined netns
Signed-off-by: David Casier <david.casier@aevoo.fr>
2024-01-15 17:43:23 +01:00
openshift-merge-bot[bot] b242d8999d
Merge pull request #361 from yselkowitz/rich-optional
Make progress_bar an extra feature
2024-01-03 10:27:08 +00:00
Yaakov Selkowitz ea85c5fa92 Make progress_bar an extra feature
This allows the 'rich' dependency, which has additional dependencies and
is not available in RHEL, to be optional.

Fixes: #360

Signed-off-by: Yaakov Selkowitz <yselkowi@redhat.com>
2024-01-02 22:59:01 -05:00
Urvashi Mohnani 3b6276674b
Merge pull request #348 from umohnani8/release
Bump main to 4.9.0-dev
2023-12-21 12:02:24 -05:00
openshift-merge-bot[bot] 9be195a871
Merge pull request #355 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20231208
2023-12-21 08:24:29 +00:00
renovate[bot] c8ebe9df5e
Update dependency containers/automation_images to v20231208
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-21 07:55:03 +00:00
Urvashi Mohnani 2a007a32b5 Bump main to 4.9.0-dev
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2023-12-21 02:53:00 -05:00
openshift-merge-bot[bot] 039b648b9d
Merge pull request #358 from umohnani8/lint
Fix lint issues
2023-12-20 11:25:03 +00:00
Urvashi Mohnani ba05f89279 Fix lint issues
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2023-12-20 06:01:57 -05:00
openshift-merge-bot[bot] d6342f8af0
Merge pull request #353 from umohnani8/req
Add rich dep to setup.cfg
2023-12-11 12:33:58 +00:00
Urvashi Mohnani cb9c7a8715 Add rich dep to setup.cfg
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2023-12-11 07:02:23 -05:00
101 changed files with 3735 additions and 3995 deletions


@ -1,126 +0,0 @@
---
env:
DEST_BRANCH: "main"
GOPATH: "/var/tmp/go"
GOBIN: "${GOPATH}/bin"
GOCACHE: "${GOPATH}/cache"
GOSRC: "${GOPATH}/src/github.com/containers/podman"
CIRRUS_WORKING_DIR: "${GOPATH}/src/github.com/containers/podman-py"
SCRIPT_BASE: "./contrib/cirrus"
CIRRUS_SHELL: "/bin/bash"
HOME: "/root" # not set by default
####
#### Cache-image names to test with (double-quotes around names are critical)
####
FEDORA_NAME: "fedora-38"
# Google-cloud VM Images
IMAGE_SUFFIX: "c20231116t174419z-f39f38d13"
FEDORA_CACHE_IMAGE_NAME: "fedora-podman-py-${IMAGE_SUFFIX}"
gcp_credentials: ENCRYPTED[0c639039cdd3a9a93fac7746ea1bf366d432e5ff3303bf293e64a7ff38dee85fd445f71625fa5626dc438be2b8efe939]
# Default VM to use unless set or modified by task
gce_instance:
image_project: "libpod-218412"
zone: "us-central1-c" # Required by Cirrus for the time being
cpu: 2
memory: "4Gb"
disk: 200 # Required for performance reasons
image_name: "${FEDORA_CACHE_IMAGE_NAME}"
gating_task:
name: "Gating test"
alias: gating
# Only run this on PRs, never during post-merge testing. This is also required
# for proper setting of EPOCH_TEST_COMMIT value, required by validation tools.
only_if: $CIRRUS_PR != ""
timeout_in: 20m
env:
PATH: ${PATH}:${GOPATH}/bin
script:
- make
- make lint
test_task:
name: "Test on $FEDORA_NAME"
alias: test
depends_on:
- gating
script:
- ${SCRIPT_BASE}/enable_ssh.sh
- ${SCRIPT_BASE}/build_podman.sh
- ${SCRIPT_BASE}/enable_podman.sh
- ${SCRIPT_BASE}/test.sh
latest_task:
name: "Test Podman main on $FEDORA_NAME"
alias: latest
allow_failures: true
depends_on:
- gating
env:
PATH: ${PATH}:${GOPATH}/bin
script:
- ${SCRIPT_BASE}/enable_ssh.sh
- ${SCRIPT_BASE}/build_podman.sh
- ${SCRIPT_BASE}/enable_podman.sh
- ${SCRIPT_BASE}/test.sh
# This task is critical. It updates the "last-used by" timestamp stored
# in metadata for all VM images. This mechanism functions in tandem with
# an out-of-band pruning operation to remove disused VM images.
meta_task:
alias: meta
name: "VM img. keepalive"
container: &smallcontainer
image: "quay.io/libpod/imgts:latest"
cpu: 1
memory: 1
env:
IMGNAMES: ${FEDORA_CACHE_IMAGE_NAME}
BUILDID: "${CIRRUS_BUILD_ID}"
REPOREF: "${CIRRUS_REPO_NAME}"
GCPJSON: ENCRYPTED[e8a53772eff6e86bf6b99107b6e6ee3216e2ca00c36252ae3bd8cb29d9b903ffb2e1a1322ea810ca251b04f833b8f8d9]
GCPNAME: ENCRYPTED[fb878daf188d35c2ed356dc777267d99b59863ff3abf0c41199d562fca50ba0668fdb0d87e109c9eaa2a635d2825feed]
GCPPROJECT: "libpod-218412"
clone_script: &noop mkdir -p $CIRRUS_WORKING_DIR
script: /usr/local/bin/entrypoint.sh
# Status aggregator for all tests. This task simply ensures a defined
# set of tasks all passed, and allows confirming that based on the status
# of this task.
success_task:
name: "Total Success"
alias: success
# N/B: ALL tasks must be listed here, minus their '_task' suffix.
depends_on:
- meta
- gating
- test
- latest
container:
image: quay.io/libpod/alpine:latest
cpu: 1
memory: 1
env:
CIRRUS_SHELL: "/bin/sh"
clone_script: *noop
script: *noop

.fmf/version (new file)

@ -0,0 +1 @@
1


@ -51,5 +51,5 @@
*************************************************/
// Don't leave dep. update. PRs "hanging", assign them to people.
"assignees": ["umohnani8", "cevich"],
"assignees": ["inknos"],
}


@ -4,13 +4,13 @@ on:
jobs:
commit:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
# Only check commits on pull requests.
if: github.event_name == 'pull_request'
steps:
- name: get pr commits
id: 'get-pr-commits'
uses: tim-actions/get-pr-commits@v1.3.0
uses: tim-actions/get-pr-commits@v1.3.1
with:
token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/pre-commit.yml (new file)

@ -0,0 +1,18 @@
name: pre-commit
on:
pull_request:
push:
branches: [main]
jobs:
pre-commit:
runs-on: ubuntu-latest
env:
SKIP: no-commit-to-branch
steps:
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
with:
python-version: |
3.9
3.x
- uses: pre-commit/action@v3.0.1


@ -0,0 +1,126 @@
name: Publish Python 🐍 distribution 📦 to PyPI and TestPyPI
on: push
jobs:
build:
name: Build distribution 📦
# ensure the workflow is never executed on forked branches
# it would fail anyway, so we just avoid to see an error
if: ${{ github.repository == 'containers/podman-py' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.x"
- name: Install pypa/build
run: >-
python3 -m
pip install
build
--user
- name: Build a binary wheel and a source tarball
run: python3 -m build
- name: Store the distribution packages
uses: actions/upload-artifact@v4
with:
name: python-package-distributions
path: dist/
publish-to-pypi:
name: >-
Publish Python 🐍 distribution 📦 to PyPI
if: startsWith(github.ref, 'refs/tags/') && github.repository == 'containers/podman-py'
needs:
- build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/podman
permissions:
id-token: write # IMPORTANT: mandatory for trusted publishing
steps:
- name: Download all the dists
uses: actions/download-artifact@v5
with:
name: python-package-distributions
path: dist/
- name: Publish distribution 📦 to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
github-release:
name: >-
Sign the Python 🐍 distribution 📦 with Sigstore
and upload them to GitHub Release
if: github.repository == 'containers/podman-py'
needs:
- publish-to-pypi
runs-on: ubuntu-latest
permissions:
contents: write # IMPORTANT: mandatory for making GitHub Releases
id-token: write # IMPORTANT: mandatory for sigstore
steps:
- name: Download all the dists
uses: actions/download-artifact@v5
with:
name: python-package-distributions
path: dist/
- name: Sign the dists with Sigstore
uses: sigstore/gh-action-sigstore-python@v3.0.1
with:
inputs: >-
./dist/*.tar.gz
./dist/*.whl
- name: Create GitHub Release
env:
GITHUB_TOKEN: ${{ github.token }}
run: >-
gh release create
'${{ github.ref_name }}'
--repo '${{ github.repository }}'
--generate-notes
- name: Upload artifact signatures to GitHub Release
env:
GITHUB_TOKEN: ${{ github.token }}
# Upload to GitHub Release using the `gh` CLI.
# `dist/` contains the built packages, and the
# sigstore-produced signatures and certificates.
run: >-
gh release upload
'${{ github.ref_name }}' dist/**
--repo '${{ github.repository }}'
publish-to-testpypi:
name: Publish Python 🐍 distribution 📦 to TestPyPI
if: github.repository == 'containers/podman-py'
needs:
- build
runs-on: ubuntu-latest
environment:
name: testpypi
url: https://test.pypi.org/p/podman
permissions:
id-token: write # IMPORTANT: mandatory for trusted publishing
steps:
- name: Download all the dists
uses: actions/download-artifact@v5
with:
name: python-package-distributions
path: dist/
- name: Publish distribution 📦 to TestPyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
repository-url: https://test.pypi.org/legacy/
skip_existing: true
verbose: true


@ -2,40 +2,172 @@
# See the documentation for more information:
# https://packit.dev/docs/configuration/
downstream_package_name: python-podman
specfile_path: rpm/python-podman.spec
upstream_tag_template: v{version}
files_to_sync:
- src: rpm/gating.yml
dest: gating.yml
delete: true
- src: pyproject.toml
dest: pyproject.toml
delete: true
- src: plans/
dest: plans/
delete: true
mkpath: true
- src: tests/
dest: tests/
delete: true
mkpath: true
- src: .fmf/
dest: .fmf/
delete: true
mkpath: true
packages:
python-podman-fedora:
pkg_tool: fedpkg
downstream_package_name: python-podman
specfile_path: rpm/python-podman.spec
python-podman-centos:
pkg_tool: centpkg
downstream_package_name: python-podman
specfile_path: rpm/python-podman.spec
python-podman-rhel:
specfile_path: rpm/python-podman.spec
srpm_build_deps:
- make
jobs:
# Copr builds for Fedora
- job: copr_build
trigger: pull_request
identifier: pr-fedora
packages: [python-podman-fedora]
targets:
- fedora-all
- centos-stream-8
# Copr builds for CentOS Stream
- job: copr_build
trigger: pull_request
identifier: pr-centos
packages: [python-podman-centos]
targets:
- centos-stream-10
- centos-stream-9
# Copr builds for RHEL
- job: copr_build
trigger: pull_request
identifier: pr-rhel
packages: [python-podman-rhel]
targets:
- epel-9
# Run on commit to main branch
- job: copr_build
trigger: commit
identifier: commit-fedora
packages: [python-podman-fedora]
branch: main
owner: rhcontainerbot
project: podman-next
# Downstream sync for Fedora
- job: propose_downstream
trigger: release
update_release: false
packages: [python-podman-fedora]
dist_git_branches:
- fedora-all
# Downstream sync for CentOS Stream
# TODO: c9s enablement being tracked in https://issues.redhat.com/browse/RUN-2123
- job: propose_downstream
trigger: release
packages: [python-podman-centos]
dist_git_branches:
- c10s
- c9s
- job: koji_build
trigger: commit
packages: [python-podman-fedora]
dist_git_branches:
- fedora-all
- job: bodhi_update
trigger: commit
packages: [python-podman-fedora]
dist_git_branches:
- fedora-branched # rawhide updates are created automatically
# Test linting on the codebase
# This test might break based on the OS and lint used, so we follow fedora-latest as a reference
- job: tests
trigger: pull_request
identifier: distro-sanity
tmt_plan: /distro/sanity
packages: [python-podman-fedora]
targets:
- fedora-latest-stable
skip_build: true
# test unit test coverage
- job: tests
trigger: pull_request
identifier: unittest-coverage
tmt_plan: /distro/unittest_coverage
packages: [python-podman-fedora]
targets:
- fedora-latest-stable
skip_build: true
# TODO: test integration test coverage
# run all tests for all python versions on all fedoras
- job: tests
trigger: pull_request
identifier: distro-fedora-all
tmt_plan: /distro/all_python
packages: [python-podman-fedora]
targets:
- fedora-all
# run tests for the rawhide python version using podman-next packages
- job: tests
trigger: pull_request
identifier: podman-next-fedora-base
tmt_plan: /pnext/base_python
packages: [python-podman-fedora]
targets:
- fedora-rawhide
tf_extra_params:
environments:
- artifacts:
- type: repository-file
id: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/repo/fedora-$releasever/rhcontainerbot-podman-next-fedora-$releasever.repo
manual_trigger: true
labels:
- pnext
- podman-next
- job: tests
trigger: pull_request
identifier: distro-centos-base
tmt_plan: /distro/base_python
packages: [python-podman-centos]
targets:
- centos-stream-9
- centos-stream-10
- job: tests
trigger: pull_request
identifier: distro-rhel-base
tmt_plan: /distro/base_python
packages: [python-podman-rhel]
targets:
- epel-9

.pre-commit-config.yaml (new file)

@ -0,0 +1,27 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: check-yaml
exclude: "gating.yml"
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.12.8
hooks:
# Run the linter.
- id: ruff
args: [ --fix ]
# Run the formatter.
- id: ruff-format
- repo: https://github.com/teemtee/tmt.git
rev: 1.39.0
hooks:
- id: tmt-lint
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.15.0
hooks:
- id: mypy
pass_filenames: false
args: ["--package", "podman"]


@ -21,7 +21,10 @@ build:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: requirements.txt
- method: pip
path: .
extra_requirements:
- docs
# Build documentation in the docs/ directory with Sphinx
sphinx:


@ -25,9 +25,9 @@ Please don't include any private/sensitive information in your issue!
## Tools we use
- Python 3.6
- [pylint](https://www.pylint.org/)
- [black](https://github.com/psf/black)
- Python >= 3.9
- [pre-commit](https://pre-commit.com/)
- [ruff](https://docs.astral.sh/ruff/)
- [tox](https://tox.readthedocs.io/en/latest/)
- You may need to use [virtualenv](https://virtualenv.pypa.io/en/latest/) to
support Python 3.6
@ -45,6 +45,45 @@ pip install tox
tox -e coverage
```
#### Advanced testing
Always prefer to run `tox` directly, even when you want to run a specific test or scenario.
Instead of running `pytest` directly, you should run:
```
tox -e py -- podman/tests/integration/test_container_create.py -k test_container_directory_volume_mount
```
If you'd like to test against a specific `tox` environment you can do:
```
tox -e py312 -- podman/tests/integration/test_container_create.py -k test_container_directory_volume_mount
```
Pass pytest options after `--`.
#### Testing future features
Since `podman-py` follows stable releases of `podman`, tests are meant to run against
the libpod versions commonly shipped in distributions. Tests can be versioned,
but preferably they should not be. Occasionally, upstream can diverge and have features that
are not included in a specific version of libpod, or that will be included eventually.
To run a test against such changes, you need to have
[podman-next](https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next) installed.
Then, you need to mark the test as `@pytest.mark.pnext`. Marked tests will be excluded from the
runs unless you pass `--pnext` as a CLI option.
Preferably, this should be a rare case and it's better to use this marker as a temporary solution,
with the goal of removing the marker within a few PRs.
To run these tests use:
```
tox -e py -- --pnext -m pnext podman/tests/integration/test_container_create.py -k test_container_mounts_without_rw_as_default
```
The option `--pnext` **enables** the tests with the `pnext` pytest marker, and `-m pnext` will run
the marked tests **only**.
## Submitting changes
- Create a github pull request (PR)
@ -65,10 +104,12 @@ tox -e coverage
## Coding conventions
- Use [black](https://github.com/psf/black) code formatter. If you have tox
installed, run `tox -e black` to see what changes will be made. You can use
`tox -e black-format` to update the code formatting prior to committing.
- Pass pylint
- Formatting and linting are incorporated using [ruff](https://docs.astral.sh/ruff/).
- If you use [pre-commit](https://pre-commit.com/) the checks will run automatically when you commit changes
- If you prefer to run the checks on demand, use `pre-commit run -a` to run all pre-commit checks for you.
- If you'd like to see what's happening with the checks you can run the [linter](https://docs.astral.sh/ruff/linter/)
and [formatter](https://docs.astral.sh/ruff/formatter/) separately with `ruff check --diff` and `ruff format --diff`
- Checks need to pass pylint
- exceptions are possible, but you will need to make a good argument
- Use spaces not tabs for indentation
- This is open source software. Consider the people who will read your code,


@ -8,23 +8,37 @@ DESTDIR ?=
EPOCH_TEST_COMMIT ?= $(shell git merge-base $${DEST_BRANCH:-main} HEAD)
HEAD ?= HEAD
export PODMAN_VERSION ?= "4.8.0"
export PODMAN_VERSION ?= "5.6.0"
.PHONY: podman
podman:
rm dist/* || :
$(PYTHON) -m pip install --user -r requirements.txt
$(PYTHON) -m pip install -q build
PODMAN_VERSION=$(PODMAN_VERSION) \
$(PYTHON) setup.py sdist bdist bdist_wheel
$(PYTHON) -m build
.PHONY: lint
lint: tox
$(PYTHON) -m tox -e black,pylint
$(PYTHON) -m tox -e format,lint,mypy
.PHONY: tests
tests: tox
# see tox.ini for environment variable settings
$(PYTHON) -m tox -e pylint,coverage,py36,py38,py39,py310,py311
$(PYTHON) -m tox -e coverage,py39,py310,py311,py312,py313
.PHONY: tests-ci-base-python-podman-next
tests-ci-base-python-podman-next:
$(PYTHON) -m tox -e py -- --pnext -m pnext
.PHONY: tests-ci-base-python
tests-ci-base-python:
$(PYTHON) -m tox -e coverage,py
# TODO: coverage is probably not necessary here and in tests-ci-base-python
# but for now it's ok to leave it here so it's run
.PHONY: tests-ci-all-python
tests-ci-all-python:
$(PYTHON) -m tox -e coverage,py39,py310,py311,py312,py313
.PHONY: unittest
unittest:
@ -39,9 +53,9 @@ integration:
.PHONY: tox
tox:
ifeq (, $(shell which dnf))
brew install python@3.8 python@3.9 python@3.10 python@3.11
brew install python@3.9 python@3.10 python@3.11 python@3.12 python@3.13
else
-dnf install -y python3 python3.6 python3.8 python3.9
-dnf install -y python3 python3.9 python3.10 python3.11 python3.12 python3.13
endif
# ensure tox is available. It will take care of other testing requirements
$(PYTHON) -m pip install --user tox

OWNERS

@ -1,6 +1,4 @@
approvers:
- baude
- cdoern
- edsantiago
- giuseppe
- jwhonce
@ -8,22 +6,13 @@ approvers:
- Luap99
- mheon
- mwhahaha
- rhatdan
- TomSweeneyRedHat
- umohnani8
- vrothberg
- inknos
reviewers:
- ashley-cui
- baude
- cdoern
- edsantiago
- giuseppe
- jwhonce
- lsm5
- Luap99
- mheon
- mwhahaha
- Honny1
- rhatdan
- TomSweeneyRedHat
- umohnani8
- vrothberg
- Edward5hen


@ -1,14 +1,32 @@
# podman-py
[![Build Status](https://api.cirrus-ci.com/github/containers/podman-py.svg)](https://cirrus-ci.com/github/containers/podman-py/main)
[![PyPI Latest Version](https://img.shields.io/pypi/v/podman)](https://pypi.org/project/podman/)
This Python package is a library of bindings to use the RESTful API of [Podman](https://github.com/containers/podman).
It is currently under development and contributors are welcome!
## Installation
<div class="termy">
```console
pip install podman
```
</div>
---
**Documentation**: <a href="https://podman-py.readthedocs.io/en/latest/" target="_blank">https://podman-py.readthedocs.io/en/latest/</a>
**Source Code**: <a href="https://github.com/containers/podman-py" target="_blank">https://github.com/containers/podman-py</a>
---
## Dependencies
* For runtime dependencies, see [requirements.txt](https://github.com/containers/podman-py/blob/main/requirements.txt).
* For testing and development dependencies, see [test-requirements.txt](https://github.com/containers/podman-py/blob/main/test-requirements.txt).
* For runtime dependencies, see \[dependencies\] in [pyproject.toml](https://github.com/containers/podman-py/blob/main/pyproject.toml)
* For testing and development dependencies, see \[project.optional.dependencies\] in [pyproject.toml](https://github.com/containers/podman-py/blob/main/pyproject.toml)
* The package is split into \[progress\_bar\], \[docs\], and \[test\] extras
## Example usage
@ -35,9 +53,12 @@ with PodmanClient(base_url=uri) as client:
# find all containers
for container in client.containers.list():
first_name = container['Names'][0]
container = client.containers.get(first_name)
# After a list call, you would probably want to reload the container
# to get up-to-date information, such as its status.
# Note that list() ignores the sparse option and assumes True by default.
container.reload()
print(container, container.id, "\n")
print(container, container.status, "\n")
# available fields
print(sorted(container.attrs.keys()))


@ -1,10 +0,0 @@
#!/bin/bash
set -xeo pipefail
systemctl stop podman.socket || :
dnf erase podman -y
dnf copr enable rhcontainerbot/podman-next -y
dnf install podman -y


@ -1,11 +0,0 @@
#!/bin/bash
set -eo pipefail
systemctl enable podman.socket podman.service
systemctl start podman.socket
systemctl status podman.socket ||:
# log which version of podman we just enabled
echo "Locate podman: $(type -P podman)"
podman --version


@ -1,11 +0,0 @@
#!/bin/bash
set -eo pipefail
systemctl enable sshd
systemctl start sshd
systemctl status sshd ||:
ssh-keygen -t ecdsa -b 521 -f /root/.ssh/id_ecdsa -P ""
cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys%
cat /root/.ssh/id_ecdsa.pub >>/root/.ssh/authorized_keys


@ -1,5 +0,0 @@
#!/bin/bash
set -eo pipefail
make tests


@ -5,4 +5,3 @@
{% for docname in docnames %}
{{ docname }}
{%- endfor %}


@ -20,9 +20,9 @@ sys.path.insert(0, os.path.abspath('../..'))
# -- Project information -----------------------------------------------------
project = u'Podman Python SDK'
copyright = u'2021, Red Hat Inc'
author = u'Red Hat Inc'
project = 'Podman Python SDK'
copyright = '2021, Red Hat Inc'
author = 'Red Hat Inc'
# The full version, including alpha/beta/rc tags
version = '3.2.1.0'
@ -125,9 +125,7 @@ class PatchedPythonDomain(PythonDomain):
def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
if 'refspecific' in node:
del node['refspecific']
return super(PatchedPythonDomain, self).resolve_xref(
env, fromdocname, builder, typ, target, node, contnode
)
return super().resolve_xref(env, fromdocname, builder, typ, target, node, contnode)
def skip(app, what, name, obj, would_skip, options):


@ -34,13 +34,14 @@ Example
.. code-block:: python
:linenos:
import podman
import podman
with podman.PodmanClient() as client:
if client.ping():
images = client.images.list()
for image in images:
print(image.id)
with podman.Client() as client:
if client.ping():
images = client.images.list()
for image in images:
print(image.id)
.. toctree::
:caption: Podman Client

gating.yml (new file)

@ -0,0 +1,10 @@
---
!Policy
product_versions:
- fedora-*
decision_contexts:
- bodhi_update_push_stable
- bodhi_update_push_testing
subject_type: koji_build
rules:
- !PassingTestCaseRule {test_case_name: fedora-ci.koji-build./plans/downstream/all.functional}


@ -1,61 +0,0 @@
#!/usr/bin/env bash
#
# For help and usage information, simply execute the script w/o any arguments.
#
# This script is intended to be run by Red Hat podman-py developers who need
# to debug problems specifically related to Cirrus-CI automated testing.
# It requires that you have been granted prior access to create VMs in
# google-cloud. For non-Red Hat contributors, VMs are available as-needed,
# with supervision upon request.
set -e
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# Help detect if we were called by get_ci_vm container
GET_CI_VM="${GET_CI_VM:-0}"
in_get_ci_vm() {
if ((GET_CI_VM==0)); then
echo "Error: $1 is not intended for use in this context"
exit 2
fi
}
# get_ci_vm APIv1 container entrypoint calls into this script
# to obtain required repo. specific configuration options.
if [[ "$1" == "--config" ]]; then
in_get_ci_vm "$1"
cat <<EOF
DESTDIR="/var/tmp/go/src/github.com/containers/podman-py"
UPSTREAM_REPO="https://github.com/containers/podman-py.git"
CI_ENVFILE="/etc/ci_environment"
GCLOUD_PROJECT="podman-py"
GCLOUD_IMGPROJECT="libpod-218412"
GCLOUD_CFG="podman-py"
GCLOUD_ZONE="${GCLOUD_ZONE:-us-central1-c}"
GCLOUD_CPUS="2"
GCLOUD_MEMORY="4Gb"
GCLOUD_DISK="200"
EOF
elif [[ "$1" == "--setup" ]]; then
in_get_ci_vm "$1"
echo "+ Setting up and Running make" > /dev/stderr
echo 'PATH=$PATH:$GOPATH/bin' > /etc/ci_environment
make
else
# Create and access VM for specified Cirrus-CI task
mkdir -p $HOME/.config/gcloud/ssh
podman run -it --rm \
--tz=local \
-e NAME="$USER" \
-e SRCDIR=/src \
-e GCLOUD_ZONE="$GCLOUD_ZONE" \
-e DEBUG="${DEBUG:-0}" \
-v $REPO_DIRPATH:/src:O \
-v $HOME/.config/gcloud:/root/.config/gcloud:z \
-v $HOME/.config/gcloud/ssh:/root/.ssh:z \
quay.io/libpod/get_ci_vm:latest "$@"
fi

plans/main.fmf (new file)

@ -0,0 +1,116 @@
summary: Run Python Podman Tests
discover:
how: fmf
execute:
how: tmt
prepare:
- name: pkg dependencies
how: install
package:
- make
- python3-pip
- podman
- name: pip dependencies
how: shell
script:
- pip3 install .[test]
- name: ssh configuration
how: shell
script:
- ssh-keygen -t ecdsa -b 521 -f /root/.ssh/id_ecdsa -P ""
- cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys%
- cat /root/.ssh/id_ecdsa.pub >>/root/.ssh/authorized_keys
# Run tests against Podman Next builds.
# These tests should NOT overlap with the ones that run in the distro plan and should only include
# tests against upcoming features or upstream tests that we need to run for specific reasons.
/pnext:
prepare+:
- name: enable rhcontainerbot/podman-next update podman
when: initiator == packit
how: shell
script: |
COPR_REPO_FILE="/etc/yum.repos.d/*podman-next*.repo"
if compgen -G $COPR_REPO_FILE > /dev/null; then
sed -i -n '/^priority=/!p;$apriority=1' $COPR_REPO_FILE
fi
dnf -y upgrade --allowerasing
/base_python:
summary: Run Tests Upstream PRs for base Python
discover+:
filter: tag:pnext
adjust+:
enabled: false
when: initiator is not defined or initiator != packit
# Run tests against Podman builds installed from the distribution.
/distro:
prepare+:
- name: Enable testing repositories
when: initiator == packit && distro == fedora
how: shell
script: |
dnf config-manager setopt updates-testing.enabled=true
dnf -y upgrade --allowerasing --setopt=allow_vendor_change=true
/sanity:
summary: Run Sanity and Coverage checks on Python Podman
discover+:
# we want to change this to tag:stable once all the coverage tests are fixed
filter: tag:lint
/base_python:
summary: Run Tests Upstream for base Python
discover+:
filter: tag:base
/all_python:
summary: Run Tests Upstream PRs for all Python versions
prepare+:
- name: install all python versions
how: install
package:
- python3.9
- python3.10
- python3.11
- python3.12
- python3.13
discover+:
filter: tag:matrix
# TODO: replace with /coverage and include integration tests coverage
/unittest_coverage:
summary: Run Unit test coverage
discover+:
filter: tag:coverage & tag:unittest
adjust+:
enabled: false
when: initiator is not defined or initiator != packit
# Run tests against downstream Podman. These tests should be the all_python only since the sanity
# of code is tested in the distro environment
/downstream:
/all:
summary: Run Tests on bodhi / errata and dist-git PRs
prepare+:
- name: install all python versions
how: install
package:
- python3.9
- python3.10
- python3.11
- python3.12
- python3.13
discover+:
filter: tag:matrix
adjust+:
enabled: false
when: initiator == packit


@ -1,9 +1,5 @@
"""Podman client module."""
import sys
assert sys.version_info >= (3, 6), "Python 3.6 or greater is required."
from podman.client import PodmanClient, from_env
from podman.version import __version__


@ -1,10 +1,9 @@
"""Tools for connecting to a Podman service."""
import re
from podman.api.cached_property import cached_property
from podman.api.client import APIClient
from podman.api.http_utils import prepare_body, prepare_filters
from podman.api.api_versions import VERSION, COMPATIBLE_VERSION
from podman.api.http_utils import encode_auth_header, prepare_body, prepare_filters
from podman.api.parse_utils import (
decode_header,
frames,
@ -15,42 +14,20 @@ from podman.api.parse_utils import (
stream_helper,
)
from podman.api.tar_utils import create_tar, prepare_containerfile, prepare_containerignore
from .. import version
DEFAULT_CHUNK_SIZE = 2 * 1024 * 1024
def _api_version(release: str, significant: int = 3) -> str:
"""Return API version removing any additional identifiers from the release version.
This is a simple lexicographical parsing, no semantics are applied, e.g. semver checking.
"""
items = re.split(r"\.|-|\+", release)
parts = items[0:significant]
return ".".join(parts)
VERSION: str = _api_version(version.__version__)
COMPATIBLE_VERSION: str = _api_version(version.__compatible_version__, 2)
try:
from typing import Literal
except (ImportError, ModuleNotFoundError):
try:
from typing_extensions import Literal
except (ImportError, ModuleNotFoundError):
from podman.api.typing_extensions import Literal # pylint: disable=ungrouped-imports
# isort: unique-list
__all__ = [
'APIClient',
'COMPATIBLE_VERSION',
'DEFAULT_CHUNK_SIZE',
'Literal',
'VERSION',
'cached_property',
'create_tar',
'decode_header',
'encode_auth_header',
'frames',
'parse_repository',
'prepare_body',


@ -1,6 +1,7 @@
"""Utility functions for working with Adapters."""
from typing import NamedTuple, Mapping
from typing import NamedTuple
from collections.abc import Mapping
def _key_normalizer(key_class: NamedTuple, request_context: Mapping) -> Mapping:


@ -0,0 +1,18 @@
"""Constants API versions"""
import re
from .. import version
def _api_version(release: str, significant: int = 3) -> str:
"""Return API version removing any additional identifiers from the release version.
This is a simple lexicographical parsing, no semantics are applied, e.g. semver checking.
"""
items = re.split(r"\.|-|\+", release)
parts = items[0:significant]
return ".".join(parts)
VERSION: str = _api_version(version.__version__)
COMPATIBLE_VERSION: str = _api_version(version.__compatible_version__, 2)


@ -6,5 +6,5 @@ try:
from functools import cached_property # pylint: disable=unused-import
except ImportError:
def cached_property(fn):
def cached_property(fn): # type: ignore[no-redef]
return property(functools.lru_cache()(fn))
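Note: the fallback above (for interpreters without `functools.cached_property`) builds a caching property out of `lru_cache`. A quick self-contained check that it memoizes per instance; the `Config` class is a made-up example:

```python
import functools

def cached_property(fn):  # fallback shape from the hunk above
    return property(functools.lru_cache()(fn))

class Config:
    loads = 0

    @cached_property
    def data(self):
        Config.loads += 1
        return {"loaded": True}

cfg = Config()
assert cfg.data == {"loaded": True}
assert cfg.data == {"loaded": True}  # second access hits the cache
assert Config.loads == 1
```

One design consequence of this shape: `lru_cache` keys on `self`, so cached instances stay referenced by the cache, unlike the real `functools.cached_property`.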


@ -1,16 +1,24 @@
"""APIClient for connecting to Podman service."""
import json
import warnings
import urllib.parse
from typing import Any, ClassVar, IO, Iterable, List, Mapping, Optional, Tuple, Type, Union
from typing import (
Any,
ClassVar,
IO,
Optional,
Union,
)
from collections.abc import Iterable, Mapping
import requests
from requests.adapters import HTTPAdapter
from podman import api # pylint: disable=cyclic-import
from podman.api.api_versions import VERSION, COMPATIBLE_VERSION
from podman.api.ssh import SSHAdapter
from podman.api.uds import UDSAdapter
from podman.errors import APIError, NotFound
from podman.errors import APIError, NotFound, PodmanError
from podman.tlsconfig import TLSConfig
from podman.version import __version__
@ -19,15 +27,25 @@ _Data = Union[
str,
bytes,
Mapping[str, Any],
Iterable[Tuple[str, Optional[str]]],
Iterable[tuple[str, Optional[str]]],
IO,
]
"""Type alias for request data parameter."""
_Timeout = Union[None, float, Tuple[float, float], Tuple[float, None]]
_Timeout = Union[None, float, tuple[float, float], tuple[float, None]]
"""Type alias for request timeout parameter."""
class ParameterDeprecationWarning(DeprecationWarning):
"""
Custom DeprecationWarning for deprecated parameters.
"""
# Make the ParameterDeprecationWarning visible for user.
warnings.simplefilter('always', ParameterDeprecationWarning)
class APIResponse:
"""APIResponse proxy requests.Response objects.
@ -47,7 +65,7 @@ class APIResponse:
"""Forward any query for an attribute not defined in this proxy class to wrapped class."""
return getattr(self._response, item)
def raise_for_status(self, not_found: Type[APIError] = NotFound) -> None:
def raise_for_status(self, not_found: type[APIError] = NotFound) -> None:
"""Raises exception when Podman service reports one."""
if self.status_code < 400:
return
@ -70,7 +88,7 @@ class APIClient(requests.Session):
# Abstract methods (delete,get,head,post) are specialized and pylint cannot walk hierarchy.
# pylint: disable=too-many-instance-attributes,arguments-differ,arguments-renamed
supported_schemes: ClassVar[List[str]] = (
supported_schemes: ClassVar[list[str]] = (
"unix",
"http+unix",
"ssh",
@ -89,9 +107,9 @@ class APIClient(requests.Session):
num_pools: Optional[int] = None,
credstore_env: Optional[Mapping[str, str]] = None,
use_ssh_client=True,
max_pools_size=None,
max_pool_size=None,
**kwargs,
): # pylint: disable=unused-argument
): # pylint: disable=unused-argument,too-many-positional-arguments
"""Instantiate APIClient object.
Args:
@ -117,30 +135,39 @@ class APIClient(requests.Session):
self.base_url = self._normalize_url(base_url)
adapter_kwargs = kwargs.copy()
# The HTTPAdapter doesn't handle the "**kwargs", so it needs special structure
# where the parameters are set specifically.
http_adapter_kwargs = {}
if num_pools is not None:
adapter_kwargs["pool_connections"] = num_pools
if max_pools_size is not None:
adapter_kwargs["pool_maxsize"] = max_pools_size
http_adapter_kwargs["pool_connections"] = num_pools
if max_pool_size is not None:
adapter_kwargs["pool_maxsize"] = max_pool_size
http_adapter_kwargs["pool_maxsize"] = max_pool_size
if timeout is not None:
adapter_kwargs["timeout"] = timeout
if self.base_url.scheme == "http+unix":
self.mount("http://", UDSAdapter(self.base_url.geturl(), **adapter_kwargs))
self.mount("https://", UDSAdapter(self.base_url.geturl(), **adapter_kwargs))
# ignore proxies from the env vars
self.trust_env = False
elif self.base_url.scheme == "http+ssh":
self.mount("http://", SSHAdapter(self.base_url.geturl(), **adapter_kwargs))
self.mount("https://", SSHAdapter(self.base_url.geturl(), **adapter_kwargs))
elif self.base_url.scheme == "http":
self.mount("http://", HTTPAdapter(**adapter_kwargs))
self.mount("https://", HTTPAdapter(**adapter_kwargs))
self.mount("http://", HTTPAdapter(**http_adapter_kwargs))
self.mount("https://", HTTPAdapter(**http_adapter_kwargs))
else:
assert False, "APIClient.supported_schemes changed without adding a branch here."
raise PodmanError("APIClient.supported_schemes changed without adding a branch here.")
self.version = version or api.VERSION
self.version = version or VERSION
self.path_prefix = f"/v{self.version}/libpod/"
self.compatible_version = kwargs.get("compatible_version", api.COMPATIBLE_VERSION)
self.compatible_version = kwargs.get("compatible_version", COMPATIBLE_VERSION)
self.compatible_prefix = f"/v{self.compatible_version}/"
self.timeout = timeout
@ -179,6 +206,7 @@ class APIClient(requests.Session):
def delete(
self,
path: Union[str, bytes],
*,
params: Union[None, bytes, Mapping[str, str]] = None,
headers: Optional[Mapping[str, str]] = None,
timeout: _Timeout = None,
@ -213,7 +241,8 @@ class APIClient(requests.Session):
def get(
self,
path: Union[str, bytes],
params: Union[None, bytes, Mapping[str, List[str]]] = None,
*,
params: Union[None, bytes, Mapping[str, list[str]]] = None,
headers: Optional[Mapping[str, str]] = None,
timeout: _Timeout = None,
stream: Optional[bool] = False,
@ -247,6 +276,7 @@ class APIClient(requests.Session):
def head(
self,
path: Union[str, bytes],
*,
params: Union[None, bytes, Mapping[str, str]] = None,
headers: Optional[Mapping[str, str]] = None,
timeout: _Timeout = None,
@ -281,6 +311,7 @@ class APIClient(requests.Session):
def post(
self,
path: Union[str, bytes],
*,
params: Union[None, bytes, Mapping[str, str]] = None,
data: _Data = None,
headers: Optional[Mapping[str, str]] = None,
@ -300,6 +331,7 @@ class APIClient(requests.Session):
Keyword Args:
compatible: Will override the default path prefix with compatible prefix
verify: Whether to verify TLS certificates.
Raises:
APIError: when service returns an error
@ -318,6 +350,7 @@ class APIClient(requests.Session):
def put(
self,
path: Union[str, bytes],
*,
params: Union[None, bytes, Mapping[str, str]] = None,
data: _Data = None,
headers: Optional[Mapping[str, str]] = None,
@ -356,6 +389,7 @@ class APIClient(requests.Session):
self,
method: str,
path: Union[str, bytes],
*,
data: _Data = None,
params: Union[None, bytes, Mapping[str, str]] = None,
headers: Optional[Mapping[str, str]] = None,
@ -374,6 +408,7 @@ class APIClient(requests.Session):
Keyword Args:
compatible: Will override the default path prefix with compatible prefix
verify: Whether to verify TLS certificates.
Raises:
APIError: when service returns an error
@ -389,10 +424,10 @@ class APIClient(requests.Session):
path = path.lstrip("/") # leading / makes urljoin crazy...
# TODO should we have an option for HTTPS support?
scheme = "https" if kwargs.get("verify", None) else "http"
# Build URL for operation from base_url
uri = urllib.parse.ParseResult(
"http",
scheme,
self.base_url.netloc,
urllib.parse.urljoin(path_prefix, path),
self.base_url.params,
@ -409,6 +444,7 @@ class APIClient(requests.Session):
data=data,
headers=(headers or {}),
stream=stream,
verify=kwargs.get("verify", None),
**timeout_kw,
)
)
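Note: the URL assembly above (scheme chosen by ``verify``, path joined onto the version prefix) uses only stdlib ``urllib.parse``; a sketch with a hypothetical host and prefix showing why the leading ``/`` must be stripped before ``urljoin``:

```python
import urllib.parse

path_prefix = "/v5.0.0/libpod/"  # hypothetical version prefix
netloc = "localhost:8080"        # hypothetical service address

def build_uri(path: str, verify=None) -> str:
    path = path.lstrip("/")  # a leading '/' makes urljoin discard the prefix
    scheme = "https" if verify else "http"
    return urllib.parse.ParseResult(
        scheme, netloc, urllib.parse.urljoin(path_prefix, path), "", "", ""
    ).geturl()

print(build_uri("/containers/json"))
# http://localhost:8080/v5.0.0/libpod/containers/json

# Without the lstrip, urljoin throws the prefix away:
print(urllib.parse.urljoin(path_prefix, "/containers/json"))  # /containers/json
```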


@ -3,16 +3,17 @@
import base64
import collections.abc
import json
from typing import Dict, List, Mapping, Optional, Union, Any
from typing import Optional, Union, Any
from collections.abc import Mapping
def prepare_filters(filters: Union[str, List[str], Mapping[str, str]]) -> Optional[str]:
"""Return filters as an URL quoted JSON Dict[str, List[Any]]."""
def prepare_filters(filters: Union[str, list[str], Mapping[str, str]]) -> Optional[str]:
"""Return filters as an URL quoted JSON dict[str, list[Any]]."""
if filters is None or len(filters) == 0:
return None
criteria: Dict[str, List[str]] = {}
criteria: dict[str, list[str]] = {}
if isinstance(filters, str):
_format_string(filters, criteria)
elif isinstance(filters, collections.abc.Mapping):
@ -42,12 +43,12 @@ def _format_dict(filters, criteria):
for key, value in filters.items():
if value is None:
continue
value = str(value)
str_value = str(value)
if key in criteria:
criteria[key].append(value)
criteria[key].append(str_value)
else:
criteria[key] = [value]
criteria[key] = [str_value]
def _format_string(filters, criteria):
@ -67,7 +68,7 @@ def prepare_body(body: Mapping[str, Any]) -> str:
return json.dumps(body, sort_keys=True)
def _filter_values(mapping: Mapping[str, Any], recursion=False) -> Dict[str, Any]:
def _filter_values(mapping: Mapping[str, Any], recursion=False) -> dict[str, Any]:
"""Returns a canonical dictionary with values == None or empty Iterables removed.
Dictionary is walked using recursion.
@ -84,6 +85,7 @@ def _filter_values(mapping: Mapping[str, Any], recursion=False) -> Dict[str, Any
continue
# depending on type we need details...
proposal: Any
if isinstance(value, collections.abc.Mapping):
proposal = _filter_values(value, recursion=True)
elif isinstance(value, collections.abc.Iterable) and not isinstance(value, str):
@ -91,7 +93,7 @@ def _filter_values(mapping: Mapping[str, Any], recursion=False) -> Dict[str, Any
else:
proposal = value
if not recursion and proposal not in (None, str(), [], {}):
if not recursion and proposal not in (None, "", [], {}):
canonical[key] = proposal
elif recursion and proposal not in (None, [], {}):
canonical[key] = proposal
@ -99,5 +101,5 @@ def _filter_values(mapping: Mapping[str, Any], recursion=False) -> Dict[str, Any
return canonical
def encode_auth_header(auth_config: Dict[str, str]) -> str:
return base64.b64encode(json.dumps(auth_config).encode('utf-8'))
def encode_auth_header(auth_config: dict[str, str]) -> bytes:
return base64.urlsafe_b64encode(json.dumps(auth_config).encode('utf-8'))
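Note: the switch to `urlsafe_b64encode` (alphabet `-`/`_` instead of `+`/`/`) and a `bytes` return can be checked in isolation; a sketch of the new shape with a made-up credential mapping:

```python
import base64
import json

def encode_auth_header(auth_config: dict[str, str]) -> bytes:
    # URL-safe alphabet so the value can travel in a header without escaping.
    return base64.urlsafe_b64encode(json.dumps(auth_config).encode('utf-8'))

header = encode_auth_header({"username": "user", "password": "secret"})
assert isinstance(header, bytes)
# Round-trips back to the original mapping:
assert json.loads(base64.urlsafe_b64decode(header)) == {
    "username": "user",
    "password": "secret",
}
```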


@ -0,0 +1,49 @@
"""Utility functions for dealing with stdout and stderr."""
HEADER_SIZE = 8
STDOUT = 1
STDERR = 2
# pylint: disable=line-too-long
def demux_output(data_bytes):
"""Demuxes the output of a container stream into stdout and stderr streams.
Stream data is expected to be in the following format:
- 1 byte: stream type (1=stdout, 2=stderr)
- 3 bytes: padding
- 4 bytes: payload size (big-endian)
- N bytes: payload data
ref: https://docs.podman.io/en/latest/_static/api.html?version=v5.0#tag/containers/operation/ContainerAttachLibpod
Args:
data_bytes: Bytes object containing the combined stream data.
Returns:
A tuple containing two bytes objects: (stdout, stderr).
"""
stdout = b""
stderr = b""
while len(data_bytes) >= HEADER_SIZE:
# Extract header information
header, data_bytes = data_bytes[:HEADER_SIZE], data_bytes[HEADER_SIZE:]
stream_type = header[0]
payload_size = int.from_bytes(header[4:HEADER_SIZE], "big")
# Check if data is sufficient for payload
if len(data_bytes) < payload_size:
break # Incomplete frame, wait for more data
# Extract and process payload
payload = data_bytes[:payload_size]
if stream_type == STDOUT:
stdout += payload
elif stream_type == STDERR:
stderr += payload
else:
# todo: Handle unexpected stream types
pass
# Update data for next frame
data_bytes = data_bytes[payload_size:]
return stdout or None, stderr or None
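Note: the 8-byte frame header described above (1-byte stream type, 3-byte padding, 4-byte big-endian length) can be verified against hand-built frames; a self-contained reimplementation of the loop from this hunk:

```python
STDOUT, STDERR, HEADER_SIZE = 1, 2, 8

def demux_output(data_bytes: bytes):
    """Split multiplexed attach output into (stdout, stderr)."""
    stdout = stderr = b""
    while len(data_bytes) >= HEADER_SIZE:
        header, data_bytes = data_bytes[:HEADER_SIZE], data_bytes[HEADER_SIZE:]
        stream_type = header[0]
        payload_size = int.from_bytes(header[4:HEADER_SIZE], "big")
        if len(data_bytes) < payload_size:
            break  # incomplete frame, wait for more data
        payload, data_bytes = data_bytes[:payload_size], data_bytes[payload_size:]
        if stream_type == STDOUT:
            stdout += payload
        elif stream_type == STDERR:
            stderr += payload
    return stdout or None, stderr or None

def frame(stream_type: int, payload: bytes) -> bytes:
    """Build one multiplexed frame in the wire format above."""
    return bytes([stream_type, 0, 0, 0]) + len(payload).to_bytes(4, "big") + payload

out, err = demux_output(frame(STDOUT, b"hello ") + frame(STDERR, b"oops") + frame(STDOUT, b"world"))
assert out == b"hello world"
assert err == b"oops"
```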


@ -4,33 +4,32 @@ import base64
import ipaddress
import json
import struct
from datetime import datetime
from typing import Any, Dict, Iterator, Optional, Tuple, Union
from datetime import datetime, timezone
from typing import Any, Optional, Union
from collections.abc import Iterator
from requests import Response
from podman.api.client import APIResponse
from .output_utils import demux_output
def parse_repository(name: str) -> Tuple[str, Optional[str]]:
"""Parse repository image name from tag or digest
def parse_repository(name: str) -> tuple[str, Optional[str]]:
"""Parse repository image name from tag.
Returns:
item 1: repository name
item 2: Either digest and tag, tag, or None
item 2: Either tag or None
"""
# split image name and digest
elements = name.split("@", 1)
if len(elements) == 2:
return elements[0], elements[1]
# split repository and image name from tag
elements = name.split(":", 1)
# tags need to be split from the right since
# a port number might increase the split list len by 1
elements = name.rsplit(":", 1)
if len(elements) == 2 and "/" not in elements[1]:
return elements[0], elements[1]
return name, None
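Note: splitting from the right is the point of this fix; a registry port (`host:5000/repo`) must not be mistaken for a tag separator. A sketch of the rewritten function:

```python
from typing import Optional

def parse_repository(name: str) -> tuple[str, Optional[str]]:
    # rsplit from the right so a registry port is not taken for a tag.
    elements = name.rsplit(":", 1)
    if len(elements) == 2 and "/" not in elements[1]:
        return elements[0], elements[1]
    return name, None

assert parse_repository("quay.io:5000/libpod/alpine") == ("quay.io:5000/libpod/alpine", None)
assert parse_repository("quay.io:5000/libpod/alpine:latest") == ("quay.io:5000/libpod/alpine", "latest")
assert parse_repository("alpine") == ("alpine", None)
```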
def decode_header(value: Optional[str]) -> Dict[str, Any]:
def decode_header(value: Optional[str]) -> dict[str, Any]:
"""Decode a base64 JSON header value."""
if value is None:
return {}
@ -49,13 +48,15 @@ def prepare_timestamp(value: Union[datetime, int, None]) -> Optional[int]:
return value
if isinstance(value, datetime):
delta = value - datetime.utcfromtimestamp(0)
if value.tzinfo is None:
value = value.replace(tzinfo=timezone.utc)
delta = value - datetime.fromtimestamp(0, timezone.utc)
return delta.seconds + delta.days * 24 * 3600
raise ValueError(f"Type '{type(value)}' is not supported by prepare_timestamp()")
def prepare_cidr(value: Union[ipaddress.IPv4Network, ipaddress.IPv6Network]) -> (str, str):
def prepare_cidr(value: Union[ipaddress.IPv4Network, ipaddress.IPv6Network]) -> tuple[str, str]:
"""Returns network address and Base64 encoded netmask from CIDR.
The return values are dictated by the Go JSON decoder.
@ -63,7 +64,7 @@ def prepare_cidr(value: Union[ipaddress.IPv4Network, ipaddress.IPv6Network]) ->
return str(value.network_address), base64.b64encode(value.netmask.packed).decode("utf-8")
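Note: the return shape here (network address plus Base64-encoded packed netmask, as the Go JSON decoder expects) is easy to check with the stdlib:

```python
import base64
import ipaddress

def prepare_cidr(value) -> tuple[str, str]:
    """Return (network address, Base64-encoded packed netmask) from a CIDR."""
    return str(value.network_address), base64.b64encode(value.netmask.packed).decode("utf-8")

addr, mask = prepare_cidr(ipaddress.IPv4Network("10.0.0.0/24"))
assert addr == "10.0.0.0"
assert mask == "////AA=="  # b'\xff\xff\xff\x00' in Base64
```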
def frames(response: Response) -> Iterator[bytes]:
def frames(response: APIResponse) -> Iterator[bytes]:
"""Returns each frame from multiplexed payload, all results are expected in the payload.
The stdout and stderr frames are undifferentiated as they are returned.
@ -79,11 +80,13 @@ def frames(response: Response) -> Iterator[bytes]:
yield response.content[frame_begin:frame_end]
def stream_frames(response: Response) -> Iterator[bytes]:
def stream_frames(
response: APIResponse, demux: bool = False
) -> Iterator[Union[bytes, tuple[bytes, bytes]]]:
"""Returns each frame from multiplexed streamed payload.
Notes:
The stdout and stderr frames are undifferentiated as they are returned.
If ``demux`` then output will be tuples where the first position is ``STDOUT`` and the second
is ``STDERR``.
"""
while True:
header = response.raw.read(8)
@ -95,14 +98,18 @@ def stream_frames(response: Response) -> Iterator[bytes]:
continue
data = response.raw.read(frame_length)
if demux:
data = demux_output(header + data)
if not data:
return
yield data
def stream_helper(
response: Response, decode_to_json: bool = False
) -> Union[Iterator[bytes], Iterator[Dict[str, Any]]]:
response: APIResponse, decode_to_json: bool = False
) -> Union[Iterator[bytes], Iterator[dict[str, Any]]]:
"""Helper to stream results and optionally decode to json"""
for value in response.iter_lines():
if decode_to_json:

podman/api/path_utils.py (new file, 54 lines)

@ -0,0 +1,54 @@
"""Helper functions for managing paths"""
import errno
import getpass
import os
import stat
def get_runtime_dir() -> str:
"""Returns the runtime directory for the current user
The value in XDG_RUNTIME_DIR is preferred, but that is not always set, for
example, on headless servers. /run/user/$UID is defined in the XDG documentation.
"""
try:
return os.environ['XDG_RUNTIME_DIR']
except KeyError:
user = getpass.getuser()
run_user = f'/run/user/{os.getuid()}'
if os.path.isdir(run_user):
return run_user
fallback = f'/tmp/podmanpy-runtime-dir-fallback-{user}'
try:
# This must be a real directory, not a symlink, so attackers can't
# point it elsewhere. So we use lstat to check it.
fallback_st = os.lstat(fallback)
except OSError as e:
if e.errno == errno.ENOENT:
os.mkdir(fallback, 0o700)
else:
raise
else:
# The fallback must be a directory
if not stat.S_ISDIR(fallback_st.st_mode):
os.unlink(fallback)
os.mkdir(fallback, 0o700)
# Must be owned by the user and not accessible by anyone else
elif (fallback_st.st_uid != os.getuid()) or (
fallback_st.st_mode & (stat.S_IRWXG | stat.S_IRWXO)
):
os.rmdir(fallback)
os.mkdir(fallback, 0o700)
return fallback
def get_xdg_config_home() -> str:
"""Returns the XDG_CONFIG_HOME directory for the current user"""
try:
return os.environ["XDG_CONFIG_HOME"]
except KeyError:
return os.path.join(os.path.expanduser("~"), ".config")
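Note: the lookup order in `get_runtime_dir` above is `XDG_RUNTIME_DIR`, then `/run/user/$UID`, then a per-user `/tmp` fallback. A simplified POSIX-only sketch of just that order (the real function additionally hardens the `/tmp` fallback against symlink attacks with `lstat`, ownership, and mode checks):

```python
import getpass
import os

def get_runtime_dir_sketch() -> str:
    if "XDG_RUNTIME_DIR" in os.environ:
        return os.environ["XDG_RUNTIME_DIR"]
    run_user = f"/run/user/{os.getuid()}"
    if os.path.isdir(run_user):
        return run_user
    return f"/tmp/podmanpy-runtime-dir-fallback-{getpass.getuser()}"

os.environ["XDG_RUNTIME_DIR"] = "/run/user/1000"
assert get_runtime_dir_sketch() == "/run/user/1000"
```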


@ -15,12 +15,12 @@ from contextlib import suppress
from typing import Optional, Union
import time
import xdg.BaseDirectory
import urllib3
import urllib3.connection
from requests.adapters import DEFAULT_POOLBLOCK, DEFAULT_RETRIES, HTTPAdapter
from podman.api.path_utils import get_runtime_dir
from .adapter_utils import _key_normalizer
@ -46,7 +46,7 @@ class SSHSocket(socket.socket):
self.identity = identity
self._proc: Optional[subprocess.Popen] = None
runtime_dir = pathlib.Path(xdg.BaseDirectory.get_runtime_dir(strict=False)) / "podman"
runtime_dir = pathlib.Path(get_runtime_dir()) / "podman"
runtime_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
self.local_sock = runtime_dir / f"podman-forward-{random.getrandbits(80):x}.sock"
@ -250,7 +250,7 @@ class SSHAdapter(HTTPAdapter):
max_retries: int = DEFAULT_RETRIES,
pool_block: int = DEFAULT_POOLBLOCK,
**kwargs,
):
): # pylint: disable=too-many-positional-arguments
"""Initialize SSHAdapter.
Args:


@ -6,12 +6,12 @@ import shutil
import tarfile
import tempfile
from fnmatch import fnmatch
from typing import BinaryIO, List, Optional
from typing import BinaryIO, Optional
import sys
def prepare_containerignore(anchor: str) -> List[str]:
def prepare_containerignore(anchor: str) -> list[str]:
"""Return the list of patterns for filenames to exclude.
.containerignore takes precedence over .dockerignore.
@ -24,7 +24,7 @@ def prepare_containerignore(anchor: str) -> List[str]:
with ignore.open(encoding='utf-8') as file:
return list(
filter(
lambda l: l and not l.startswith("#"),
lambda i: i and not i.startswith("#"),
(line.strip() for line in file.readlines()),
)
)
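Note: the lambda rename (`l` → `i`) fixes ruff's E741 ambiguous-name warning without changing behavior; the filter still drops blank lines and comments. A standalone check with a made-up ignore file:

```python
lines = [
    "# build artifacts",
    "*.o",
    "",
    "build/",
]
patterns = list(
    filter(
        lambda i: i and not i.startswith("#"),
        (line.strip() for line in lines),
    )
)
assert patterns == ["*.o", "build/"]
```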
@ -53,7 +53,7 @@ def prepare_containerfile(anchor: str, dockerfile: str) -> str:
def create_tar(
anchor: str, name: str = None, exclude: List[str] = None, gzip: bool = False
anchor: str, name: str = None, exclude: list[str] = None, gzip: bool = False
) -> BinaryIO:
"""Create a tarfile from context_dir to send to Podman service.
@ -119,7 +119,7 @@ def create_tar(
return open(name.name, "rb") # pylint: disable=consider-using-with
def _exclude_matcher(path: str, exclude: List[str]) -> bool:
def _exclude_matcher(path: str, exclude: list[str]) -> bool:
"""Returns True if path matches an entry in exclude.
Note:

File diff suppressed because it is too large.


@ -137,7 +137,7 @@ class UDSAdapter(HTTPAdapter):
max_retries=DEFAULT_RETRIES,
pool_block=DEFAULT_POOLBLOCK,
**kwargs,
):
): # pylint: disable=too-many-positional-arguments
"""Initialize UDSAdapter.
Args:
@ -153,7 +153,7 @@ class UDSAdapter(HTTPAdapter):
Examples:
requests.Session.mount(
"http://", UDSAdapater("http+unix:///run/user/1000/podman/podman.sock"))
"http://", UDSAdapter("http+unix:///run/user/1000/podman/podman.sock"))
"""
self.poolmanager: Optional[UDSPoolManager] = None


@ -4,12 +4,11 @@ import logging
import os
from contextlib import AbstractContextManager
from pathlib import Path
from typing import Any, Dict, Optional
import xdg.BaseDirectory
from typing import Any, Optional
from podman.api import cached_property
from podman.api.client import APIClient
from podman.api.path_utils import get_runtime_dir
from podman.domain.config import PodmanConfig
from podman.domain.containers_manager import ContainersManager
from podman.domain.events import EventsManager
@ -70,9 +69,7 @@ class PodmanClient(AbstractContextManager):
# Override configured identity, if provided in arguments
api_kwargs["identity"] = kwargs.get("identity", str(connection.identity))
elif "base_url" not in api_kwargs:
path = str(
Path(xdg.BaseDirectory.get_runtime_dir(strict=False)) / "podman" / "podman.sock"
)
path = str(Path(get_runtime_dir()) / "podman" / "podman.sock")
api_kwargs["base_url"] = "http+unix://" + path
self.api = APIClient(**api_kwargs)
@ -85,13 +82,14 @@ class PodmanClient(AbstractContextManager):
@classmethod
def from_env(
cls,
*,
version: str = "auto",
timeout: Optional[int] = None,
max_pool_size: Optional[int] = None,
ssl_version: Optional[int] = None, # pylint: disable=unused-argument
assert_hostname: bool = False, # pylint: disable=unused-argument
environment: Optional[Dict[str, str]] = None,
credstore_env: Optional[Dict[str, str]] = None,
environment: Optional[dict[str, str]] = None,
credstore_env: Optional[dict[str, str]] = None,
use_ssh_client: bool = True, # pylint: disable=unused-argument
) -> "PodmanClient":
"""Returns connection to service using environment variables and parameters.
@ -124,23 +122,24 @@ class PodmanClient(AbstractContextManager):
if version == "auto":
version = None
host = environment.get("CONTAINER_HOST") or environment.get("DOCKER_HOST") or None
if host is None:
raise ValueError("CONTAINER_HOST or DOCKER_HOST must be set to URL of podman service.")
kwargs = {
'version': version,
'timeout': timeout,
'tls': False,
'credstore_env': credstore_env,
'max_pool_size': max_pool_size,
}
return PodmanClient(
base_url=host,
version=version,
timeout=timeout,
tls=False,
credstore_env=credstore_env,
max_pool_size=max_pool_size,
)
host = environment.get("CONTAINER_HOST") or environment.get("DOCKER_HOST") or None
if host is not None:
kwargs['base_url'] = host
return PodmanClient(**kwargs)
@cached_property
def containers(self) -> ContainersManager:
"""Returns Manager for operations on containers stored by a Podman service."""
return ContainersManager(client=self.api)
return ContainersManager(client=self.api, podman_client=self)
@cached_property
def images(self) -> ImagesManager:
@ -176,7 +175,7 @@ class PodmanClient(AbstractContextManager):
def system(self):
return SystemManager(client=self.api)
def df(self) -> Dict[str, Any]: # pylint: disable=missing-function-docstring,invalid-name
def df(self) -> dict[str, Any]: # pylint: disable=missing-function-docstring,invalid-name
return self.system.df()
df.__doc__ = SystemManager.df.__doc__


@ -3,11 +3,11 @@
import sys
import urllib
from pathlib import Path
from typing import Dict, Optional
import xdg.BaseDirectory
from typing import Optional
import json
from podman.api import cached_property
from podman.api.path_utils import get_xdg_config_home
if sys.version_info >= (3, 11):
from tomllib import loads as toml_loads
@ -24,7 +24,7 @@ else:
class ServiceConnection:
"""ServiceConnection defines a connection to the Podman service."""
def __init__(self, name: str, attrs: Dict[str, str]):
def __init__(self, name: str, attrs: dict[str, str]):
"""Create a Podman ServiceConnection."""
self.name = name
self.attrs = attrs
@ -48,12 +48,16 @@ class ServiceConnection:
@cached_property
def url(self):
"""urllib.parse.ParseResult: Returns URL for service connection."""
return urllib.parse.urlparse(self.attrs.get("uri"))
if self.attrs.get("uri"):
return urllib.parse.urlparse(self.attrs.get("uri"))
return urllib.parse.urlparse(self.attrs.get("URI"))
@cached_property
def identity(self):
"""Path: Returns Path to identity file for service connection."""
return Path(self.attrs.get("identity"))
if self.attrs.get("identity"):
return Path(self.attrs.get("identity"))
return Path(self.attrs.get("Identity"))
class PodmanConfig:
@ -62,17 +66,46 @@ class PodmanConfig:
def __init__(self, path: Optional[str] = None):
"""Read Podman configuration from users XDG_CONFIG_HOME."""
self.is_default = False
if path is None:
home = Path(xdg.BaseDirectory.xdg_config_home)
self.path = home / "containers" / "containers.conf"
home = Path(get_xdg_config_home())
self.path = home / "containers" / "podman-connections.json"
old_toml_file = home / "containers" / "containers.conf"
self.is_default = True
# this elif is only for testing purposes
elif "@@is_test@@" in path:
test_path = path.replace("@@is_test@@", '')
self.path = Path(test_path) / "podman-connections.json"
old_toml_file = Path(test_path) / "containers.conf"
self.is_default = True
else:
self.path = Path(path)
old_toml_file = None
self.attrs = {}
if self.path.exists():
with self.path.open(encoding='utf-8') as file:
try:
with open(self.path, encoding='utf-8') as file:
self.attrs = json.load(file)
except Exception:
# if the user specifies a path, it can either be a JSON file
# or a TOML file - so try TOML next
try:
with self.path.open(encoding='utf-8') as file:
buffer = file.read()
loaded_toml = toml_loads(buffer)
self.attrs.update(loaded_toml)
except Exception as e:
raise AttributeError(
"The path given is neither a JSON nor a TOML connections file"
) from e
# Read the old toml file configuration
if self.is_default and old_toml_file.exists():
with old_toml_file.open(encoding='utf-8') as file:
buffer = file.read()
self.attrs = toml_loads(buffer)
loaded_toml = toml_loads(buffer)
self.attrs.update(loaded_toml)
def __hash__(self) -> int:
return hash(tuple(self.path.name))
@ -89,15 +122,16 @@ class PodmanConfig:
@cached_property
def services(self):
"""Dict[str, ServiceConnection]: Returns list of service connections.
"""dict[str, ServiceConnection]: Returns list of service connections.
Examples:
podman_config = PodmanConfig()
address = podman_config.services["testing"]
print(f"Testing service address {address}")
"""
services: Dict[str, ServiceConnection] = {}
services: dict[str, ServiceConnection] = {}
# read the keys of the toml file first
engine = self.attrs.get("engine")
if engine:
destinations = engine.get("service_destinations")
@ -105,17 +139,35 @@ class PodmanConfig:
connection = ServiceConnection(key, attrs=destinations[key])
services[key] = connection
# read the keys of the json file next
# this will ensure that if the new json file and the old toml file
# has a connection with the same name defined, we always pick the
# json one
connection = self.attrs.get("Connection")
if connection:
destinations = connection.get("Connections")
for key in destinations:
connection = ServiceConnection(key, attrs=destinations[key])
services[key] = connection
return services
@cached_property
def active_service(self):
"""Optional[ServiceConnection]: Returns active connection."""
# read the new json file format
connection = self.attrs.get("Connection")
if connection:
active = connection.get("Default")
destinations = connection.get("Connections")
return ServiceConnection(active, attrs=destinations[active])
# if we are here, that means there was no default in the new json file
engine = self.attrs.get("engine")
if engine:
active = engine.get("active_service")
destinations = engine.get("service_destinations")
for key in destinations:
if key == active:
return ServiceConnection(key, attrs=destinations[key])
return ServiceConnection(active, attrs=destinations[active])
return None
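Note: the precedence rule above (read the old TOML destinations first, then let same-named entries from `podman-connections.json` overwrite them) is plain `dict.update` semantics. A sketch with hypothetical connection names and URIs:

```python
# Hypothetical destinations as they might come from each file
toml_destinations = {
    "prod": {"uri": "ssh://core@prod/run/podman/podman.sock"},
    "dev": {"uri": "unix:///run/user/1000/podman/podman.sock"},
}
json_connections = {
    "dev": {"URI": "tcp://localhost:8888"},  # same name: JSON wins
}

services = {}
services.update(toml_destinations)  # old containers.conf entries first
services.update(json_connections)   # podman-connections.json overrides
assert services["dev"] == {"URI": "tcp://localhost:8888"}
assert services["prod"]["uri"].startswith("ssh://")
```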


@ -3,12 +3,14 @@
import json
import logging
import shlex
from collections.abc import Iterable, Iterator, Mapping
from contextlib import suppress
from typing import Any, Dict, Iterable, Iterator, List, Mapping, Optional, Tuple, Union
from typing import Any, Optional, Union
import requests
from podman import api
from podman.api.output_utils import demux_output
from podman.domain.images import Image
from podman.domain.images_manager import ImagesManager
from podman.domain.manager import PodmanResource
@ -41,15 +43,20 @@ class Container(PodmanResource):
@property
def labels(self):
"""dict[str, str]: Returns labels associated with container."""
labels = None
with suppress(KeyError):
# Container created from ``list()`` operation
if "Labels" in self.attrs:
return self.attrs["Labels"]
return self.attrs["Config"]["Labels"]
return {}
labels = self.attrs["Labels"]
# Container created from ``get()`` operation
else:
labels = self.attrs["Config"].get("Labels", {})
return labels or {}
@property
def status(self):
"""Literal["running", "stopped", "exited", "unknown"]: Returns status of container."""
"""Literal["created", "initialized", "running", "stopped", "exited", "unknown"]:
Returns status of container."""
with suppress(KeyError):
return self.attrs["State"]["Status"]
return "unknown"
@ -92,7 +99,7 @@ class Container(PodmanResource):
Keyword Args:
author (str): Name of commit author
changes (List[str]): Instructions to apply during commit
changes (list[str]): Instructions to apply during commit
comment (str): Commit message to include with Image, overrides keyword message
conf (dict[str, Any]): Ignored.
format (str): Format of the image manifest and metadata
@ -115,7 +122,7 @@ class Container(PodmanResource):
body = response.json()
return ImagesManager(client=self.client).get(body["Id"])
def diff(self) -> List[Dict[str, int]]:
def diff(self) -> list[dict[str, int]]:
"""Report changes of a container's filesystem.
Raises:
@ -125,10 +132,11 @@ class Container(PodmanResource):
response.raise_for_status()
return response.json()
# pylint: disable=too-many-arguments,unused-argument
# pylint: disable=too-many-arguments
def exec_run(
self,
cmd: Union[str, List[str]],
cmd: Union[str, list[str]],
*,
stdout: bool = True,
stderr: bool = True,
stdin: bool = False,
@ -137,11 +145,14 @@ class Container(PodmanResource):
user=None,
detach: bool = False,
stream: bool = False,
socket: bool = False,
environment: Union[Mapping[str, str], List[str]] = None,
socket: bool = False, # pylint: disable=unused-argument
environment: Union[Mapping[str, str], list[str]] = None,
workdir: str = None,
demux: bool = False,
) -> Tuple[Optional[int], Union[Iterator[bytes], Any, Tuple[bytes, bytes]]]:
) -> tuple[
Optional[int],
Union[Iterator[Union[bytes, tuple[bytes, bytes]]], Any, tuple[bytes, bytes]],
]:
"""Run given command inside container and return results.
Args:
@ -151,28 +162,32 @@ class Container(PodmanResource):
stdin: Attach to stdin. Default: False
tty: Allocate a pseudo-TTY. Default: False
privileged: Run as privileged.
user: User to execute command as. Default: root
user: User to execute command as.
detach: If true, detach from the exec command.
Default: False
stream: Stream response data. Default: False
stream: Stream response data. Ignored if ``detach`` is ``True``. Default: False
socket: Return the connection socket to allow custom
read/write operations. Default: False
environment: A dictionary or a List[str] in
environment: A dictionary or a list[str] in
the following format ["PASSWORD=xxx"] or
{"PASSWORD": "xxx"}.
workdir: Path to working directory for this exec session
demux: Return stdout and stderr separately
Returns:
First item is the command response code
Second item is the requests response content
A tuple of (``response_code``, ``output``).
``response_code``:
The exit code of the provided command. ``None`` if ``stream``.
``output``:
If ``stream``, then a generator yielding response chunks.
If ``demux``, then a tuple of (``stdout``, ``stderr``).
Else the response content.
Raises:
NotImplementedError: method not implemented.
APIError: when service reports error
"""
# pylint: disable-msg=too-many-locals
user = user or "root"
if isinstance(environment, dict):
environment = [f"{k}={v}" for k, v in environment.items()]
data = {
@ -184,21 +199,32 @@ class Container(PodmanResource):
"Env": environment,
"Privileged": privileged,
"Tty": tty,
"User": user,
"WorkingDir": workdir,
}
if user:
data["User"] = user
stream = stream and not detach
# create the exec instance
response = self.client.post(f"/containers/{self.name}/exec", data=json.dumps(data))
response.raise_for_status()
exec_id = response.json()['Id']
# start the exec instance, this will store command output
start_resp = self.client.post(
f"/exec/{exec_id}/start", data=json.dumps({"Detach": detach, "Tty": tty})
f"/exec/{exec_id}/start", data=json.dumps({"Detach": detach, "Tty": tty}), stream=stream
)
start_resp.raise_for_status()
if stream:
return None, api.stream_frames(start_resp, demux=demux)
# get and return exec information
response = self.client.get(f"/exec/{exec_id}/json")
response.raise_for_status()
if demux:
stdout_data, stderr_data = demux_output(start_resp.content)
return response.json().get('ExitCode'), (stdout_data, stderr_data)
return response.json().get('ExitCode'), start_resp.content
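``demux_output`` is a podman-py internal; as a rough sketch of what the demultiplexing involves, assuming Docker-style stream frames (1 stream-type byte, 3 reserved bytes, then a 4-byte big-endian payload length):

```python
import struct

def demux_frames(data: bytes):
    """Split a multiplexed exec stream into (stdout, stderr).

    Assumes Docker-style frames: stream type 1 is stdout, 2 is stderr.
    Streams with no payload are reported as None.
    """
    stdout, stderr = b"", b""
    while len(data) >= 8:
        stream_type = data[0]
        (length,) = struct.unpack(">I", data[4:8])
        payload, data = data[8:8 + length], data[8 + length:]
        if stream_type == 1:
            stdout += payload
        elif stream_type == 2:
            stderr += payload
    return stdout or None, stderr or None
```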
def export(self, chunk_size: int = api.DEFAULT_CHUNK_SIZE) -> Iterator[bytes]:
@@ -217,12 +243,11 @@ class Container(PodmanResource):
response = self.client.get(f"/containers/{self.id}/export", stream=True)
response.raise_for_status()
for out in response.iter_content(chunk_size=chunk_size):
yield out
yield from response.iter_content(chunk_size=chunk_size)
def get_archive(
self, path: str, chunk_size: int = api.DEFAULT_CHUNK_SIZE
) -> Tuple[Iterable, Dict[str, Any]]:
) -> tuple[Iterable, dict[str, Any]]:
"""Download a file or folder from the container's filesystem.
Args:
@@ -240,7 +265,12 @@ class Container(PodmanResource):
stat = api.decode_header(stat)
return response.iter_content(chunk_size=chunk_size), stat
def inspect(self) -> Dict:
def init(self) -> None:
"""Initialize the container."""
response = self.client.post(f"/containers/{self.id}/init")
response.raise_for_status()
def inspect(self) -> dict:
"""Inspect a container.
Raises:
@@ -280,7 +310,7 @@ class Container(PodmanResource):
params = {
"follow": kwargs.get("follow", kwargs.get("stream", None)),
"since": api.prepare_timestamp(kwargs.get("since")),
"stderr": kwargs.get("stderr", None),
"stderr": kwargs.get("stderr", True),
"stdout": kwargs.get("stdout", True),
"tail": kwargs.get("tail"),
"timestamps": kwargs.get("timestamps"),
@@ -391,7 +421,7 @@ class Container(PodmanResource):
def stats(
self, **kwargs
) -> Union[bytes, Dict[str, Any], Iterator[bytes], Iterator[Dict[str, Any]]]:
) -> Union[bytes, dict[str, Any], Iterator[bytes], Iterator[dict[str, Any]]]:
"""Return statistics for container.
Keyword Args:
@@ -446,7 +476,7 @@ class Container(PodmanResource):
body = response.json()
raise APIError(body["cause"], response=response, explanation=body["message"])
def top(self, **kwargs) -> Union[Iterator[Dict[str, Any]], Dict[str, Any]]:
def top(self, **kwargs) -> Union[Iterator[dict[str, Any]], dict[str, Any]]:
"""Report on running processes in the container.
Keyword Args:
@@ -476,19 +506,234 @@ class Container(PodmanResource):
response = self.client.post(f"/containers/{self.id}/unpause")
response.raise_for_status()
def update(self, **kwargs):
def update(self, **kwargs) -> None:
"""Update resource configuration of the containers.
Keyword Args:
Please refer to Podman API documentation for details:
https://docs.podman.io/en/latest/_static/api.html#tag/containers/operation/ContainerUpdateLibpod
restart_policy (str): New restart policy for the container.
restart_retries (int): New amount of retries for the container's restart policy.
Only allowed if restartPolicy is set to on-failure
blkio_weight_device (tuple(str, int)): Block IO weight (relative device weight)
in the form: (device_path, weight)
blockio (dict): LinuxBlockIO for Linux cgroup 'blkio' resource management
Example:
blockio = {
"leafWeight": 0
"throttleReadBpsDevice": [{
"major": 0,
"minor": 0,
"rate": 0
}],
"throttleReadIopsDevice": [{
"major": 0,
"minor": 0,
"rate": 0
}],
"throttleWriteBpsDevice": [{
"major": 0,
"minor": 0,
"rate": 0
}],
"throttleWriteIopsDevice": [{
"major": 0,
"minor": 0,
"rate": 0
}],
"weight": 0,
"weightDevice": [{
"leafWeight": 0,
"major": 0,
"minor": 0,
"weight": 0
}],
}
cpu (dict): LinuxCPU for Linux cgroup 'cpu' resource management
Example:
cpu = {
"burst": 0,
"cpus": "string",
"idle": 0,
"mems": "string",
"period": 0
"quota": 0,
"realtimePeriod": 0,
"realtimeRuntime": 0,
"shares": 0
}
device_read_bps (list(dict)): Limit read rate (bytes per second) from a device,
in the form: [{"Path": "string", "Rate": 0}]
device_read_iops (list(dict)): Limit read rate (IO operations per second) from a device,
in the form: [{"Path": "string", "Rate": 0}]
device_write_bps (list(dict)): Limit write rate (bytes per second) to a device,
in the form: [{"Path": "string", "Rate": 0}]
device_write_iops (list(dict)): Limit write rate (IO operations per second) to a device,
in the form: [{"Path": "string", "Rate": 0}]
devices (list(dict)): Devices configures the device allowlist.
Example:
devices = [{
access: "string"
allow: 0,
major: 0,
minor: 0,
type: "string"
}]
health_cmd (str): set a healthcheck command for the container ('None' disables the
existing healthcheck)
health_interval (str): set an interval for the healthcheck (a value of "disable" results
in no automatic timer setup). Changing this setting resets the timer. (default "30s")
health_log_destination (str): set the destination of the HealthCheck log. Directory
path, "local", or "events_logger" ("local" uses the container state file). Warning:
changing this setting may cause the loss of previous logs. (default "local")
health_max_log_count (int): set maximum number of attempts in the HealthCheck log file.
('0' value means an infinite number of attempts in the log file) (default 5)
health_max_logs_size (int): set maximum length in characters of stored HealthCheck log.
('0' value means an infinite log length) (default 500)
health_on_failure (str): action to take once the container turns unhealthy
(default "none")
health_retries (int): the number of retries allowed before a healthcheck is considered
to be unhealthy (default 3)
health_start_period (str): the initialization time needed for a container to bootstrap
(default "0s")
health_startup_cmd (str): Set a startup healthcheck command for the container
health_startup_interval (str): Set an interval for the startup healthcheck. Changing
this setting resets the timer, depending on the state of the container.
(default "30s")
health_startup_retries (int): Set the maximum number of retries before the startup
healthcheck will restart the container
health_startup_success (int): Set the number of consecutive successes before the
startup healthcheck is marked as successful and the normal healthcheck begins
(0 indicates any success will start the regular healthcheck)
health_startup_timeout (str): Set the maximum amount of time that the startup
healthcheck may take before it is considered failed (default "30s")
health_timeout (str): the maximum time allowed to complete the healthcheck before an
interval is considered failed (default "30s")
no_healthcheck (bool): Disable healthchecks on container
hugepage_limits (list(dict)): Hugetlb limits (in bytes).
Default to reservation limits if supported.
Example:
hugepage_limits = [{"limit": 0, "pageSize": "string"}]
memory (dict): LinuxMemory for Linux cgroup 'memory' resource management
Example:
memory = {
"checkBeforeUpdate": True,
"disableOOMKiller": True,
"kernel": 0,
"kernelTCP": 0,
"limit": 0,
"reservation": 0,
"swap": 0,
"swappiness": 0,
"useHierarchy": True,
}
network (dict): LinuxNetwork identification and priority configuration
Example:
network = {
"classID": 0,
"priorities": {
"name": "string",
"priority": 0
}
}
pids (dict): LinuxPids for Linux cgroup 'pids' resource management (Linux 4.3)
Example:
pids = {
"limit": 0
}
rdma (dict): Rdma resource restriction configuration. Limits are a set of key value
pairs that define RDMA resource limits, where the key is device name and value
is resource limits.
Example:
rdma = {
"property1": {
"hcaHandles": 0
"hcaObjects": 0
},
"property2": {
"hcaHandles": 0
"hcaObjects": 0
},
...
}
unified (dict): Unified resources.
Example:
unified = {
"property1": "value1",
"property2": "value2",
...
}
Raises:
NotImplementedError: Podman service unsupported operation.
"""
raise NotImplementedError("Container.update() is not supported by Podman service.")
data = {}
params = {}
health_commands_data = [
"health_cmd",
"health_interval",
"health_log_destination",
"health_max_log_count",
"health_max_logs_size",
"health_on_failure",
"health_retries",
"health_start_period",
"health_startup_cmd",
"health_startup_interval",
"health_startup_retries",
"health_startup_success",
"health_startup_timeout",
"health_timeout",
]
# the healthcheck section of parameters accepted can be either no_healthcheck or a series
# of healthcheck parameters
if kwargs.get("no_healthcheck"):
for command in health_commands_data:
if command in kwargs:
raise ValueError(f"Cannot set {command} when no_healthcheck is True")
data["no_healthcheck"] = kwargs.get("no_healthcheck")
else:
for hc in health_commands_data:
if hc in kwargs:
data[hc] = kwargs.get(hc)
data_mapping = {
"BlkIOWeightDevice": "blkio_weight_device",
"blockio": "blockIO",
"cpu": "cpu",
"device_read_bps": "DeviceReadBPs",
"device_read_iops": "DeviceReadIOps",
"device_write_bps": "DeviceWriteBPs",
"device_write_iops": "DeviceWriteIOps",
"devices": "devices",
"hugepage_limits": "hugepageLimits",
"memory": "memory",
"network": "network",
"pids": "pids",
"rdma": "rdma",
"unified": "unified",
}
for kwarg_key, data_key in data_mapping.items():
value = kwargs.get(kwarg_key)
if value is not None:
data[data_key] = value
if kwargs.get("restart_policy"):
params["restartPolicy"] = kwargs.get("restart_policy")
if kwargs.get("restart_retries"):
params["restartRetries"] = kwargs.get("restart_retries")
response = self.client.post(
f"/containers/{self.id}/update", params=params, data=json.dumps(data)
)
response.raise_for_status()
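The routing of keyword arguments into query parameters versus the JSON body can be sketched in a simplified, standalone form (hypothetical helper; the real code also renames body keys via ``data_mapping``):

```python
def split_update_kwargs(kwargs):
    """Simplified sketch: route update() kwargs to query params vs. JSON body.

    Mirrors the no_healthcheck/health_* conflict check and the restart-policy
    query parameters; everything else lands in the body unchanged.
    """
    if kwargs.get("no_healthcheck") and any(k.startswith("health_") for k in kwargs):
        raise ValueError("Cannot set health_* options when no_healthcheck is True")
    params, data = {}, {}
    for key, value in kwargs.items():
        if value is None:
            continue
        if key == "restart_policy":
            params["restartPolicy"] = value
        elif key == "restart_retries":
            params["restartRetries"] = value
        else:
            data[key] = value
    return params, data
```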
def wait(self, **kwargs) -> int:
"""Block until the container enters given state.
Keyword Args:
condition (Union[str, List[str]]): Container state on which to release.
condition (Union[str, list[str]]): Container state on which to release.
One or more of: "configured", "created", "running", "stopped",
"paused", "exited", "removing", "stopping".
interval (int): Time interval to wait before polling for completion.
@@ -5,7 +5,8 @@ import copy
import logging
import re
from contextlib import suppress
from typing import Any, Dict, List, MutableMapping, Union
from typing import Any, Union
from collections.abc import MutableMapping
from podman import api
from podman.domain.containers import Container
@@ -16,12 +17,17 @@ from podman.errors import ImageNotFound
logger = logging.getLogger("podman.containers")
NAMED_VOLUME_PATTERN = re.compile(r"[a-zA-Z0-9][a-zA-Z0-9_.-]*")
class CreateMixin: # pylint: disable=too-few-public-methods
"""Class providing create method for ContainersManager."""
def create(
self, image: Union[Image, str], command: Union[str, List[str], None] = None, **kwargs
self,
image: Union[Image, str],
command: Union[str, list[str], None] = None,
**kwargs,
) -> Container:
"""Create a container.
@@ -32,12 +38,12 @@ class CreateMixin: # pylint: disable=too-few-public-methods
Keyword Args:
auto_remove (bool): Enable auto-removal of the container on daemon side when the
container's process exits.
blkio_weight_device (Dict[str, Any]): Block IO weight (relative device weight)
blkio_weight_device (dict[str, Any]): Block IO weight (relative device weight)
in the form of: [{"Path": "device_path", "Weight": weight}].
blkio_weight (int): Block IO weight (relative weight), accepts a weight value
between 10 and 1000.
cap_add (List[str]): Add kernel capabilities. For example: ["SYS_ADMIN", "MKNOD"]
cap_drop (List[str]): Drop kernel capabilities.
cap_add (list[str]): Add kernel capabilities. For example: ["SYS_ADMIN", "MKNOD"]
cap_drop (list[str]): Drop kernel capabilities.
cgroup_parent (str): Override the default parent cgroup.
cpu_count (int): Number of usable CPUs (Windows only).
cpu_percent (int): Usable percentage of the available CPUs (Windows only).
@@ -50,32 +56,32 @@ class CreateMixin: # pylint: disable=too-few-public-methods
cpuset_mems (str): Memory nodes (MEMs) in which to allow execution (0-3, 0,1).
Only effective on NUMA systems.
detach (bool): Run container in the background and return a Container object.
device_cgroup_rules (List[str]): A list of cgroup rules to apply to the container.
device_cgroup_rules (list[str]): A list of cgroup rules to apply to the container.
device_read_bps: Limit read rate (bytes per second) from a device in the form of:
`[{"Path": "device_path", "Rate": rate}]`
device_read_iops: Limit read rate (IO per second) from a device.
device_write_bps: Limit write rate (bytes per second) from a device.
device_write_iops: Limit write rate (IO per second) from a device.
devices (List[str]): Expose host devices to the container, as a List[str] in the form
devices (list[str]): Expose host devices to the container, as a list[str] in the form
<path_on_host>:<path_in_container>:<cgroup_permissions>.
For example:
/dev/sda:/dev/xvda:rwm allows the container to have read-write access to the
host's /dev/sda via a node named /dev/xvda inside the container.
dns (List[str]): Set custom DNS servers.
dns_opt (List[str]): Additional options to be added to the container's resolv.conf file.
dns_search (List[str]): DNS search domains.
domainname (Union[str, List[str]]): Set custom DNS search domains.
entrypoint (Union[str, List[str]]): The entrypoint for the container.
environment (Union[Dict[str, str], List[str]): Environment variables to set inside
the container, as a dictionary or a List[str] in the format
dns (list[str]): Set custom DNS servers.
dns_opt (list[str]): Additional options to be added to the container's resolv.conf file.
dns_search (list[str]): DNS search domains.
domainname (Union[str, list[str]]): Set custom DNS search domains.
entrypoint (Union[str, list[str]]): The entrypoint for the container.
environment (Union[dict[str, str], list[str]]): Environment variables to set inside
the container, as a dictionary or a list[str] in the format
["SOMEVARIABLE=xxx", "SOMEOTHERVARIABLE=xyz"].
extra_hosts (Dict[str, str]): Additional hostnames to resolve inside the container,
extra_hosts (dict[str, str]): Additional hostnames to resolve inside the container,
as a mapping of hostname to IP address.
group_add (List[str]): List of additional group names and/or IDs that the container
group_add (list[str]): List of additional group names and/or IDs that the container
process will run as.
healthcheck (Dict[str,Any]): Specify a test to perform to check that the
healthcheck (dict[str,Any]): Specify a test to perform to check that the
container is healthy.
health_check_on_failure_action (int): Specify an action if a healthcheck fails.
hostname (str): Optional hostname for the container.
@@ -84,14 +90,14 @@ class CreateMixin: # pylint: disable=too-few-public-methods
ipc_mode (str): Set the IPC mode for the container.
isolation (str): Isolation technology to use. Default: `None`.
kernel_memory (int or str): Kernel memory limit
labels (Union[Dict[str, str], List[str]): A dictionary of name-value labels (e.g.
labels (Union[dict[str, str], list[str]]): A dictionary of name-value labels (e.g.
{"label1": "value1", "label2": "value2"}) or a list of names of labels to set
with empty values (e.g. ["label1", "label2"])
links (Optional[Dict[str, str]]): Mapping of links using the {'container': 'alias'}
links (Optional[dict[str, str]]): Mapping of links using the {'container': 'alias'}
format. The alias is optional. Containers declared in this dict will be linked to
the new container using the provided alias. Default: None.
log_config (LogConfig): Logging configuration.
lxc_config (Dict[str, str]): LXC config.
lxc_config (dict[str, str]): LXC config.
mac_address (str): MAC address to assign to the container.
mem_limit (Union[int, str]): Memory limit. Accepts float values (which represent the
memory limit of the created container in bytes) or a string with a units
@@ -102,7 +108,7 @@ class CreateMixin: # pylint: disable=too-few-public-methods
between 0 and 100.
memswap_limit (Union[int, str]): Maximum amount of memory + swap a container is allowed
to consume.
mounts (List[Mount]): Specification for mounts to be added to the container. More
mounts (list[Mount]): Specification for mounts to be added to the container. More
powerful alternative to volumes. Each item in the list is expected to be a
Mount object.
For example:
@@ -148,7 +154,7 @@ class CreateMixin: # pylint: disable=too-few-public-methods
]
name (str): The name for this container.
nano_cpus (int): CPU quota in units of 1e-9 CPUs.
networks (Dict[str, Dict[str, Union[str, List[str]]):
networks (dict[str, dict[str, Union[str, list[str]]]]):
Networks which will be connected to container during container creation
Values of the network configuration can be :
@@ -163,6 +169,7 @@ class CreateMixin: # pylint: disable=too-few-public-methods
- container:<name|id>: Reuse another container's network
stack.
- host: Use the host network stack.
- ns:<path>: User defined netns path.
Incompatible with network.
oom_kill_disable (bool): Whether to disable OOM killer.
@@ -173,7 +180,23 @@ class CreateMixin: # pylint: disable=too-few-public-methods
pids_limit (int): Tune a container's pids limit. Set -1 for unlimited.
platform (str): Platform in the format os[/arch[/variant]]. Only used if the method
needs to pull the requested image.
ports (Dict[str, Union[int, Tuple[str, int], List[int], Dict[str, Union[int, Tuple[str, int], List[int]]]]]): Ports to bind inside the container.
ports (
dict[
Union[int, str],
Union[
int,
Tuple[str, int],
list[int],
dict[
str,
Union[
int,
Tuple[str, int],
list[int]
]
]
]
]): Ports to bind inside the container.
The keys of the dictionary are the ports to bind inside the container, either as an
integer or a string in the form port/protocol, where the protocol is either
@@ -223,7 +246,7 @@ class CreateMixin: # pylint: disable=too-few-public-methods
read_write_tmpfs (bool): Mount temporary file systems as read write,
in case of read_only options set to True. Default: False
remove (bool): Remove the container when it has finished running. Default: False.
restart_policy (Dict[str, Union[str, int]]): Restart the container when it exits.
restart_policy (dict[str, Union[str, int]]): Restart the container when it exits.
Configured as a dictionary with keys:
- Name: One of on-failure, or always.
@@ -231,7 +254,7 @@ class CreateMixin: # pylint: disable=too-few-public-methods
For example: {"Name": "on-failure", "MaximumRetryCount": 5}
runtime (str): Runtime to use with this container.
secrets (List[Union[str, Secret, Dict[str, Union[str, int]]]]): Secrets to
secrets (list[Union[str, Secret, dict[str, Union[str, int]]]]): Secrets to
mount to this container.
For example:
@@ -265,42 +288,44 @@ class CreateMixin: # pylint: disable=too-few-public-methods
},
]
secret_env (Dict[str, str]): Secrets to add as environment variables available in the
secret_env (dict[str, str]): Secrets to add as environment variables available in the
container.
For example: {"VARIABLE1": "NameOfSecret", "VARIABLE2": "NameOfAnotherSecret"}
security_opt (List[str]): A List[str]ing values to customize labels for MLS systems,
security_opt (list[str]): A list of string values to customize labels for MLS systems,
such as SELinux.
shm_size (Union[str, int]): Size of /dev/shm (e.g. 1G).
stdin_open (bool): Keep STDIN open even if not attached.
stdout (bool): Return logs from STDOUT when detach=False. Default: True.
stderr (bool): Return logs from STDERR when detach=False. Default: False.
stop_signal (str): The stop signal to use to stop the container (e.g. SIGINT).
storage_opt (Dict[str, str]): Storage driver options per container as a
storage_opt (dict[str, str]): Storage driver options per container as a
key-value mapping.
stream (bool): If true and detach is false, return a log generator instead of a string.
Ignored if detach is true. Default: False.
sysctls (Dict[str, str]): Kernel parameters to set in the container.
tmpfs (Dict[str, str]): Temporary filesystems to mount, as a dictionary mapping a
sysctls (dict[str, str]): Kernel parameters to set in the container.
tmpfs (dict[str, str]): Temporary filesystems to mount, as a dictionary mapping a
path inside the container to options for that path.
For example: {'/mnt/vol2': '', '/mnt/vol1': 'size=3G,uid=1000'}
tty (bool): Allocate a pseudo-TTY.
ulimits (List[Ulimit]): Ulimits to set inside the container.
ulimits (list[Ulimit]): Ulimits to set inside the container.
use_config_proxy (bool): If True, and if the docker client configuration
file (~/.config/containers/config.json by default) contains a proxy configuration,
the corresponding environment variables will be set in the container being built.
user (Union[str, int]): Username or UID to run commands as inside the container.
userns_mode (str): Sets the user namespace mode for the container when user namespace
remapping option is enabled. Supported values documented `here <https://docs.podman.io/en/latest/markdown/options/userns.container.html#userns-mode>`_
remapping option is enabled. Supported values documented
`here <https://docs.podman.io/en/latest/markdown/options/userns.container.html#userns-mode>`_
uts_mode (str): Sets the UTS namespace mode for the container.
`These <https://docs.podman.io/en/latest/markdown/options/uts.container.html>`_ are the supported values.
`These <https://docs.podman.io/en/latest/markdown/options/uts.container.html>`_
are the supported values.
version (str): The version of the API to use. Set to auto to automatically detect
the server's version. Default: 3.0.0
volume_driver (str): The name of a volume driver/plugin.
volumes (Dict[str, Dict[str, Union[str, list]]]): A dictionary to configure
volumes (dict[str, dict[str, Union[str, list]]]): A dictionary to configure
volumes mounted inside the container.
The key is either the host path or a volume name, and the value is
a dictionary with the keys:
@@ -328,8 +353,9 @@ class CreateMixin: # pylint: disable=too-few-public-methods
}
volumes_from (List[str]): List of container names or IDs to get volumes from.
volumes_from (list[str]): List of container names or IDs to get volumes from.
working_dir (str): Path to the working directory.
workdir (str): Alias of working_dir - Path to the working directory.
Returns:
A Container object.
@@ -340,6 +366,8 @@ class CreateMixin: # pylint: disable=too-few-public-methods
"""
if isinstance(image, Image):
image = image.id
if isinstance(command, str):
command = [command]
payload = {"image": image, "command": command}
payload.update(kwargs)
@@ -347,7 +375,9 @@ class CreateMixin: # pylint: disable=too-few-public-methods
payload = api.prepare_body(payload)
response = self.client.post(
"/containers/create", headers={"content-type": "application/json"}, data=payload
"/containers/create",
headers={"content-type": "application/json"},
data=payload,
)
response.raise_for_status(not_found=ImageNotFound)
@@ -355,9 +385,51 @@ class CreateMixin: # pylint: disable=too-few-public-methods
return self.get(container_id)
@staticmethod
def _convert_env_list_to_dict(env_list):
"""Convert a list of environment variables to a dictionary.
Args:
env_list (List[str]): List of environment variables in the format ["KEY=value"]
Returns:
Dict[str, str]: Dictionary of environment variables
Raises:
ValueError: If any environment variable is not in the correct format
"""
if not isinstance(env_list, list):
raise TypeError(f"Expected list, got {type(env_list).__name__}")
env_dict = {}
for env_var in env_list:
if not isinstance(env_var, str):
raise TypeError(
f"Environment variable must be a string, "
f"got {type(env_var).__name__}: {repr(env_var)}"
)
# Handle empty strings
if not env_var.strip():
raise ValueError("Environment variable cannot be empty")
if "=" not in env_var:
raise ValueError(
f"Environment variable '{env_var}' is not in the correct format. "
"Expected format: 'KEY=value'"
)
key, value = env_var.split("=", 1) # Split on first '=' only
# Validate key is not empty
if not key.strip():
raise ValueError(f"Environment variable has empty key: '{env_var}'")
env_dict[key] = value
return env_dict
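A standalone sketch of the helper's contract (note the split on the first ``=`` only, so values may themselves contain ``=``):

```python
def convert_env_list_to_dict(env_list):
    """Standalone sketch mirroring CreateMixin._convert_env_list_to_dict."""
    if not isinstance(env_list, list):
        raise TypeError(f"Expected list, got {type(env_list).__name__}")
    env_dict = {}
    for env_var in env_list:
        if not isinstance(env_var, str):
            raise TypeError(f"Environment variable must be a string: {env_var!r}")
        if not env_var.strip():
            raise ValueError("Environment variable cannot be empty")
        if "=" not in env_var:
            raise ValueError(f"'{env_var}' is not in 'KEY=value' format")
        key, value = env_var.split("=", 1)  # split on the first '=' only
        if not key.strip():
            raise ValueError(f"Environment variable has empty key: '{env_var}'")
        env_dict[key] = value
    return env_dict
```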
# pylint: disable=too-many-locals,too-many-statements,too-many-branches
@staticmethod
def _render_payload(kwargs: MutableMapping[str, Any]) -> Dict[str, Any]:
def _render_payload(kwargs: MutableMapping[str, Any]) -> dict[str, Any]:
"""Map create/run kwargs into body parameters."""
args = copy.copy(kwargs)
@@ -382,6 +454,23 @@ class CreateMixin: # pylint: disable=too-few-public-methods
with suppress(KeyError):
del args[key]
# Handle environment variables
environment = args.pop("environment", None)
if environment is not None:
if isinstance(environment, list):
try:
environment = CreateMixin._convert_env_list_to_dict(environment)
except ValueError as e:
raise ValueError(
"Failed to convert environment variables list to dictionary. "
f"Error: {str(e)}"
) from e
elif not isinstance(environment, dict):
raise TypeError(
"Environment variables must be provided as either a dictionary "
"or a list of strings in the format ['KEY=value']"
)
# These keywords are not supported for various reasons.
unsupported_keys = set(args.keys()).intersection(
(
@@ -408,6 +497,13 @@ class CreateMixin: # pylint: disable=too-few-public-methods
def pop(k):
return args.pop(k, None)
def normalize_nsmode(
mode: Union[str, MutableMapping[str, str]],
) -> dict[str, str]:
if isinstance(mode, dict):
return mode
return {"nsmode": mode}
def to_bytes(size: Union[int, str, None]) -> Union[int, None]:
"""
Converts str or int to bytes.
@@ -431,9 +527,9 @@ class CreateMixin: # pylint: disable=too-few-public-methods
try:
return int(size)
except ValueError as bad_size:
mapping = {'b': 0, 'k': 1, 'm': 2, 'g': 3}
mapping_regex = ''.join(mapping.keys())
search = re.search(rf'^(\d+)([{mapping_regex}])$', size.lower())
mapping = {"b": 0, "k": 1, "m": 2, "g": 3}
mapping_regex = "".join(mapping.keys())
search = re.search(rf"^(\d+)([{mapping_regex}])$", size.lower())
if search:
return int(search.group(1)) * (1024 ** mapping[search.group(2)])
raise TypeError(
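The size parser can be sketched standalone (same mapping and regex as above):

```python
import re

def to_bytes(size):
    """Sketch of the size parser: ints and numeric strings pass through;
    'b'/'k'/'m'/'g' suffixes scale by powers of 1024."""
    if size is None or isinstance(size, int):
        return size
    try:
        return int(size)
    except ValueError:
        mapping = {"b": 0, "k": 1, "m": 2, "g": 3}
        search = re.search(r"^(\d+)([bkmg])$", size.lower())
        if search:
            return int(search.group(1)) * (1024 ** mapping[search.group(2)])
        raise TypeError(f"Invalid size string: {size!r}") from None
```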
@@ -458,11 +554,11 @@ class CreateMixin: # pylint: disable=too-few-public-methods
"conmon_pid_file": pop("conmon_pid_file"), # TODO document, podman only
"containerCreateCommand": pop("containerCreateCommand"), # TODO document, podman only
"devices": [],
"dns_options": pop("dns_opt"),
"dns_option": pop("dns_opt"),
"dns_search": pop("dns_search"),
"dns_server": pop("dns"),
"entrypoint": pop("entrypoint"),
"env": pop("environment"),
"env": environment,
"env_host": pop("env_host"), # TODO document, podman only
"expose": {},
"groups": pop("group_add"),
@@ -526,7 +622,7 @@ class CreateMixin: # pylint: disable=too-few-public-methods
"version": pop("version"),
"volumes": [],
"volumes_from": pop("volumes_from"),
"work_dir": pop("working_dir"),
"work_dir": pop("workdir") or pop("working_dir"),
}
for device in args.pop("devices", []):
@@ -549,11 +645,12 @@ class CreateMixin: # pylint: disable=too-few-public-methods
args.pop("log_config")
for item in args.pop("mounts", []):
normalized_item = {key.lower(): value for key, value in item.items()}
mount_point = {
"destination": item.get("target"),
"destination": normalized_item.get("target"),
"options": [],
"source": item.get("source"),
"type": item.get("type"),
"source": normalized_item.get("source"),
"type": normalized_item.get("type"),
}
# some names are different for podman-py vs REST API due to compatibility with docker
@@ -566,12 +663,13 @@ class CreateMixin: # pylint: disable=too-few-public-methods
regular_options = ["consistency", "mode", "size"]
for k, v in item.items():
option_name = names_dict.get(k, k)
if k in bool_options and v is True:
_k = k.lower()
option_name = names_dict.get(_k, _k)
if _k in bool_options and v is True:
options.append(option_name)
elif k in regular_options:
options.append(f'{option_name}={v}')
elif k in simple_options:
elif _k in regular_options:
options.append(f"{option_name}={v}")
elif _k in simple_options:
options.append(v)
mount_point["options"] = options
@@ -620,10 +718,15 @@ class CreateMixin: # pylint: disable=too-few-public-methods
return result
for container, host in args.pop("ports", {}).items():
if "/" in container:
container_port, protocol = container.split("/")
# avoid redefinition of the loop variable, then ensure it's a string
str_container = container
if isinstance(str_container, int):
str_container = str(str_container)
if "/" in str_container:
container_port, protocol = str_container.split("/")
else:
container_port, protocol = container, "tcp"
container_port, protocol = str_container, "tcp"
port_map_list = parse_host_port(container_port, protocol, host)
params["portmappings"].extend(port_map_list)
@@ -657,27 +760,42 @@ class CreateMixin: # pylint: disable=too-few-public-methods
}
for item in args.pop("ulimits", []):
params["r_limits"].append({
"type": item["Name"],
"hard": item["Hard"],
"soft": item["Soft"],
})
params["r_limits"].append(
{
"type": item["Name"],
"hard": item["Hard"],
"soft": item["Soft"],
}
)
for item in args.pop("volumes", {}).items():
key, value = item
extended_mode = value.get('extended_mode', [])
extended_mode = value.get("extended_mode", [])
if not isinstance(extended_mode, list):
raise ValueError("'extended_mode' value should be a list")
options = extended_mode
mode = value.get('mode')
mode = value.get("mode")
if mode is not None:
if not isinstance(mode, str):
raise ValueError("'mode' value should be a str")
options.append(mode)
volume = {"Name": key, "Dest": value["bind"], "Options": options}
params["volumes"].append(volume)
# The Podman API only supports named volumes through the ``volume`` parameter. Directory
# mounting needs to happen through the ``mounts`` parameter. Luckily the translation
# isn't too complicated so we can just do it for the user if we suspect that the key
# isn't a named volume.
if NAMED_VOLUME_PATTERN.match(key):
volume = {"Name": key, "Dest": value["bind"], "Options": options}
params["volumes"].append(volume)
else:
mount_point = {
"destination": value["bind"],
"options": options,
"source": key,
"type": "bind",
}
params["mounts"].append(mount_point)
for item in args.pop("secrets", []):
if isinstance(item, Secret):
@@ -696,22 +814,27 @@ class CreateMixin: # pylint: disable=too-few-public-methods
params["secret_env"] = args.pop("secret_env", {})
if "cgroupns" in args:
params["cgroupns"] = {"nsmode": args.pop("cgroupns")}
params["cgroupns"] = normalize_nsmode(args.pop("cgroupns"))
if "ipc_mode" in args:
params["ipcns"] = {"nsmode": args.pop("ipc_mode")}
params["ipcns"] = normalize_nsmode(args.pop("ipc_mode"))
if "network_mode" in args:
params["netns"] = {"nsmode": args.pop("network_mode")}
network_mode = args.pop("network_mode")
details = network_mode.split(":")
if len(details) == 2 and details[0] == "ns":
params["netns"] = {"nsmode": "path", "value": details[1]}
else:
params["netns"] = {"nsmode": network_mode}
if "pid_mode" in args:
params["pidns"] = {"nsmode": args.pop("pid_mode")}
params["pidns"] = normalize_nsmode(args.pop("pid_mode"))
if "userns_mode" in args:
params["userns"] = {"nsmode": args.pop("userns_mode")}
params["userns"] = normalize_nsmode(args.pop("userns_mode"))
if "uts_mode" in args:
params["utsns"] = {"nsmode": args.pop("uts_mode")}
params["utsns"] = normalize_nsmode(args.pop("uts_mode"))
if len(args) > 0:
raise TypeError(
@@ -2,7 +2,8 @@
import logging
import urllib
from typing import Any, Dict, List, Mapping, Union
from collections.abc import Mapping
from typing import Any, Union
from podman import api
from podman.domain.containers import Container
@@ -26,11 +27,14 @@ class ContainersManager(RunMixin, CreateMixin, Manager):
response = self.client.get(f"/containers/{key}/exists")
return response.ok
def get(self, key: str) -> Container:
def get(self, key: str, **kwargs) -> Container:
"""Get container by name or id.
Args:
container_id: Container name or id.
key: Container name or id.
Keyword Args:
compatible (bool): Use Docker compatibility endpoint
Returns:
A `Container` object corresponding to `key`.
@@ -39,12 +43,14 @@ class ContainersManager(RunMixin, CreateMixin, Manager):
NotFound: when Container does not exist
APIError: when an error return by service
"""
compatible = kwargs.get("compatible", False)
container_id = urllib.parse.quote_plus(key)
response = self.client.get(f"/containers/{container_id}/json")
response = self.client.get(f"/containers/{container_id}/json", compatible=compatible)
response.raise_for_status()
return self.prepare_model(attrs=response.json())
def list(self, **kwargs) -> List[Container]:
def list(self, **kwargs) -> list[Container]:
"""Report on containers.
Keyword Args:
@@ -57,7 +63,7 @@ class ContainersManager(RunMixin, CreateMixin, Manager):
- exited (int): Only containers with specified exit code
- status (str): One of restarting, running, paused, exited
- label (Union[str, List[str]]): Format either "key", "key=value" or a list of such.
- label (Union[str, list[str]]): Format either "key", "key=value" or a list of such.
- id (str): The id of the container.
- name (str): The name of the container.
- ancestor (str): Filter by container ancestor. Format of
@@ -66,12 +72,26 @@ class ContainersManager(RunMixin, CreateMixin, Manager):
Give the container name or id.
- since (str): Only containers created after a particular container.
Give container name or id.
sparse: Ignored
sparse: If False, return basic container information without additional
inspection requests. This improves performance when listing many containers
but might provide less detail. You can call Container.reload() on individual
containers later to retrieve complete attributes. Default: True.
When Docker compatibility is enabled with `compatible=True`: Default: False.
ignore_removed: If True, ignore failures due to missing containers.
Raises:
APIError: when service returns an error
"""
compatible = kwargs.get("compatible", False)
# Set sparse default based on mode:
# Libpod behavior: default is sparse=True (faster, requires reload for full details)
# Docker behavior: default is sparse=False (full details immediately, compatible)
if "sparse" in kwargs:
sparse = kwargs["sparse"]
else:
sparse = not compatible # True for libpod, False for compat
params = {
"all": kwargs.get("all"),
"filters": kwargs.get("filters", {}),
@@ -85,22 +105,33 @@ class ContainersManager(RunMixin, CreateMixin, Manager):
# filters formatted last because some kwargs may need to be mapped into filters
params["filters"] = api.prepare_filters(params["filters"])
response = self.client.get("/containers/json", params=params)
response = self.client.get("/containers/json", params=params, compatible=compatible)
response.raise_for_status()
return [self.prepare_model(attrs=i) for i in response.json()]
containers: list[Container] = [self.prepare_model(attrs=i) for i in response.json()]
def prune(self, filters: Mapping[str, str] = None) -> Dict[str, Any]:
# If sparse is False, reload each container to get full details
if not sparse:
for container in containers:
try:
container.reload(compatible=compatible)
except APIError:
# Skip containers that might have been removed
pass
return containers
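The `sparse` default introduced in this hunk flips with the endpoint in use. A standalone sketch of the selection rule (no Podman connection needed; `resolve_sparse` is an illustrative name, not part of the library):

```python
def resolve_sparse(kwargs: dict) -> bool:
    """Mirror the default rule above: an explicit 'sparse' wins;
    otherwise libpod defaults to a sparse listing and the
    Docker-compat endpoint defaults to full details."""
    compatible = kwargs.get("compatible", False)
    if "sparse" in kwargs:
        return kwargs["sparse"]
    return not compatible  # True for libpod, False for compat

# libpod mode: fast, sparse listing by default
assert resolve_sparse({}) is True
# Docker-compat mode: full details by default
assert resolve_sparse({"compatible": True}) is False
# an explicit request always wins
assert resolve_sparse({"compatible": True, "sparse": True}) is True
```

When sparse listing is kept, `Container.reload()` can be called per container to fetch the missing attributes later.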
def prune(self, filters: Mapping[str, str] = None) -> dict[str, Any]:
"""Delete stopped containers.
Args:
filters: Criteria for determining containers to remove. Available keys are:
- until (str): Delete containers before this time
- label (List[str]): Labels associated with containers
- label (list[str]): Labels associated with containers
Returns:
Keys:
- ContainersDeleted (List[str]): Identifiers of deleted containers.
- ContainersDeleted (list[str]): Identifiers of deleted containers.
- SpaceReclaimed (int): Amount of disk space reclaimed in bytes.
Raises:
@@ -119,7 +150,7 @@ class ContainersManager(RunMixin, CreateMixin, Manager):
explanation=f"""Failed to prune container '{entry["Id"]}'""",
)
results["ContainersDeleted"].append(entry["Id"])
results["ContainersDeleted"].append(entry["Id"]) # type: ignore[attr-defined]
results["SpaceReclaimed"] += entry["Size"]
return results
@@ -139,10 +170,8 @@ class ContainersManager(RunMixin, CreateMixin, Manager):
if isinstance(container_id, Container):
container_id = container_id.id
params = {
"v": kwargs.get("v"),
"force": kwargs.get("force"),
}
# v is used for the compat endpoint while volumes is used for the libpod endpoint
params = {"v": kwargs.get("v"), "force": kwargs.get("force"), "volumes": kwargs.get("v")}
response = self.client.delete(f"/containers/{container_id}", params=params)
response.raise_for_status()

View File

@@ -1,8 +1,10 @@
"""Mixin to provide Container run() method."""
import logging
import threading
from contextlib import suppress
from typing import Generator, Iterator, List, Union
from typing import Union
from collections.abc import Generator, Iterator
from podman.domain.containers import Container
from podman.domain.images import Image
@@ -17,7 +19,8 @@ class RunMixin: # pylint: disable=too-few-public-methods
def run(
self,
image: Union[str, Image],
command: Union[str, List[str], None] = None,
command: Union[str, list[str], None] = None,
*,
stdout=True,
stderr=False,
remove: bool = False,
@@ -28,17 +31,27 @@ class RunMixin: # pylint: disable=too-few-public-methods
By default, run() will wait for the container to finish and return its logs.
If detach=True, run() will start the container and return a Container object rather
than logs.
than logs. In this case, if remove=True, run() will monitor and remove the
container after it finishes running; the logs will be lost in this case.
Args:
image: Image to run.
command: Command to run in the container.
stdout: Include stdout. Default: True.
stderr: Include stderr. Default: False.
remove: Delete container when the container's processes exit. Default: False.
remove: Delete container on the client side when the container's processes exit.
The `auto_remove` flag is also available to manage the removal on the daemon
side. Default: False.
Keyword Args:
- See the create() method for keyword arguments.
- These args are directly used to pull an image when the image is not found.
auth_config (Mapping[str, str]): Override the credentials that are found in the
config for this request. auth_config should contain the username and password
keys to be valid.
platform (str): Platform in the format os[/arch[/variant]]
policy (str): Pull policy. "missing" (default), "always", "never", "newer"
- See the create() method for other keyword arguments.
Returns:
- When detach is True, return a Container
@@ -60,14 +73,30 @@ class RunMixin: # pylint: disable=too-few-public-methods
try:
container = self.create(image=image, command=command, **kwargs)
except ImageNotFound:
self.client.images.pull(image, platform=kwargs.get("platform"))
self.podman_client.images.pull(
image,
auth_config=kwargs.get("auth_config"),
platform=kwargs.get("platform"),
policy=kwargs.get("policy", "missing"),
)
container = self.create(image=image, command=command, **kwargs)
container.start()
container.wait(condition=["running", "exited"])
container.reload()
def remove_container(container_object: Container) -> None:
"""
Wait for the container to finish and remove it.
Args:
container_object: Container object
"""
container_object.wait() # Wait for the container to finish
container_object.remove() # Remove the container
if kwargs.get("detach", False):
if remove:
# Start a background thread to remove the container after finishing
threading.Thread(target=remove_container, args=(container,)).start()
return container
with suppress(KeyError):
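The detach+remove path in this hunk hands cleanup to a background thread, so the call returns immediately and the logs are lost. A self-contained sketch of that pattern with a stub container (all names here are illustrative stand-ins, not podman-py API):

```python
import threading

class StubContainer:
    """Hypothetical stand-in for a podman Container, used only to
    illustrate the detach+remove monitoring thread."""
    def __init__(self):
        self.removed = False
        self._done = threading.Event()

    def wait(self):
        self._done.wait()  # block until the "process" exits

    def remove(self):
        self.removed = True

def run_detached(container, remove=True):
    """Mimic run(detach=True, remove=True): return at once while a
    background thread waits for exit and then removes the container."""
    if remove:
        def _monitor():
            container.wait()
            container.remove()
        container._monitor = threading.Thread(target=_monitor)
        container._monitor.start()
    return container

c = StubContainer()
run_detached(c)   # returns immediately; logs would be lost
c._done.set()     # simulate the container process exiting
c._monitor.join()
assert c.removed
```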

View File

@@ -3,7 +3,8 @@
import json
import logging
from datetime import datetime
from typing import Any, Dict, Optional, Union, Iterator
from typing import Any, Optional, Union
from collections.abc import Iterator
from podman import api
from podman.api.client import APIClient
@@ -26,9 +27,9 @@ class EventsManager: # pylint: disable=too-few-public-methods
self,
since: Union[datetime, int, None] = None,
until: Union[datetime, int, None] = None,
filters: Optional[Dict[str, Any]] = None,
filters: Optional[dict[str, Any]] = None,
decode: bool = False,
) -> Iterator[Union[str, Dict[str, Any]]]:
) -> Iterator[Union[str, dict[str, Any]]]:
"""Report on networks.
Args:
@@ -38,7 +39,7 @@ class EventsManager: # pylint: disable=too-few-public-methods
until: Get events older than this time.
Yields:
When decode is True, Iterator[Dict[str, Any]]
When decode is True, Iterator[dict[str, Any]]
When decode is False, Iterator[str]
"""

View File

@@ -1,11 +1,17 @@
"""Model and Manager for Image resources."""
import logging
from typing import Any, Dict, Iterator, List, Optional, Union
from typing import Any, Optional, Literal, Union, TYPE_CHECKING
from collections.abc import Iterator
from podman import api
import urllib.parse
from podman.api import DEFAULT_CHUNK_SIZE
from podman.domain.manager import PodmanResource
from podman.errors import ImageNotFound
from podman.errors import ImageNotFound, InvalidArgument
if TYPE_CHECKING:
from podman.domain.images_manager import ImagesManager
logger = logging.getLogger("podman.images")
@@ -13,6 +19,8 @@ logger = logging.getLogger("podman.images")
class Image(PodmanResource):
"""Details and configuration for an Image managed by the Podman service."""
manager: "ImagesManager"
def __repr__(self) -> str:
return f"""<{self.__class__.__name__}: '{"', '".join(self.tags)}'>"""
@@ -34,7 +42,7 @@ class Image(PodmanResource):
return [tag for tag in repo_tags if tag != "<none>:<none>"]
def history(self) -> List[Dict[str, Any]]:
def history(self) -> list[dict[str, Any]]:
"""Returns history of the Image.
Raises:
@@ -47,7 +55,7 @@ class Image(PodmanResource):
def remove(
self, **kwargs
) -> List[Dict[api.Literal["Deleted", "Untagged", "Errors", "ExitCode"], Union[str, int]]]:
) -> list[dict[Literal["Deleted", "Untagged", "Errors", "ExitCode"], Union[str, int]]]:
"""Delete image from Podman service.
Podman only
@@ -67,8 +75,8 @@ class Image(PodmanResource):
def save(
self,
chunk_size: Optional[int] = api.DEFAULT_CHUNK_SIZE,
named: Union[str, bool] = False, # pylint: disable=unused-argument
chunk_size: Optional[int] = DEFAULT_CHUNK_SIZE,
named: Union[str, bool] = False,
) -> Iterator[bytes]:
"""Returns Image as tarball.
@@ -77,13 +85,28 @@ class Image(PodmanResource):
Args:
chunk_size: If None, data will be streamed in received buffer size.
If not None, data will be returned in sized buffers. Default: 2MB
named: Ignored.
named (str or bool): If ``False`` (default), the tarball will not
retain repository and tag information for this image. If set
to ``True``, the first tag in the :py:attr:`~tags` list will
be used to identify the image. Alternatively, any element of
the :py:attr:`~tags` list can be used as an argument to use
that specific tag as the saved identifier.
Raises:
APIError: when service returns an error
APIError: When service returns an error
InvalidArgument: When the provided Tag name is not valid for the image.
"""
img = self.id
if named:
img = urllib.parse.quote(self.tags[0] if self.tags else img)
if isinstance(named, str):
if named not in self.tags:
raise InvalidArgument(f"'{named}' is not a valid tag for this image")
img = urllib.parse.quote(named)
response = self.client.get(
f"/images/{self.id}/get", params={"format": ["docker-archive"]}, stream=True
f"/images/{img}/get", params={"format": ["docker-archive"]}, stream=True
)
response.raise_for_status(not_found=ImageNotFound)
return response.iter_content(chunk_size=chunk_size)
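The tag-selection rules for `named` above can be mirrored in isolation. A sketch with a stub image (it raises `ValueError` where the real method raises `InvalidArgument`; the stub class is illustrative):

```python
import urllib.parse

class _ImageStub:
    """Hypothetical stand-in for Image, holding just id and tags."""
    def __init__(self, id, tags):
        self.id, self.tags = id, tags

def saved_identifier(image, named=False):
    """Mirror save()'s selection: False -> image id; True -> first tag;
    a string -> that tag, which must belong to the image."""
    img = image.id
    if named:
        img = urllib.parse.quote(image.tags[0] if image.tags else img)
        if isinstance(named, str):
            if named not in image.tags:
                raise ValueError(f"'{named}' is not a valid tag for this image")
            img = urllib.parse.quote(named)
    return img

img = _ImageStub("abc123", ["example.com/app:v1", "example.com/app:latest"])
assert saved_identifier(img) == "abc123"
assert saved_identifier(img, named=True) == urllib.parse.quote("example.com/app:v1")
```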

View File

@@ -7,7 +7,8 @@ import random
import re
import shutil
import tempfile
from typing import Any, Dict, Iterator, List, Tuple
from typing import Any
from collections.abc import Iterator
import itertools
@@ -22,7 +23,7 @@ class BuildMixin:
"""Class providing build method for ImagesManager."""
# pylint: disable=too-many-locals,too-many-branches,too-few-public-methods,too-many-statements
def build(self, **kwargs) -> Tuple[Image, Iterator[bytes]]:
def build(self, **kwargs) -> tuple[Image, Iterator[bytes]]:
"""Returns built image.
Keyword Args:
@@ -33,13 +34,13 @@ class BuildMixin:
nocache (bool) Don't use the cache when set to True
rm (bool) Remove intermediate containers. Default True
timeout (int) HTTP timeout
custom_context (bool) Optional if using fileobj (ignored)
custom_context (bool) Optional if using fileobj
encoding (str) The encoding for a stream. Set to gzip for compressing (ignored)
pull (bool) Downloads any updates to the FROM image in Dockerfile
forcerm (bool) Always remove intermediate containers, even after unsuccessful builds
dockerfile (str) full path to the Dockerfile / Containerfile
buildargs (Mapping[str,str]) A dictionary of build arguments
container_limits (Dict[str, Union[int,str]])
container_limits (dict[str, Union[int,str]])
A dictionary of limits applied to each container created by the build process.
Valid keys:
@@ -52,11 +53,11 @@ class BuildMixin:
shmsize (int) Size of /dev/shm in bytes. The size must be greater than 0.
If omitted the system uses 64MB
labels (Mapping[str,str]) A dictionary of labels to set on the image
cache_from (List[str]) A list of image's identifier used for build cache resolution
cache_from (list[str]) A list of image's identifier used for build cache resolution
target (str) Name of the build-stage to build in a multi-stage Dockerfile
network_mode (str) networking mode for the run commands during build
squash (bool) Squash the resulting images layers into a single layer.
extra_hosts (Dict[str,str]) Extra hosts to add to /etc/hosts in building
extra_hosts (dict[str,str]) Extra hosts to add to /etc/hosts in building
containers, as a mapping of hostname to IP address.
platform (str) Platform in the format os[/arch[/variant]].
isolation (str) Isolation technology used during build. (ignored)
@@ -81,7 +82,23 @@ class BuildMixin:
body = None
path = None
if "fileobj" in kwargs:
if kwargs.get("custom_context"):
if "fileobj" not in kwargs:
raise PodmanError(
"Custom context requires fileobj to be set to a binary file-like object "
"containing a build-directory tarball."
)
if "dockerfile" not in kwargs:
# TODO: Scan the tarball for either a Dockerfile or a Containerfile.
# This could be slow if the tarball is large,
# and could require buffering/copying the tarball if `fileobj` is not seekable.
# As a workaround for now, don't support omitting the filename.
raise PodmanError(
"Custom context requires specifying the name of the Dockerfile "
"(typically 'Dockerfile' or 'Containerfile')."
)
body = kwargs["fileobj"]
elif "fileobj" in kwargs:
path = tempfile.TemporaryDirectory() # pylint: disable=consider-using-with
filename = pathlib.Path(path.name) / params["dockerfile"]
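The new `custom_context` path requires both a tarball file object and an explicit Dockerfile name. A standalone sketch of the validation plus an in-memory build context (`validate_build_kwargs` is illustrative and raises `ValueError` instead of `PodmanError`):

```python
import io
import tarfile

def validate_build_kwargs(kwargs):
    """Mirror the custom_context checks above (sketch)."""
    if kwargs.get("custom_context"):
        if "fileobj" not in kwargs:
            raise ValueError(
                "Custom context requires fileobj to be a binary file-like "
                "object containing a build-directory tarball."
            )
        if "dockerfile" not in kwargs:
            raise ValueError(
                "Custom context requires the name of the Dockerfile "
                "(typically 'Dockerfile' or 'Containerfile')."
            )

# Build a minimal tarball context in memory:
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"FROM scratch\n"
    info = tarfile.TarInfo("Containerfile")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# Passes: both fileobj and dockerfile are supplied.
validate_build_kwargs(
    {"custom_context": True, "fileobj": buf, "dockerfile": "Containerfile"}
)
```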
@@ -140,7 +157,7 @@ class BuildMixin:
raise BuildError(unknown or "Unknown", report_stream)
@staticmethod
def _render_params(kwargs) -> Dict[str, List[Any]]:
def _render_params(kwargs) -> dict[str, list[Any]]:
"""Map kwargs to query parameters.
All unsupported kwargs are silently ignored.

View File

@@ -1,21 +1,35 @@
"""PodmanResource manager subclassed for Images."""
import builtins
import io
import json
import logging
import os
import urllib.parse
from typing import Any, Dict, Generator, Iterator, List, Mapping, Optional, Union
from typing import Any, Literal, Optional, Union
from collections.abc import Iterator, Mapping, Generator
from pathlib import Path
import requests
from rich.progress import Progress, TextColumn, BarColumn, TaskProgressColumn, TimeRemainingColumn
from podman import api
from podman.api import Literal
from podman.api.http_utils import encode_auth_header
from podman.api.parse_utils import parse_repository
from podman.domain.images import Image
from podman.domain.images_build import BuildMixin
from podman.domain.json_stream import json_stream
from podman.domain.manager import Manager
from podman.domain.registry_data import RegistryData
from podman.errors import APIError, ImageNotFound
from podman.errors import APIError, ImageNotFound, PodmanError
try:
from rich.progress import (
Progress,
TextColumn,
BarColumn,
TaskProgressColumn,
TimeRemainingColumn,
)
except (ImportError, ModuleNotFoundError):
Progress = None
logger = logging.getLogger("podman.images")
@@ -34,25 +48,28 @@ class ImagesManager(BuildMixin, Manager):
response = self.client.get(f"/images/{key}/exists")
return response.ok
def list(self, **kwargs) -> List[Image]:
def list(self, **kwargs) -> builtins.list[Image]:
"""Report on images.
Keyword Args:
name (str) Only show images belonging to the repository name
all (bool) Show intermediate image layers. By default, these are filtered out.
filters (Mapping[str, Union[str, List[str]]) Filters to be used on the image list.
filters (Mapping[str, Union[str, list[str]]]) Filters to be used on the image list.
Available filters:
- dangling (bool)
- label (Union[str, List[str]]): format either "key" or "key=value"
- label (Union[str, list[str]]): format either "key" or "key=value"
Raises:
APIError: when service returns an error
"""
filters = kwargs.get("filters", {}).copy()
if name := kwargs.get("name"):
filters["reference"] = name
params = {
"all": kwargs.get("all"),
"name": kwargs.get("name"),
"filters": api.prepare_filters(kwargs.get("filters")),
"filters": api.prepare_filters(filters=filters),
}
response = self.client.get("/images/json", params=params)
if response.status_code == requests.codes.not_found:
@@ -103,60 +120,107 @@ class ImagesManager(BuildMixin, Manager):
collection=self,
)
def load(self, data: bytes) -> Generator[Image, None, None]:
def load(
self, data: Optional[bytes] = None, file_path: Optional[os.PathLike] = None
) -> Generator[Image, None, None]:
"""Restore an image previously saved.
Args:
data: Image to be loaded in tarball format.
file_path: Path of the Tarball.
It works with both str and Path-like objects
Raises:
APIError: when service returns an error
APIError: When service returns an error.
PodmanError: When the arguments are not set correctly.
"""
# TODO fix podman swagger cannot use this header!
# headers = {"Content-type": "application/x-www-form-urlencoded"}
response = self.client.post(
"/images/load", data=data, headers={"Content-type": "application/x-tar"}
)
response.raise_for_status()
# Check that exactly one of the data or file_path is provided
if not data and not file_path:
raise PodmanError("The 'data' or 'file_path' parameter should be set.")
body = response.json()
for item in body["Names"]:
yield self.get(item)
if data and file_path:
raise PodmanError(
"Only one parameter should be set from 'data' and 'file_path' parameters."
)
post_data = data
if file_path:
# Convert to Path if file_path is a string
file_path_object = Path(file_path)
post_data = file_path_object.read_bytes() # Read the tarball file as bytes
# Make the client request before entering the generator
response = self.client.post(
"/images/load", data=post_data, headers={"Content-type": "application/x-tar"}
)
response.raise_for_status() # Catch any errors before proceeding
def _generator(body: dict) -> Generator[Image, None, None]:
# Iterate and yield images from response body
for item in body["Names"]:
yield self.get(item)
# Pass the response body to the generator
return _generator(response.json())
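The argument handling in `load()` reduces to: exactly one of `data`/`file_path`, with the file read as bytes before the request is made. A minimal sketch (again with `ValueError` standing in for `PodmanError`):

```python
import tempfile
from pathlib import Path

def resolve_load_payload(data=None, file_path=None):
    """Mirror load()'s argument checks and payload selection (sketch)."""
    if not data and not file_path:
        raise ValueError("The 'data' or 'file_path' parameter should be set.")
    if data and file_path:
        raise ValueError("Only one of 'data' and 'file_path' may be set.")
    if file_path:
        # Works with both str and Path-like objects
        return Path(file_path).read_bytes()
    return data

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"tarball-bytes")
    tmp = f.name

assert resolve_load_payload(file_path=tmp) == b"tarball-bytes"
assert resolve_load_payload(data=b"x") == b"x"
```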
def prune(
self, filters: Optional[Mapping[str, Any]] = None
) -> Dict[Literal["ImagesDeleted", "SpaceReclaimed"], Any]:
self,
all: Optional[bool] = False, # pylint: disable=redefined-builtin
external: Optional[bool] = False,
filters: Optional[Mapping[str, Any]] = None,
) -> dict[Literal["ImagesDeleted", "SpaceReclaimed"], Any]:
"""Delete unused images.
The Untagged keys will always be "".
Args:
all: Remove all images not in use by containers, not just dangling ones.
external: Remove images even when they are used by external containers
(e.g., by build containers).
filters: Qualify Images to prune. Available filters:
- dangling (bool): when true, only delete unused and untagged images.
- label: (dict): filter by label.
Examples:
filters={"label": {"key": "value"}}
filters={"label!": {"key": "value"}}
- until (str): Delete images older than this timestamp.
Raises:
APIError: when service returns an error
"""
response = self.client.post(
"/images/prune", params={"filters": api.prepare_filters(filters)}
)
params = {
"all": all,
"external": external,
"filters": api.prepare_filters(filters),
}
response = self.client.post("/images/prune", params=params)
response.raise_for_status()
deleted: List[Dict[str, str]] = []
error: List[str] = []
deleted: builtins.list[dict[str, str]] = []
error: builtins.list[str] = []
reclaimed: int = 0
for element in response.json():
if "Err" in element and element["Err"] is not None:
error.append(element["Err"])
else:
reclaimed += element["Size"]
deleted.append({
"Deleted": element["Id"],
"Untagged": "",
})
# If the prune doesn't remove images, the API returns "null"
# and it's interpreted as None (NoneType)
# so the for loop throws "TypeError: 'NoneType' object is not iterable".
# The below if condition fixes this issue.
if response.json() is not None:
for element in response.json():
if "Err" in element and element["Err"] is not None:
error.append(element["Err"])
else:
reclaimed += element["Size"]
deleted.append(
{
"Deleted": element["Id"],
"Untagged": "",
}
)
if len(error) > 0:
raise APIError(response.url, response=response, explanation="; ".join(error))
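The null-report guard added above can be exercised without a service. A sketch of just the aggregation (function name is illustrative):

```python
def summarize_prune(report):
    """Mirror prune()'s aggregation: tolerate a None report (the API
    returns "null" when nothing was removed) and collect deleted ids,
    errors, and reclaimed bytes."""
    deleted, errors, reclaimed = [], [], 0
    if report is not None:
        for element in report:
            if element.get("Err") is not None:
                errors.append(element["Err"])
            else:
                reclaimed += element["Size"]
                deleted.append({"Deleted": element["Id"], "Untagged": ""})
    return deleted, errors, reclaimed

# The case that previously raised "TypeError: 'NoneType' object is not iterable":
assert summarize_prune(None) == ([], [], 0)
```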
@@ -165,7 +229,7 @@ class ImagesManager(BuildMixin, Manager):
"SpaceReclaimed": reclaimed,
}
def prune_builds(self) -> Dict[Literal["CachesDeleted", "SpaceReclaimed"], Any]:
def prune_builds(self) -> dict[Literal["CachesDeleted", "SpaceReclaimed"], Any]:
"""Delete builder cache.
Method included to complete API, the operation always returns empty
@@ -175,7 +239,7 @@ class ImagesManager(BuildMixin, Manager):
def push(
self, repository: str, tag: Optional[str] = None, **kwargs
) -> Union[str, Iterator[Union[str, Dict[str, Any]]]]:
) -> Union[str, Iterator[Union[str, dict[str, Any]]]]:
"""Push Image or repository to the registry.
Args:
@@ -185,29 +249,37 @@ class ImagesManager(BuildMixin, Manager):
Keyword Args:
auth_config (Mapping[str, str]): Override configured credentials. Must include
username and password keys.
decode (bool): return data from server as Dict[str, Any]. Ignored unless stream=True.
decode (bool): return data from server as dict[str, Any]. Ignored unless stream=True.
destination (str): alternate destination for image. (Podman only)
stream (bool): return output as blocking generator. Default: False.
tlsVerify (bool): Require TLS verification.
format (str): Manifest type (oci, v2s1, or v2s2) to use when pushing an image.
Default is manifest type of source, with fallbacks.
Raises:
APIError: when service returns an error
"""
auth_config: Optional[Dict[str, str]] = kwargs.get("auth_config")
auth_config: Optional[dict[str, str]] = kwargs.get("auth_config")
headers = {
# A base64url-encoded auth configuration
"X-Registry-Auth": encode_auth_header(auth_config) if auth_config else ""
"X-Registry-Auth": api.encode_auth_header(auth_config) if auth_config else ""
}
params = {
"destination": kwargs.get("destination"),
"tlsVerify": kwargs.get("tlsVerify"),
"format": kwargs.get("format"),
}
stream = kwargs.get("stream", False)
decode = kwargs.get("decode", False)
name = f'{repository}:{tag}' if tag else repository
name = urllib.parse.quote_plus(name)
response = self.client.post(f"/images/{name}/push", params=params, headers=headers)
response = self.client.post(
f"/images/{name}/push", params=params, stream=stream, headers=headers
)
response.raise_for_status(not_found=ImageNotFound)
tag_count = 0 if tag is None else 1
@@ -222,8 +294,6 @@ class ImagesManager(BuildMixin, Manager):
},
]
stream = kwargs.get("stream", False)
decode = kwargs.get("decode", False)
if stream:
return self._push_helper(decode, body)
@@ -234,8 +304,8 @@ class ImagesManager(BuildMixin, Manager):
@staticmethod
def _push_helper(
decode: bool, body: List[Dict[str, Any]]
) -> Iterator[Union[str, Dict[str, Any]]]:
decode: bool, body: builtins.list[dict[str, Any]]
) -> Iterator[Union[str, dict[str, Any]]]:
"""Helper needed to allow push() to return either a generator or a str."""
for entry in body:
if decode:
@@ -245,8 +315,12 @@ class ImagesManager(BuildMixin, Manager):
# pylint: disable=too-many-locals,too-many-branches
def pull(
self, repository: str, tag: Optional[str] = None, all_tags: bool = False, **kwargs
) -> Union[Image, List[Image], Iterator[str]]:
self,
repository: str,
tag: Optional[str] = None,
all_tags: bool = False,
**kwargs,
) -> Union[Image, builtins.list[Image], Iterator[str]]:
"""Request Podman service to pull image(s) from repository.
Args:
@@ -258,7 +332,12 @@ class ImagesManager(BuildMixin, Manager):
auth_config (Mapping[str, str]) Override the credentials that are found in the
config for this request. auth_config should contain the username and password
keys to be valid.
compatMode (bool) Return the same JSON payload as the Docker-compat endpoint.
Default: True.
decode (bool) Decode the JSON data from the server into dicts.
Only applies with ``stream=True``
platform (str) Platform in the format os[/arch[/variant]]
policy (str) - Pull policy. "always" (default), "missing", "never", "newer"
progress_bar (bool) - Display a progress bar with the image pull progress (uses
the compat endpoint). Default: False
tls_verify (bool) - Require TLS verification. Default: True.
@@ -273,23 +352,24 @@ class ImagesManager(BuildMixin, Manager):
APIError: when service returns an error
"""
if tag is None or len(tag) == 0:
tokens = repository.split(":")
if len(tokens) == 2:
repository = tokens[0]
tag = tokens[1]
repository, parsed_tag = parse_repository(repository)
if parsed_tag is not None:
tag = parsed_tag
else:
tag = "latest"
auth_config: Optional[Dict[str, str]] = kwargs.get("auth_config")
auth_config: Optional[dict[str, str]] = kwargs.get("auth_config")
headers = {
# A base64url-encoded auth configuration
"X-Registry-Auth": encode_auth_header(auth_config) if auth_config else ""
"X-Registry-Auth": api.encode_auth_header(auth_config) if auth_config else ""
}
params = {
"policy": kwargs.get("policy", "always"),
"reference": repository,
"tlsVerify": kwargs.get("tls_verify"),
"tlsVerify": kwargs.get("tls_verify", True),
"compatMode": kwargs.get("compatMode", True),
}
if all_tags:
@@ -297,7 +377,8 @@ class ImagesManager(BuildMixin, Manager):
else:
params["reference"] = f"{repository}:{tag}"
if "platform" in kwargs:
# Check if "platform" in kwargs AND it has value.
if "platform" in kwargs and kwargs["platform"]:
tokens = kwargs.get("platform").split("/")
if 1 < len(tokens) > 3:
raise ValueError(f'\'{kwargs.get("platform")}\' is not a legal platform.')
@@ -314,6 +395,8 @@ class ImagesManager(BuildMixin, Manager):
# progress bar
progress_bar = kwargs.get("progress_bar", False)
if progress_bar:
if Progress is None:
raise ModuleNotFoundError('progress_bar requires \'rich.progress\' module')
params["compatMode"] = True
stream = True
@@ -336,12 +419,12 @@ class ImagesManager(BuildMixin, Manager):
return None
if stream:
return response.iter_lines()
return self._stream_helper(response, decode=kwargs.get("decode"))
for item in response.iter_lines():
for item in reversed(list(response.iter_lines())):
obj = json.loads(item)
if all_tags and "images" in obj:
images: List[Image] = []
images: builtins.list[Image] = []
for name in obj["images"]:
images.append(self.get(name))
return images
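The switch above from a naive `split(":")` to `parse_repository` matters for registries with ports, where the colon is not a tag separator. A sketch of the assumed splitting rule (the real helper may also handle `@digest` references):

```python
def parse_repository(name):
    """Split an image reference into (repository, tag). The last ':' is
    a tag separator only if no '/' follows it; otherwise it belongs to
    a registry host:port. (Sketch of the assumed semantics.)"""
    pos = name.rfind(":")
    if pos > 0 and "/" not in name[pos:]:
        return name[:pos], name[pos + 1:]
    return name, None

# A tag after the path is split off...
assert parse_repository("quay.io/podman/stable:v5") == ("quay.io/podman/stable", "v5")
# ...but a registry port is not mistaken for a tag, as split(":") would do.
assert parse_repository("localhost:5000/app") == ("localhost:5000/app", None)
```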
@@ -386,7 +469,7 @@ class ImagesManager(BuildMixin, Manager):
image: Union[Image, str],
force: Optional[bool] = None,
noprune: bool = False, # pylint: disable=unused-argument
) -> List[Dict[Literal["Deleted", "Untagged", "Errors", "ExitCode"], Union[str, int]]]:
) -> builtins.list[dict[Literal["Deleted", "Untagged", "Errors", "ExitCode"], Union[str, int]]]:
"""Delete image from Podman service.
Args:
@@ -405,7 +488,7 @@ class ImagesManager(BuildMixin, Manager):
response.raise_for_status(not_found=ImageNotFound)
body = response.json()
results: List[Dict[str, Union[int, str]]] = []
results: builtins.list[dict[str, Union[int, str]]] = []
for key in ("Deleted", "Untagged", "Errors"):
if key in body:
for element in body[key]:
@@ -413,14 +496,14 @@ class ImagesManager(BuildMixin, Manager):
results.append({"ExitCode": body["ExitCode"]})
return results
def search(self, term: str, **kwargs) -> List[Dict[str, Any]]:
def search(self, term: str, **kwargs) -> builtins.list[dict[str, Any]]:
"""Search Images on registries.
Args:
term: Used to target Image results.
Keyword Args:
filters (Mapping[str, List[str]): Refine results of search. Available filters:
filters (Mapping[str, list[str]]): Refine results of search. Available filters:
- is-automated (bool): Image build is automated.
- is-official (bool): Image build is owned by product provider.
@@ -473,3 +556,24 @@ class ImagesManager(BuildMixin, Manager):
response = self.client.post(f"/images/scp/{source}", params=params)
response.raise_for_status()
return response.json()
def _stream_helper(self, response, decode=False):
"""Generator for data coming from a chunked-encoded HTTP response."""
if response.raw._fp.chunked:
if decode:
yield from json_stream(self._stream_helper(response, False))
else:
reader = response.raw
while not reader.closed:
# this read call will block until we get a chunk
data = reader.read(1)
if not data:
break
if reader._fp.chunk_left:
data += reader.read(reader._fp.chunk_left)
yield data
else:
# Response isn't chunked, meaning we probably
# encountered an error immediately
yield self._result(response, json=decode)

View File

@@ -3,7 +3,8 @@
Provided for compatibility
"""
from typing import Any, List, Mapping, Optional
from typing import Any, Optional
from collections.abc import Mapping
class IPAMPool(dict):
@@ -25,12 +26,14 @@ class IPAMPool(dict):
aux_addresses: Ignored.
"""
super().__init__()
self.update({
"AuxiliaryAddresses": aux_addresses,
"Gateway": gateway,
"IPRange": iprange,
"Subnet": subnet,
})
self.update(
{
"AuxiliaryAddresses": aux_addresses,
"Gateway": gateway,
"IPRange": iprange,
"Subnet": subnet,
}
)
class IPAMConfig(dict):
@@ -38,8 +41,8 @@ class IPAMConfig(dict):
def __init__(
self,
driver: Optional[str] = "default",
pool_configs: Optional[List[IPAMPool]] = None,
driver: Optional[str] = "host-local",
pool_configs: Optional[list[IPAMPool]] = None,
options: Optional[Mapping[str, Any]] = None,
):
"""Create IPAMConfig.
@@ -50,8 +53,10 @@ class IPAMConfig(dict):
options: Options to provide to the Network driver.
"""
super().__init__()
self.update({
"Config": pool_configs or [],
"Driver": driver,
"Options": options or {},
})
self.update(
{
"Config": pool_configs or [],
"Driver": driver,
"Options": options or {},
}
)
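Because both classes are plain `dict` subclasses, the new `host-local` default is easy to check with local equivalents (re-declared here so the sketch runs without podman installed):

```python
class IPAMPool(dict):
    """Local sketch mirroring the dict layout above."""
    def __init__(self, subnet=None, iprange=None, gateway=None, aux_addresses=None):
        super().__init__()
        self.update({
            "AuxiliaryAddresses": aux_addresses,
            "Gateway": gateway,
            "IPRange": iprange,
            "Subnet": subnet,
        })

class IPAMConfig(dict):
    """Local sketch; driver default changed from "default" to "host-local"."""
    def __init__(self, driver="host-local", pool_configs=None, options=None):
        super().__init__()
        self.update({
            "Config": pool_configs or [],
            "Driver": driver,
            "Options": options or {},
        })

pool = IPAMPool(subnet="10.11.12.0/24", gateway="10.11.12.1")
ipam = IPAMConfig(pool_configs=[pool])
assert ipam["Driver"] == "host-local"          # the new default
assert ipam["Config"][0]["Subnet"] == "10.11.12.0/24"
```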

View File

@@ -0,0 +1,75 @@
import json
import json.decoder
from podman.errors import StreamParseError
json_decoder = json.JSONDecoder()
def stream_as_text(stream):
"""
Given a stream of bytes or text, if any of the items in the stream
are bytes convert them to text.
This function can be removed once we return text streams
instead of byte streams.
"""
for data in stream:
_data = data
if not isinstance(data, str):
_data = data.decode('utf-8', 'replace')
yield _data
def json_splitter(buffer):
"""Attempt to parse a json object from a buffer. If there is at least one
object, return it and the rest of the buffer, otherwise return None.
"""
buffer = buffer.strip()
try:
obj, index = json_decoder.raw_decode(buffer)
rest = buffer[json.decoder.WHITESPACE.match(buffer, index).end() :]
return obj, rest
except ValueError:
return None
def json_stream(stream):
"""Given a stream of text, return a stream of json objects.
This handles streams which are inconsistently buffered (some entries may
be newline delimited, and others are not).
"""
return split_buffer(stream, json_splitter, json_decoder.decode)
def line_splitter(buffer, separator='\n'):
index = buffer.find(str(separator))
if index == -1:
return None
return buffer[: index + 1], buffer[index + 1 :]
def split_buffer(stream, splitter=None, decoder=lambda a: a):
"""Given a generator which yields strings and a splitter function,
joins all input, splits on the separator and yields each chunk.
Unlike string.split(), each chunk includes the trailing
separator, except for the last one if none was found on the end
of the input.
"""
splitter = splitter or line_splitter
buffered = ''
for data in stream_as_text(stream):
buffered += data
while True:
buffer_split = splitter(buffered)
if buffer_split is None:
break
item, buffered = buffer_split
yield item
if buffered:
try:
yield decoder(buffered)
except Exception as e:
raise StreamParseError(e) from e
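A condensed, runnable version of the splitter/stream pair above (simplified: no trailing-buffer flush, and `lstrip()` stands in for the `WHITESPACE` regex):

```python
import json

decoder = json.JSONDecoder()

def json_splitter(buffer):
    """Same contract as above: return (obj, rest) or None."""
    buffer = buffer.strip()
    try:
        obj, index = decoder.raw_decode(buffer)
        return obj, buffer[index:].lstrip()
    except ValueError:
        return None

def json_stream(chunks):
    """Yield JSON objects from inconsistently buffered text chunks,
    whether or not each object is newline delimited."""
    buffered = ""
    for chunk in chunks:
        buffered += chunk
        while (split := json_splitter(buffered)) is not None:
            obj, buffered = split
            yield obj

# Two objects arriving across three uneven chunks:
chunks = ['{"a": 1}{"b"', ': 2}', '']
assert list(json_stream(chunks)) == [{"a": 1}, {"b": 2}]
```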

View File

@@ -2,15 +2,19 @@
from abc import ABC, abstractmethod
from collections import abc
from typing import Any, List, Mapping, Optional, TypeVar, Union
from typing import Any, Optional, TypeVar, Union, TYPE_CHECKING
from collections.abc import Mapping
from podman.api.client import APIClient
if TYPE_CHECKING:
from podman import PodmanClient
# Methods use this Type when a subclass of PodmanResource is expected.
PodmanResourceType: TypeVar = TypeVar("PodmanResourceType", bound="PodmanResource")
class PodmanResource(ABC):
class PodmanResource(ABC): # noqa: B024
"""Base class for representing resource of a Podman service.
Attributes:
@ -22,6 +26,7 @@ class PodmanResource(ABC):
attrs: Optional[Mapping[str, Any]] = None,
client: Optional[APIClient] = None,
collection: Optional["Manager"] = None,
podman_client: Optional["PodmanClient"] = None,
):
"""Initialize base class for PodmanResource's.
@ -29,10 +34,12 @@ class PodmanResource(ABC):
attrs: Mapping of attributes for resource from Podman service.
client: Configured connection to a Podman service.
collection: Manager of this category of resource, named `collection` for compatibility
podman_client: PodmanClient() configured to connect to Podman object.
"""
super().__init__()
self.client = client
self.manager = collection
self.podman_client = podman_client
self.attrs = {}
if attrs is not None:
@ -63,9 +70,13 @@ class PodmanResource(ABC):
return self.id[:17]
return self.id[:10]
def reload(self) -> None:
"""Refresh this object's data from the service."""
latest = self.manager.get(self.id)
def reload(self, **kwargs) -> None:
"""Refresh this object's data from the service.
Keyword Args:
compatible (bool): Use Docker compatibility endpoint
"""
latest = self.manager.get(self.id, **kwargs)
self.attrs = latest.attrs
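A minimal sketch of the forwarding behavior introduced here — `reload()` passes its keyword args (such as `compatible`) straight through to `Manager.get()`. `FakeManager`, `FakeResource`, and the attrs payload are made up for illustration, not the real Podman API:

```python
class FakeResource:
    def __init__(self, attrs):
        self.attrs = attrs

class FakeManager:
    def __init__(self):
        self.last_kwargs = None

    def get(self, ident, **kwargs):
        # record the kwargs that reload() forwards, e.g. {"compatible": True}
        self.last_kwargs = kwargs
        return FakeResource({"Id": ident, "State": "running"})

class Resource:
    def __init__(self, ident, manager):
        self.id = ident
        self.manager = manager
        self.attrs = {}

    def reload(self, **kwargs):
        # same shape as PodmanResource.reload(): fetch latest, copy attrs
        latest = self.manager.get(self.id, **kwargs)
        self.attrs = latest.attrs

mgr = FakeManager()
res = Resource("abc123", mgr)
res.reload(compatible=True)
print(mgr.last_kwargs)     # {'compatible': True}
print(res.attrs["State"])  # running
```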
@ -77,14 +88,18 @@ class Manager(ABC):
def resource(self):
"""Type[PodmanResource]: Class which the factory method prepare_model() will use."""
def __init__(self, client: APIClient = None) -> None:
def __init__(
self, client: Optional[APIClient] = None, podman_client: Optional["PodmanClient"] = None
) -> None:
"""Initialize Manager() object.
Args:
client: APIClient() configured to connect to Podman service.
podman_client: PodmanClient() configured to connect to Podman object.
"""
super().__init__()
self.client = client
self.podman_client = podman_client
@abstractmethod
def exists(self, key: str) -> bool:
@ -101,7 +116,7 @@ class Manager(ABC):
"""Returns representation of resource."""
@abstractmethod
def list(self, **kwargs) -> List[PodmanResourceType]:
def list(self, **kwargs) -> list[PodmanResourceType]:
"""Returns list of resources."""
def prepare_model(self, attrs: Union[PodmanResource, Mapping[str, Any]]) -> PodmanResourceType:
@ -110,6 +125,7 @@ class Manager(ABC):
# Refresh existing PodmanResource.
if isinstance(attrs, PodmanResource):
attrs.client = self.client
attrs.podman_client = self.podman_client
attrs.collection = self
return attrs
@ -117,7 +133,9 @@ class Manager(ABC):
if isinstance(attrs, abc.Mapping):
# TODO Determine why pylint is reporting typing.Type not callable
# pylint: disable=not-callable
return self.resource(attrs=attrs, client=self.client, collection=self)
return self.resource(
attrs=attrs, client=self.client, podman_client=self.podman_client, collection=self
)
# pylint: disable=broad-exception-raised
raise Exception(f"Can't create {self.resource.__name__} from {attrs}")

View File

@ -3,7 +3,7 @@
import logging
import urllib.parse
from contextlib import suppress
from typing import Any, Dict, List, Optional, Union
from typing import Any, Optional, Union
from podman import api
from podman.domain.images import Image
@ -38,7 +38,7 @@ class Manifest(PodmanResource):
@property
def names(self):
"""List[str]: Returns the identifier of the manifest."""
"""list[str]: Returns the identifier of the manifest."""
return self.name
@property
@ -51,7 +51,7 @@ class Manifest(PodmanResource):
"""int: Returns the schema version type for this manifest."""
return self.attrs.get("schemaVersion")
def add(self, images: List[Union[Image, str]], **kwargs) -> None:
def add(self, images: list[Union[Image, str]], **kwargs) -> None:
"""Add Image to manifest list.
Args:
@ -59,9 +59,9 @@ class Manifest(PodmanResource):
Keyword Args:
all (bool):
annotation (Dict[str, str]):
annotation (dict[str, str]):
arch (str):
features (List[str]):
features (list[str]):
os (str):
os_version (str):
variant (str):
@ -82,9 +82,11 @@ class Manifest(PodmanResource):
"operation": "update",
}
for item in images:
if isinstance(item, Image):
item = item.attrs["RepoTags"][0]
data["images"].append(item)
# copy the loop variable instead of rebinding it; reduce Image objects to their first RepoTag
img_item = item
if isinstance(img_item, Image):
img_item = img_item.attrs["RepoTags"][0]
data["images"].append(img_item)
data = api.prepare_body(data)
response = self.client.put(f"/manifests/{self.quoted_name}", data=data)
@ -95,6 +97,7 @@ class Manifest(PodmanResource):
self,
destination: str,
all: Optional[bool] = None, # pylint: disable=redefined-builtin
**kwargs,
) -> None:
"""Push a manifest list or image index to a registry.
@ -102,15 +105,32 @@ class Manifest(PodmanResource):
destination: Target for push.
all: Push all images.
Keyword Args:
auth_config (Mapping[str, str]): Override configured credentials. Must include
username and password keys.
Raises:
NotFound: when the Manifest could not be found
APIError: when service reports an error
"""
auth_config: Optional[dict[str, str]] = kwargs.get("auth_config")
headers = {
# A base64url-encoded auth configuration
"X-Registry-Auth": api.encode_auth_header(auth_config) if auth_config else ""
}
params = {
"all": all,
"destination": destination,
}
response = self.client.post(f"/manifests/{self.quoted_name}/push", params=params)
destination_quoted = urllib.parse.quote_plus(destination)
response = self.client.post(
f"/manifests/{self.quoted_name}/registry/{destination_quoted}",
params=params,
headers=headers,
)
response.raise_for_status()
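Because the new push endpoint embeds the destination in the URL path, it must be percent-encoded; `urllib.parse.quote_plus` does exactly that (the destination value here is an example):

```python
import urllib.parse

destination = "quay.io/myorg/myimage:latest"  # example destination
quoted = urllib.parse.quote_plus(destination)
print(quoted)  # quay.io%2Fmyorg%2Fmyimage%3Alatest
# the request path then has the form (manifest name illustrative):
#   /manifests/{quoted_name}/registry/quay.io%2Fmyorg%2Fmyimage%3Alatest
```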
def remove(self, digest: str) -> None:
@ -151,7 +171,7 @@ class ManifestsManager(Manager):
def create(
self,
name: str,
images: Optional[List[Union[Image, str]]] = None,
images: Optional[list[Union[Image, str]]] = None,
all: Optional[bool] = None, # pylint: disable=redefined-builtin
) -> Manifest:
"""Create a Manifest.
@ -165,13 +185,15 @@ class ManifestsManager(Manager):
ValueError: when no names are provided
NotFoundImage: when a given image does not exist
"""
params: Dict[str, Any] = {}
params: dict[str, Any] = {}
if images is not None:
params["images"] = []
for item in images:
if isinstance(item, Image):
item = item.attrs["RepoTags"][0]
params["images"].append(item)
# copy the loop variable instead of rebinding it; reduce Image objects to their first RepoTag
img_item = item
if isinstance(img_item, Image):
img_item = img_item.attrs["RepoTags"][0]
params["images"].append(img_item)
if all is not None:
params["all"] = all
@ -215,12 +237,12 @@ class ManifestsManager(Manager):
body["names"] = key
return self.prepare_model(attrs=body)
def list(self, **kwargs) -> List[Manifest]:
def list(self, **kwargs) -> list[Manifest]:
"""Not Implemented."""
raise NotImplementedError("Podman service currently does not support listing manifests.")
def remove(self, name: Union[Manifest, str]) -> Dict[str, Any]:
def remove(self, name: Union[Manifest, str]) -> dict[str, Any]:
"""Delete the manifest list from the Podman service."""
if isinstance(name, Manifest):
name = name.name

View File

@ -24,7 +24,7 @@ class Network(PodmanResource):
"""Details and configuration for a networks managed by the Podman service.
Attributes:
attrs (Dict[str, Any]): Attributes of Network reported from Podman service
attrs (dict[str, Any]): Attributes of Network reported from Podman service
"""
@property
@ -41,7 +41,7 @@ class Network(PodmanResource):
@property
def containers(self):
"""List[Container]: Returns list of Containers connected to network."""
"""list[Container]: Returns list of Containers connected to network."""
with suppress(KeyError):
container_manager = ContainersManager(client=self.client)
return [container_manager.get(ident) for ident in self.attrs["Containers"].keys()]
@ -71,12 +71,12 @@ class Network(PodmanResource):
container: To add to this Network
Keyword Args:
aliases (List[str]): Aliases to add for this endpoint
driver_opt (Dict[str, Any]): Options to provide to network driver
aliases (list[str]): Aliases to add for this endpoint
driver_opt (dict[str, Any]): Options to provide to network driver
ipv4_address (str): IPv4 address for given Container on this network
ipv6_address (str): IPv6 address for given Container on this network
link_local_ips (List[str]): list of link-local addresses
links (List[Union[str, Containers]]): Ignored
link_local_ips (list[str]): list of link-local addresses
links (list[Union[str, Containers]]): Ignored
Raises:
APIError: when Podman service reports an error
@ -111,6 +111,7 @@ class Network(PodmanResource):
f"/networks/{self.name}/connect",
data=json.dumps(data),
headers={"Content-type": "application/json"},
**kwargs,
)
response.raise_for_status()

View File

@ -12,10 +12,9 @@ Example:
import ipaddress
import logging
from contextlib import suppress
from typing import Any, Dict, List, Optional
from typing import Any, Optional, Literal, Union
from podman import api
from podman.api import http_utils
from podman.api import http_utils, prepare_filters
from podman.domain.manager import Manager
from podman.domain.networks import Network
from podman.errors import APIError
@ -46,8 +45,8 @@ class NetworksManager(Manager):
ingress (bool): Ignored, always False.
internal (bool): Restrict external access to the network.
ipam (IPAMConfig): Optional custom IP scheme for the network.
labels (Dict[str, str]): Map of labels to set on the network.
options (Dict[str, Any]): Driver options.
labels (dict[str, str]): Map of labels to set on the network.
options (dict[str, Any]): Driver options.
scope (str): Ignored, always "local".
Raises:
@ -75,7 +74,10 @@ class NetworksManager(Manager):
response.raise_for_status()
return self.prepare_model(attrs=response.json())
def _prepare_ipam(self, data: Dict[str, Any], ipam: Dict[str, Any]):
def _prepare_ipam(self, data: dict[str, Any], ipam: dict[str, Any]):
if "Driver" in ipam:
data["ipam_options"] = {"driver": ipam["Driver"]}
if "Config" not in ipam:
return
@ -114,23 +116,23 @@ class NetworksManager(Manager):
return self.prepare_model(attrs=response.json())
def list(self, **kwargs) -> List[Network]:
def list(self, **kwargs) -> list[Network]:
"""Report on networks.
Keyword Args:
names (List[str]): List of names to filter by.
ids (List[str]): List of identifiers to filter by.
names (list[str]): List of names to filter by.
ids (list[str]): List of identifiers to filter by.
filters (Mapping[str,str]): Criteria for listing networks. Available filters:
- driver="bridge": Matches a network's driver. Only "bridge" is supported.
- label=(Union[str, List[str]]): format either "key", "key=value"
- label=(Union[str, list[str]]): format either "key", "key=value"
or a list of such.
- type=(str): Filters networks by type, legal values are:
- "custom"
- "builtin"
- plugin=(List[str]]): Matches CNI plugins included in a network, legal
- plugin=(list[str]): Matches CNI plugins included in a network, legal
values are (Podman only):
- bridge
@ -149,7 +151,7 @@ class NetworksManager(Manager):
filters = kwargs.get("filters", {})
filters["name"] = kwargs.get("names")
filters["id"] = kwargs.get("ids")
filters = api.prepare_filters(filters)
filters = prepare_filters(filters)
params = {"filters": filters}
response = self.client.get("/networks/json", params=params)
@ -158,8 +160,8 @@ class NetworksManager(Manager):
return [self.prepare_model(i) for i in response.json()]
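For intuition, `prepare_filters` roughly normalizes the criteria mapping into the JSON string the service expects; this is a hypothetical approximation of its behavior, not the library function itself:

```python
import json

def prepare_filters_sketch(filters):
    # approximation of podman.api.prepare_filters: drop None values,
    # wrap scalar values in lists, then JSON-encode the criteria map
    criteria = {}
    for key, value in (filters or {}).items():
        if value is None:
            continue
        criteria[key] = value if isinstance(value, list) else [value]
    return json.dumps(criteria, sort_keys=True)

print(prepare_filters_sketch({"name": "mynet", "id": None}))
# {"name": ["mynet"]}
```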
def prune(
self, filters: Optional[Dict[str, Any]] = None
) -> Dict[api.Literal["NetworksDeleted", "SpaceReclaimed"], Any]:
self, filters: Optional[dict[str, Any]] = None
) -> dict[Literal["NetworksDeleted", "SpaceReclaimed"], Any]:
"""Delete unused Networks.
SpaceReclaimed always reported as 0
@ -170,11 +172,11 @@ class NetworksManager(Manager):
Raises:
APIError: when service reports error
"""
params = {"filters": api.prepare_filters(filters)}
params = {"filters": prepare_filters(filters)}
response = self.client.post("/networks/prune", params=params)
response.raise_for_status()
deleted: List[str] = []
deleted: list[str] = []
for item in response.json():
if item["Error"] is not None:
raise APIError(
@ -186,7 +188,7 @@ class NetworksManager(Manager):
return {"NetworksDeleted": deleted, "SpaceReclaimed": 0}
def remove(self, name: [Network, str], force: Optional[bool] = None) -> None:
def remove(self, name: Union[Network, str], force: Optional[bool] = None) -> None:
"""Remove Network resource.
Args:

View File

@ -1,11 +1,14 @@
"""Model and Manager for Pod resources."""
import logging
from typing import Any, Dict, Optional, Tuple, Union
from typing import Any, Optional, Union, TYPE_CHECKING
from podman.domain.manager import PodmanResource
_Timeout = Union[None, float, Tuple[float, float], Tuple[float, None]]
if TYPE_CHECKING:
from podman.domain.pods_manager import PodsManager
_Timeout = Union[None, int, tuple[int, int], tuple[int, None]]
logger = logging.getLogger("podman.pods")
@ -13,6 +16,8 @@ logger = logging.getLogger("podman.pods")
class Pod(PodmanResource):
"""Details and configuration for a pod managed by the Podman service."""
manager: "PodsManager"
@property
def id(self): # pylint: disable=invalid-name
return self.attrs.get("ID", self.attrs.get("Id"))
@ -88,7 +93,7 @@ class Pod(PodmanResource):
response = self.client.post(f"/pods/{self.id}/stop", params=params)
response.raise_for_status()
def top(self, **kwargs) -> Dict[str, Any]:
def top(self, **kwargs) -> dict[str, Any]:
"""Report on running processes in pod.
Keyword Args:

View File

@ -1,8 +1,10 @@
"""PodmanResource manager subclassed for Networks."""
import builtins
import json
import logging
from typing import Any, Dict, List, Optional, Union, Iterator
from typing import Any, Optional, Union
from collections.abc import Iterator
from podman import api
from podman.domain.manager import Manager
@ -57,24 +59,24 @@ class PodsManager(Manager):
response.raise_for_status()
return self.prepare_model(attrs=response.json())
def list(self, **kwargs) -> List[Pod]:
def list(self, **kwargs) -> builtins.list[Pod]:
"""Report on pods.
Keyword Args:
filters (Mapping[str, str]): Criteria for listing pods. Available filters:
- ctr-ids (List[str]): List of container ids to filter by.
- ctr-names (List[str]): List of container names to filter by.
- ctr-number (List[int]): list pods with given number of containers.
- ctr-status (List[str]): List pods with containers in given state.
- ctr-ids (list[str]): list of container ids to filter by.
- ctr-names (list[str]): list of container names to filter by.
- ctr-number (list[int]): list pods with given number of containers.
- ctr-status (list[str]): list pods with containers in given state.
Legal values are: "created", "running", "paused", "stopped",
"exited", or "unknown"
- id (str) - List pod with this id.
- name (str) - List pod with this name.
- status (List[str]): List pods in given state. Legal values are:
- status (list[str]): List pods in given state. Legal values are:
"created", "running", "paused", "stopped", "exited", or "unknown"
- label (List[str]): List pods with given labels.
- network (List[str]): List pods associated with given Network Ids (not Names).
- label (list[str]): List pods with given labels.
- network (list[str]): List pods associated with given Network Ids (not Names).
Raises:
APIError: when an error returned by service
@ -84,12 +86,12 @@ class PodsManager(Manager):
response.raise_for_status()
return [self.prepare_model(attrs=i) for i in response.json()]
def prune(self, filters: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
def prune(self, filters: Optional[dict[str, str]] = None) -> dict[str, Any]:
"""Delete unused Pods.
Returns:
Dictionary Keys:
- PodsDeleted (List[str]): List of pod ids deleted.
- PodsDeleted (list[str]): List of pod ids deleted.
- SpaceReclaimed (int): Always zero.
Raises:
@ -98,7 +100,7 @@ class PodsManager(Manager):
response = self.client.post("/pods/prune", params={"filters": api.prepare_filters(filters)})
response.raise_for_status()
deleted: List[str] = []
deleted: builtins.list[str] = []
for item in response.json():
if item["Err"] is not None:
raise APIError(
@ -129,12 +131,14 @@ class PodsManager(Manager):
response = self.client.delete(f"/pods/{pod_id}", params={"force": force})
response.raise_for_status()
def stats(self, **kwargs) -> Union[List[Dict[str, Any]], Iterator[List[Dict[str, Any]]]]:
def stats(
self, **kwargs
) -> Union[builtins.list[dict[str, Any]], Iterator[builtins.list[dict[str, Any]]]]:
"""Resource usage statistics for the containers in pods.
Keyword Args:
all (bool): Provide statistics for all running pods.
name (Union[str, List[str]]): Pods to include in report.
name (Union[str, list[str]]): Pods to include in report.
stream (bool): Stream statistics until cancelled. Default: False.
decode (bool): If True, response will be decoded into dict. Default: False.

View File

@ -1,7 +1,8 @@
"""Module for tracking registry metadata."""
import logging
from typing import Any, Mapping, Optional, Union
from typing import Any, Optional, Union
from collections.abc import Mapping
from podman import api
from podman.domain.images import Image
@ -39,7 +40,7 @@ class RegistryData(PodmanResource):
Args:
platform: Platform for which to pull Image. Default: None (all platforms.)
"""
repository = api.parse_repository(self.image_name)
repository, _ = api.parse_repository(self.image_name)
return self.manager.pull(repository, tag=self.id, platform=platform)
def has_platform(self, platform: Union[str, Mapping[str, Any]]) -> bool:

View File

@ -1,7 +1,8 @@
"""Model and Manager for Secrets resources."""
from contextlib import suppress
from typing import Any, List, Mapping, Optional, Union
from typing import Any, Optional, Union
from collections.abc import Mapping
from podman.api import APIClient
from podman.domain.manager import Manager, PodmanResource
@ -75,11 +76,11 @@ class SecretsManager(Manager):
response.raise_for_status()
return self.prepare_model(attrs=response.json())
def list(self, **kwargs) -> List[Secret]:
def list(self, **kwargs) -> list[Secret]:
"""Report on Secrets.
Keyword Args:
filters (Dict[str, Any]): Ignored.
filters (dict[str, Any]): Ignored.
Raises:
APIError: when error returned by service

View File

@ -1,7 +1,7 @@
"""SystemManager to provide system level information from Podman service."""
import logging
from typing import Any, Dict, Optional
from typing import Any, Optional, Union
from podman.api.client import APIClient
from podman import api
@ -20,7 +20,7 @@ class SystemManager:
"""
self.client = client
def df(self) -> Dict[str, Any]: # pylint: disable=invalid-name
def df(self) -> dict[str, Any]: # pylint: disable=invalid-name
"""Disk usage by Podman resources.
Returns:
@ -30,21 +30,25 @@ class SystemManager:
response.raise_for_status()
return response.json()
def info(self, *_, **__) -> Dict[str, Any]:
def info(self, *_, **__) -> dict[str, Any]:
"""Returns information on Podman service."""
response = self.client.get("/info")
response.raise_for_status()
return response.json()
def login(
def login( # pylint: disable=too-many-arguments,too-many-positional-arguments,unused-argument
self,
username: str,
password: Optional[str] = None,
email: Optional[str] = None,
registry: Optional[str] = None,
reauth: Optional[bool] = False, # pylint: disable=unused-argument
dockercfg_path: Optional[str] = None, # pylint: disable=unused-argument
) -> Dict[str, Any]:
reauth: Optional[bool] = False,
dockercfg_path: Optional[str] = None,
auth: Optional[str] = None,
identitytoken: Optional[str] = None,
registrytoken: Optional[str] = None,
tls_verify: Optional[Union[bool, str]] = None,
) -> dict[str, Any]:
"""Log into Podman service.
Args:
@ -52,9 +56,14 @@ class SystemManager:
password: Registry plaintext password
email: Registry account email address
registry: URL for registry access. For example,
https://quay.io/v2
reauth: Ignored: If True, refresh existing authentication. Default: False
dockercfg_path: Ignored: Path to custom configuration file.
https://quay.io/v2
auth: TODO: Add description based on the source code of Podman.
identitytoken: IdentityToken is used to authenticate the user and
get an access token for the registry.
registrytoken: RegistryToken is a bearer token to be sent to a registry
tls_verify: Whether to verify TLS certificates.
"""
payload = {
@ -62,6 +71,9 @@ class SystemManager:
"password": password,
"email": email,
"serveraddress": registry,
"auth": auth,
"identitytoken": identitytoken,
"registrytoken": registrytoken,
}
payload = api.prepare_body(payload)
response = self.client.post(
@ -69,6 +81,7 @@ class SystemManager:
headers={"Content-type": "application/json"},
data=payload,
compatible=True,
verify=tls_verify, # Pass tls_verify to the client
)
response.raise_for_status()
return response.json()
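The login payload passes through `api.prepare_body` before posting; a hedged sketch of what that step roughly does (None-valued fields such as an unset `email` or `identitytoken` are dropped rather than sent as null):

```python
import json

def prepare_body_sketch(payload):
    # approximation of api.prepare_body: omit None-valued keys, JSON-encode
    return json.dumps({k: v for k, v in payload.items() if v is not None})

body = prepare_body_sketch({
    "username": "alice",
    "password": "s3cret",
    "email": None,          # unset fields are dropped
    "identitytoken": None,
})
print(body)  # {"username": "alice", "password": "s3cret"}
```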
@ -78,7 +91,7 @@ class SystemManager:
response = self.client.head("/_ping")
return response.ok
def version(self, **kwargs) -> Dict[str, Any]:
def version(self, **kwargs) -> dict[str, Any]:
"""Get version information from service.
Keyword Args:

View File

@ -1,12 +1,11 @@
"""Model and Manager for Volume resources."""
import logging
from typing import Any, Dict, List, Optional, Union
from typing import Any, Literal, Optional, Union
import requests
from podman import api
from podman.api import Literal
from podman.domain.manager import Manager, PodmanResource
from podman.errors import APIError
@ -36,6 +35,23 @@ class Volume(PodmanResource):
"""
self.manager.remove(self.name, force=force)
def inspect(self, **kwargs) -> dict:
"""Inspect this volume
Keyword Args:
tls_verify (bool) - Require TLS verification. Default: True.
Returns:
Display attributes of volume.
Raises:
APIError: when service reports an error
"""
params = {"tlsVerify": kwargs.get("tls_verify", True)}
response = self.client.get(f"/volumes/{self.id}/json", params=params)
response.raise_for_status()
return response.json()
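The `tls_verify` keyword maps onto the `tlsVerify` query parameter with a default of True; a tiny sketch of that mapping in isolation (helper name is made up):

```python
def build_inspect_params(**kwargs):
    # mirrors the params dict built inside Volume.inspect()
    return {"tlsVerify": kwargs.get("tls_verify", True)}

print(build_inspect_params())                  # {'tlsVerify': True}
print(build_inspect_params(tls_verify=False))  # {'tlsVerify': False}
```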
class VolumesManager(Manager):
"""Specialized Manager for Volume resources."""
@ -53,8 +69,8 @@ class VolumesManager(Manager):
Keyword Args:
driver (str): Volume driver to use
driver_opts (Dict[str, str]): Options to use with driver
labels (Dict[str, str]): Labels to apply to volume
driver_opts (dict[str, str]): Options to use with driver
labels (dict[str, str]): Labels to apply to volume
Raises:
APIError: when service reports error
@ -92,14 +108,14 @@ class VolumesManager(Manager):
response.raise_for_status()
return self.prepare_model(attrs=response.json())
def list(self, *_, **kwargs) -> List[Volume]:
def list(self, *_, **kwargs) -> list[Volume]:
"""Report on volumes.
Keyword Args:
filters (Dict[str, str]): criteria to filter Volume list
filters (dict[str, str]): criteria to filter Volume list
- driver (str): filter volumes by their driver
- label (Dict[str, str]): filter by label and/or value
- label (dict[str, str]): filter by label and/or value
- name (str): filter by volume's name
"""
filters = api.prepare_filters(kwargs.get("filters"))
@ -112,8 +128,9 @@ class VolumesManager(Manager):
return [self.prepare_model(i) for i in response.json()]
def prune(
self, filters: Optional[Dict[str, str]] = None # pylint: disable=unused-argument
) -> Dict[Literal["VolumesDeleted", "SpaceReclaimed"], Any]:
self,
filters: Optional[dict[str, str]] = None, # pylint: disable=unused-argument
) -> dict[Literal["VolumesDeleted", "SpaceReclaimed"], Any]:
"""Delete unused volumes.
Args:
@ -126,7 +143,7 @@ class VolumesManager(Manager):
data = response.json()
response.raise_for_status()
volumes: List[str] = []
volumes: list[str] = []
space_reclaimed = 0
for item in data:
if "Err" in item:

View File

@ -21,6 +21,7 @@ __all__ = [
'NotFound',
'NotFoundError',
'PodmanError',
'StreamParseError',
]
try:
@ -32,6 +33,7 @@ try:
InvalidArgument,
NotFound,
PodmanError,
StreamParseError,
)
except ImportError:
pass
@ -46,7 +48,9 @@ class NotFoundError(HTTPException):
def __init__(self, message, response=None):
super().__init__(message)
self.response = response
warnings.warn("APIConnection() and supporting classes.", PendingDeprecationWarning)
warnings.warn(
"APIConnection() and supporting classes.", PendingDeprecationWarning, stacklevel=2
)
# If found, use new ImageNotFound otherwise old class
@ -54,7 +58,7 @@ try:
from .exceptions import ImageNotFound
except ImportError:
class ImageNotFound(NotFoundError):
class ImageNotFound(NotFoundError): # type: ignore[no-redef]
"""HTTP request returned a http.HTTPStatus.NOT_FOUND.
Specialized for Image not found. Deprecated.
@ -98,7 +102,9 @@ class RequestError(HTTPException):
def __init__(self, message, response=None):
super().__init__(message)
self.response = response
warnings.warn("APIConnection() and supporting classes.", PendingDeprecationWarning)
warnings.warn(
"APIConnection() and supporting classes.", PendingDeprecationWarning, stacklevel=2
)
class InternalServerError(HTTPException):
@ -110,4 +116,6 @@ class InternalServerError(HTTPException):
def __init__(self, message, response=None):
super().__init__(message)
self.response = response
warnings.warn("APIConnection() and supporting classes.", PendingDeprecationWarning)
warnings.warn(
"APIConnection() and supporting classes.", PendingDeprecationWarning, stacklevel=2
)

View File

@ -1,6 +1,7 @@
"""Podman API Errors."""
from typing import Iterable, List, Optional, Union, TYPE_CHECKING
from typing import Optional, Union, TYPE_CHECKING
from collections.abc import Iterable
from requests import Response
from requests.exceptions import HTTPError
@ -112,10 +113,10 @@ class ContainerError(PodmanError):
self,
container: "Container",
exit_status: int,
command: Union[str, List[str]],
command: Union[str, list[str]],
image: str,
stderr: Optional[Iterable[str]] = None,
):
): # pylint: disable=too-many-positional-arguments
"""Initialize ContainerError.
Args:
@ -142,3 +143,8 @@ class ContainerError(PodmanError):
class InvalidArgument(PodmanError):
"""Parameter to method/function was not valid."""
class StreamParseError(RuntimeError):
def __init__(self, reason):
self.msg = reason

podman/py.typed Normal file
View File

View File

@ -7,4 +7,3 @@
## Coverage Reporting Framework
`coverage.py` see https://coverage.readthedocs.io/en/coverage-5.0.3/#quick-start

View File

@ -3,5 +3,5 @@
# Do not auto-update these from version.py,
# as test code should be changed to reflect changes in Podman API versions
BASE_SOCK = "unix:///run/api.sock"
LIBPOD_URL = "http://%2Frun%2Fapi.sock/v4.8.0/libpod"
LIBPOD_URL = "http://%2Frun%2Fapi.sock/v5.6.0/libpod"
COMPATIBLE_URL = "http://%2Frun%2Fapi.sock/v1.40"
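The `%2F` sequences in these URLs are the percent-encoded UNIX socket path; `urllib.parse.quote_plus` reproduces them:

```python
import urllib.parse

sock = "/run/api.sock"
print(urllib.parse.quote_plus(sock))  # %2Frun%2Fapi.sock
# LIBPOD_URL then has the form f"http://{urllib.parse.quote_plus(sock)}/v5.6.0/libpod"
```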

podman/tests/conftest.py Normal file
View File

@ -0,0 +1,21 @@
import pytest
def pytest_addoption(parser):
parser.addoption(
"--pnext", action="store_true", default=False, help="run tests against podman_next copr"
)
def pytest_configure(config):
config.addinivalue_line("markers", "pnext: mark test as run against podman_next")
def pytest_collection_modifyitems(config, items):
if config.getoption("--pnext"):
# --pnext given in cli: run tests marked as pnext
return
podman_next = pytest.mark.skip(reason="need --pnext option to run")
for item in items:
if "pnext" in item.keywords:
item.add_marker(podman_next)
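The hook's selection logic can be seen with plain data: without `--pnext`, every collected test carrying the `pnext` keyword is skipped; with the flag, nothing is. The helper and item tuples below are illustrative, not pytest API:

```python
def pnext_skips(items, pnext_enabled):
    # mirrors pytest_collection_modifyitems: when --pnext is absent,
    # tests marked "pnext" receive a skip marker
    if pnext_enabled:
        return []
    return [name for name, keywords in items if "pnext" in keywords]

items = [("test_old", set()), ("test_new_feature", {"pnext"})]
print(pnext_skips(items, pnext_enabled=False))  # ['test_new_feature']
print(pnext_skips(items, pnext_enabled=True))   # []
```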

View File

@ -13,6 +13,7 @@
# under the License.
#
"""Base integration test code"""
import logging
import os
import shutil

View File

@ -39,10 +39,10 @@ class AdapterIntegrationTest(base.IntegrationTest):
podman.start(check_socket=False)
time.sleep(0.5)
with PodmanClient(base_url=f"tcp:localhost:8889") as client:
with PodmanClient(base_url="tcp:localhost:8889") as client:
self.assertTrue(client.ping())
with PodmanClient(base_url=f"http://localhost:8889") as client:
with PodmanClient(base_url="http://localhost:8889") as client:
self.assertTrue(client.ping())
finally:
podman.stop()

View File

@ -1,9 +1,11 @@
import unittest
import re
import os
import pytest
import podman.tests.integration.base as base
from podman import PodmanClient
from podman.tests.utils import PODMAN_VERSION
# @unittest.skipIf(os.geteuid() != 0, 'Skipping, not running as root')
@ -20,11 +22,11 @@ class ContainersIntegrationTest(base.IntegrationTest):
self.alpine_image = self.client.images.pull("quay.io/libpod/alpine", tag="latest")
self.containers = []
def tearUp(self):
def tearDown(self):
for container in self.containers:
container.remove(force=True)
def test_container_volume_mount(self):
def test_container_named_volume_mount(self):
with self.subTest("Check volume mount"):
volumes = {
'test_bind_1': {'bind': '/mnt/vol1', 'mode': 'rw'},
@ -52,6 +54,33 @@ class ContainersIntegrationTest(base.IntegrationTest):
for o in other_options:
self.assertIn(o, mount.get('Options'))
def test_container_directory_volume_mount(self):
"""Test that directories can be mounted with the ``volume`` parameter."""
with self.subTest("Check bind mount"):
volumes = {
"/etc/hosts": dict(bind="/test_ro", mode='ro'),
"/etc/hosts": dict(bind="/test_rw", mode='rw'), # noqa: F601
}
container = self.client.containers.create(
self.alpine_image, command=["cat", "/test_ro", "/test_rw"], volumes=volumes
)
container_mounts = container.attrs.get('Mounts', {})
self.assertEqual(len(container_mounts), len(volumes))
self.containers.append(container)
for directory, mount_spec in volumes.items():
self.assertIn(
f"{directory}:{mount_spec['bind']}:{mount_spec['mode']},rprivate,rbind",
container.attrs.get('HostConfig', {}).get('Binds', list()),
)
# check if container can be started and exits with EC == 0
container.start()
container.wait()
self.assertEqual(container.attrs.get('State', dict()).get('ExitCode', 256), 0)
def test_container_extra_hosts(self):
"""Test Container Extra hosts"""
extra_hosts = {"host1 host3": "127.0.0.2", "host2": "127.0.0.3"}
@ -75,6 +104,44 @@ class ContainersIntegrationTest(base.IntegrationTest):
for hosts_entry in formatted_hosts:
self.assertIn(hosts_entry, logs)
def test_container_environment_variables(self):
"""Test environment variables passed to the container."""
with self.subTest("Check environment variables as dictionary"):
env_dict = {"MY_VAR": "123", "ANOTHER_VAR": "456"}
container = self.client.containers.create(
self.alpine_image, command=["env"], environment=env_dict
)
self.containers.append(container)
container_env = container.attrs.get('Config', {}).get('Env', [])
for key, value in env_dict.items():
self.assertIn(f"{key}={value}", container_env)
container.start()
container.wait()
logs = b"\n".join(container.logs()).decode()
for key, value in env_dict.items():
self.assertIn(f"{key}={value}", logs)
with self.subTest("Check environment variables as list"):
env_list = ["MY_VAR=123", "ANOTHER_VAR=456"]
container = self.client.containers.create(
self.alpine_image, command=["env"], environment=env_list
)
self.containers.append(container)
container_env = container.attrs.get('Config', {}).get('Env', [])
for env in env_list:
self.assertIn(env, container_env)
container.start()
container.wait()
logs = b"\n".join(container.logs()).decode()
for env in env_list:
self.assertIn(env, logs)
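Both subtests exercise the same equivalence: a dict of environment variables corresponds to the `KEY=value` list form. A hypothetical one-liner mirrors that mapping:

```python
def env_to_list(env):
    # dict form {"MY_VAR": "123"} <-> list form ["MY_VAR=123"]
    return [f"{key}={value}" for key, value in env.items()]

print(env_to_list({"MY_VAR": "123", "ANOTHER_VAR": "456"}))
# ['MY_VAR=123', 'ANOTHER_VAR=456']
```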
def _test_memory_limit(self, parameter_name, host_config_name, set_mem_limit=False):
"""Base for tests which checks memory limits"""
memory_limit_tests = [
@ -142,6 +209,16 @@ class ContainersIntegrationTest(base.IntegrationTest):
'1223/tcp': [{'HostIp': '', 'HostPort': '1235'}],
},
},
{
'input': {
2244: 3344,
},
'expected_output': {
'2244/tcp': [
{'HostIp': '', 'HostPort': '3344'},
],
},
},
]
for port_test in port_tests:
@ -149,10 +226,32 @@ class ContainersIntegrationTest(base.IntegrationTest):
self.containers.append(container)
self.assertTrue(
all([
x in port_test['expected_output']
for x in container.attrs.get('HostConfig', {}).get('PortBindings')
])
all(
[
x in port_test['expected_output']
for x in container.attrs.get('HostConfig', {}).get('PortBindings')
]
)
)
def test_container_dns_option(self):
expected_dns_opt = ['edns0']
container = self.client.containers.create(
self.alpine_image, command=["cat", "/etc/resolv.conf"], dns_opt=expected_dns_opt
)
self.containers.append(container)
with self.subTest("Check HostConfig"):
self.assertEqual(
container.attrs.get('HostConfig', {}).get('DnsOptions'), expected_dns_opt
)
with self.subTest("Check content of /etc/resolv.conf"):
container.start()
container.wait()
self.assertTrue(
all([opt in b"\n".join(container.logs()).decode() for opt in expected_dns_opt])
)
def test_container_healthchecks(self):
@ -180,6 +279,11 @@ class ContainersIntegrationTest(base.IntegrationTest):
"""Test passing shared memory size"""
self._test_memory_limit('shm_size', 'ShmSize')
@pytest.mark.skipif(os.geteuid() != 0, reason='Skipping, not running as root')
@pytest.mark.skipif(
PODMAN_VERSION >= (5, 6, 0),
reason="Test against this feature in Podman 5.6.0 or greater https://github.com/containers/podman/pull/25942",
)
def test_container_mounts(self):
"""Test passing mounts"""
with self.subTest("Check bind mount"):
@ -229,6 +333,70 @@ class ContainersIntegrationTest(base.IntegrationTest):
)
)
with self.subTest("Check uppercase mount option attributes"):
mount = {
"TypE": "bind",
"SouRce": "/etc/hosts",
"TarGet": "/test",
"Read_Only": True,
"ReLabel": "Z",
}
container = self.client.containers.create(
self.alpine_image, command=["cat", "/test"], mounts=[mount]
)
self.containers.append(container)
self.assertIn(
f"{mount['SouRce']}:{mount['TarGet']}:ro,Z,rprivate,rbind",
container.attrs.get('HostConfig', {}).get('Binds', list()),
)
# check if container can be started and exits with EC == 0
container.start()
container.wait()
self.assertEqual(container.attrs.get('State', dict()).get('ExitCode', 256), 0)
@pytest.mark.skipif(os.geteuid() != 0, reason='Skipping, not running as root')
@pytest.mark.skipif(
PODMAN_VERSION < (5, 6, 0),
reason="Test against this feature before Podman 5.6.0 https://github.com/containers/podman/pull/25942",
)
def test_container_mounts_without_rw_as_default(self):
"""Test passing mounts"""
with self.subTest("Check bind mount"):
mount = {
"type": "bind",
"source": "/etc/hosts",
"target": "/test",
"read_only": True,
"relabel": "Z",
}
container = self.client.containers.create(
self.alpine_image, command=["cat", "/test"], mounts=[mount]
)
self.containers.append(container)
self.assertIn(
f"{mount['source']}:{mount['target']}:ro,Z,rprivate,rbind",
container.attrs.get('HostConfig', {}).get('Binds', list()),
)
# check if container can be started and exits with EC == 0
container.start()
container.wait()
self.assertEqual(container.attrs.get('State', dict()).get('ExitCode', 256), 0)
with self.subTest("Check tmpfs mount"):
mount = {"type": "tmpfs", "source": "tmpfs", "target": "/test", "size": "456k"}
container = self.client.containers.create(
self.alpine_image, command=["df", "-h"], mounts=[mount]
)
self.containers.append(container)
self.assertEqual(
container.attrs.get('HostConfig', {}).get('Tmpfs', {}).get(mount['target']),
f"size={mount['size']},rprivate,nosuid,nodev,tmpcopyup",
)
def test_container_devices(self):
devices = ["/dev/null:/dev/foo", "/dev/zero:/dev/bar"]
container = self.client.containers.create(
@ -241,11 +409,13 @@ class ContainersIntegrationTest(base.IntegrationTest):
for device in devices:
path_on_host, path_in_container = device.split(':', 1)
self.assertTrue(
any(
[
c.get('PathOnHost') == path_on_host
and c.get('PathInContainer') == path_in_container
for c in container_devices
]
)
)
with self.subTest("Check devices in running container object"):


@ -0,0 +1,122 @@
import podman.tests.integration.base as base
from podman import PodmanClient
# @unittest.skipIf(os.geteuid() != 0, 'Skipping, not running as root')
class ContainersExecIntegrationTests(base.IntegrationTest):
"""Containers integration tests for exec"""
def setUp(self):
super().setUp()
self.client = PodmanClient(base_url=self.socket_uri)
self.addCleanup(self.client.close)
self.alpine_image = self.client.images.pull("quay.io/libpod/alpine", tag="latest")
self.containers = []
def tearDown(self):
for container in self.containers:
container.remove(force=True)
def test_container_exec_run(self):
"""Test any command that will return code 0 and no output"""
container = self.client.containers.create(self.alpine_image, command=["top"], detach=True)
container.start()
error_code, stdout = container.exec_run("echo hello")
self.assertEqual(error_code, 0)
self.assertEqual(stdout, b'\x01\x00\x00\x00\x00\x00\x00\x06hello\n')
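The raw bytes asserted above carry Docker's 8-byte stream-multiplexing header: byte 0 is the stream id (1 = stdout, 2 = stderr), bytes 4-7 are the big-endian payload length. A small parser makes the fixture readable (a sketch of the framing; `exec_run(demux=True)` does the equivalent internally):

```python
import struct

def demux_frame(data):
    """Split one multiplexed frame into (stream_id, payload)."""
    # ">BxxxI": stream-id byte, 3 padding bytes, big-endian 32-bit length.
    stream_id, length = struct.unpack(">BxxxI", data[:8])
    return stream_id, data[8:8 + length]
```

For example, `demux_frame(b'\x01\x00\x00\x00\x00\x00\x00\x06hello\n')` returns `(1, b'hello\n')`: six bytes of stdout.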
def test_container_exec_run_errorcode(self):
"""Test a failing command with stdout and stderr in a single bytestring"""
container = self.client.containers.create(self.alpine_image, command=["top"], detach=True)
container.start()
error_code, output = container.exec_run("ls nonexistent")
self.assertEqual(error_code, 1)
self.assertEqual(
output, b"\x02\x00\x00\x00\x00\x00\x00+ls: nonexistent: No such file or directory\n"
)
def test_container_exec_run_demux(self):
"""Test a failing command with stdout and stderr in a bytestring tuple"""
container = self.client.containers.create(self.alpine_image, command=["top"], detach=True)
container.start()
error_code, output = container.exec_run("ls nonexistent", demux=True)
self.assertEqual(error_code, 1)
self.assertEqual(output[0], None)
self.assertEqual(output[1], b"ls: nonexistent: No such file or directory\n")
def test_container_exec_run_stream(self):
"""Test streaming the output from a long running command."""
container = self.client.containers.create(self.alpine_image, command=["top"], detach=True)
container.start()
command = [
'/bin/sh',
'-c',
'echo 0 ; sleep .1 ; echo 1 ; sleep .1 ; echo 2 ; sleep .1 ;',
]
error_code, output = container.exec_run(command, stream=True)
self.assertEqual(error_code, None)
self.assertEqual(
list(output),
[
b'0\n',
b'1\n',
b'2\n',
],
)
def test_container_exec_run_stream_demux(self):
"""Test streaming the output from a long running command with demux enabled."""
container = self.client.containers.create(self.alpine_image, command=["top"], detach=True)
container.start()
command = [
'/bin/sh',
'-c',
'echo 0 ; >&2 echo 1 ; sleep .1 ; '
+ 'echo 2 ; >&2 echo 3 ; sleep .1 ; '
+ 'echo 4 ; >&2 echo 5 ; sleep .1 ;',
]
error_code, output = container.exec_run(command, stream=True, demux=True)
self.assertEqual(error_code, None)
self.assertEqual(
list(output),
[
(b'0\n', None),
(None, b'1\n'),
(b'2\n', None),
(None, b'3\n'),
(b'4\n', None),
(None, b'5\n'),
],
)
def test_container_exec_run_stream_detach(self):
"""Test streaming the output from a long running command with detach enabled."""
container = self.client.containers.create(self.alpine_image, command=["top"], detach=True)
container.start()
command = [
'/bin/sh',
'-c',
'echo 0 ; sleep .1 ; echo 1 ; sleep .1 ; echo 2 ; sleep .1 ;',
]
error_code, output = container.exec_run(command, stream=True, detach=True)
# Detach should make the ``exec_run`` ignore the ``stream`` flag so we will
# assert against the standard, non-streaming behavior.
self.assertEqual(error_code, 0)
# The endpoint should return immediately, before we are able to actually
# get any of the output.
self.assertEqual(
output,
b'\n',
)


@ -1,14 +1,15 @@
import io
import random
import tarfile
import tempfile
import unittest
try:
# Python >= 3.10
from collections.abc import Iterator
except ImportError:
# Python < 3.10
from collections.abc import Iterator
import podman.tests.integration.base as base
from podman import PodmanClient
@ -16,7 +17,6 @@ from podman.domain.containers import Container
from podman.domain.images import Image
from podman.errors import NotFound
# @unittest.skipIf(os.geteuid() != 0, 'Skipping, not running as root')
@ -42,7 +42,9 @@ class ContainersIntegrationTest(base.IntegrationTest):
with self.subTest("Create from Alpine Image"):
container = self.client.containers.create(
self.alpine_image,
command=["echo", random_string],
ports={'2222/tcp': 3333, 2244: 3344},
)
self.assertIsInstance(container, Container)
self.assertGreater(len(container.attrs), 0)
@ -62,6 +64,10 @@ class ContainersIntegrationTest(base.IntegrationTest):
self.assertEqual(
"3333", container.attrs["NetworkSettings"]["Ports"]["2222/tcp"][0]["HostPort"]
)
self.assertIn("2244/tcp", container.attrs["NetworkSettings"]["Ports"])
self.assertEqual(
"3344", container.attrs["NetworkSettings"]["Ports"]["2244/tcp"][0]["HostPort"]
)
file_contents = b"This is an integration test for archive."
file_buffer = io.BytesIO(file_contents)
@ -136,6 +142,24 @@ class ContainersIntegrationTest(base.IntegrationTest):
top_ctnr.reload()
self.assertIn(top_ctnr.status, ("exited", "stopped"))
with self.subTest("Create-Init-Start Container"):
top_ctnr = self.client.containers.create(
self.alpine_image, ["/usr/bin/top"], name="TestInitPs", detach=True
)
self.assertEqual(top_ctnr.status, "created")
top_ctnr.init()
top_ctnr.reload()
self.assertEqual(top_ctnr.status, "initialized")
top_ctnr.start()
top_ctnr.reload()
self.assertEqual(top_ctnr.status, "running")
top_ctnr.stop()
top_ctnr.reload()
self.assertIn(top_ctnr.status, ("exited", "stopped"))
with self.subTest("Prune Containers"):
report = self.client.containers.prune()
self.assertIn(top_ctnr.id, report["ContainersDeleted"])
@ -158,6 +182,93 @@ class ContainersIntegrationTest(base.IntegrationTest):
self.assertIn("localhost/busybox.local:unittest", image.attrs["RepoTags"])
busybox.remove(force=True)
def test_container_rm_anonymous_volume(self):
with self.subTest("Check anonymous volume is removed"):
container_file = """
FROM alpine
VOLUME myvol
ENV foo=bar
"""
tmp_file = tempfile.mktemp()
file = open(tmp_file, 'w')
file.write(container_file)
file.close()
self.client.images.build(dockerfile=tmp_file, tag="test-img", path=".")
# get existing number of containers and volumes
existing_containers = self.client.containers.list(all=True)
existing_volumes = self.client.volumes.list()
container = self.client.containers.create("test-img")
container_list = self.client.containers.list(all=True)
self.assertEqual(len(container_list), len(existing_containers) + 1)
volume_list = self.client.volumes.list()
self.assertEqual(len(volume_list), len(existing_volumes) + 1)
# remove the container with v=True
container.remove(v=True)
container_list = self.client.containers.list(all=True)
self.assertEqual(len(container_list), len(existing_containers))
volume_list = self.client.volumes.list()
self.assertEqual(len(volume_list), len(existing_volumes))
def test_container_labels(self):
labels = {'label1': 'value1', 'label2': 'value2'}
labeled_container = self.client.containers.create(self.alpine_image, labels=labels)
unlabeled_container = self.client.containers.create(
self.alpine_image,
)
# inspect and list have 2 different schemas so we need to verify that we can
# successfully retrieve the labels on both
try:
# inspect schema
self.assertEqual(labeled_container.labels, labels)
self.assertEqual(unlabeled_container.labels, {})
# list schema
for container in self.client.containers.list(all=True):
if container.id == labeled_container.id:
self.assertEqual(container.labels, labels)
elif container.id == unlabeled_container.id:
self.assertEqual(container.labels, {})
finally:
labeled_container.remove(v=True)
unlabeled_container.remove(v=True)
def test_container_update(self):
"""Update container"""
to_update_container = self.client.containers.run(
self.alpine_image, name="to_update_container", detach=True
)
with self.subTest("Test container update changing the healthcheck"):
to_update_container.update(health_cmd="ls")
self.assertEqual(
to_update_container.inspect()['Config']['Healthcheck']['Test'], ['CMD-SHELL', 'ls']
)
with self.subTest("Test container update disabling the healthcheck"):
to_update_container.update(no_healthcheck=True)
self.assertEqual(
to_update_container.inspect()['Config']['Healthcheck']['Test'], ['NONE']
)
with self.subTest("Test container update passing payload and data"):
to_update_container.update(
restart_policy="always", health_cmd="echo", health_timeout="10s"
)
self.assertEqual(
to_update_container.inspect()['Config']['Healthcheck']['Test'],
['CMD-SHELL', 'echo'],
)
self.assertEqual(
to_update_container.inspect()['Config']['Healthcheck']['Timeout'], 10000000000
)
self.assertEqual(
to_update_container.inspect()['HostConfig']['RestartPolicy']['Name'], 'always'
)
to_update_container.remove(v=True)
if __name__ == '__main__':
unittest.main()


@ -13,19 +13,17 @@
# under the License.
#
"""Images integration tests."""
import io
import queue
import platform
import tarfile
import threading
import types
import unittest
from contextlib import suppress
from datetime import datetime, timedelta
import podman.tests.integration.base as base
from podman import PodmanClient
from podman.domain.images import Image
from podman.errors import APIError, ImageNotFound, PodmanError
# @unittest.skipIf(os.geteuid() != 0, 'Skipping, not running as root')
@ -44,7 +42,7 @@ class ImagesIntegrationTest(base.IntegrationTest):
"""Test Image CRUD.
Notes:
Written to maximize reuse of pulled image.
"""
with self.subTest("Pull Alpine Image"):
@ -109,31 +107,89 @@ class ImagesIntegrationTest(base.IntegrationTest):
self.assertIn(image.id, deleted)
self.assertGreater(actual["SpaceReclaimed"], 0)
with self.subTest("Export Image to tarball (in memory) with named mode"):
alpine_image = self.client.images.pull("quay.io/libpod/alpine", tag="latest")
image_buffer = io.BytesIO()
for chunk in alpine_image.save(named=True):
image_buffer.write(chunk)
image_buffer.seek(0, 0)
with tarfile.open(fileobj=image_buffer, mode="r") as tar:
items_in_tar = tar.getnames()
# Check if repositories file is available in the tarball
self.assertIn("repositories", items_in_tar)
# Extract the 'repositories' file
repositories_file = tar.extractfile("repositories")
if repositories_file is not None:
# Check the content of the "repositories" file.
repositories_content = repositories_file.read().decode("utf-8")
# Check if "repositories" file contains the name of the Image (named).
self.assertTrue("alpine" in str(repositories_content))
def test_search(self):
# N/B: This is an infrequently used feature, that tends to flake a lot.
# Just check that it doesn't throw an exception and move on.
self.client.images.search("alpine")
@unittest.skip("Needs Podman 3.1.0")
def test_corrupt_load(self):
with self.assertRaises(APIError) as e:
next(self.client.images.load(b"This is a corrupt tarball"))
self.assertIn("payload does not match", e.exception.explanation)
def test_build(self):
buffer = io.StringIO("""FROM quay.io/libpod/alpine_labels:latest""")
image, stream = self.client.images.build(fileobj=buffer)
self.assertIsNotNone(image)
self.assertIsNotNone(image.id)
def test_build_with_context(self):
context = io.BytesIO()
with tarfile.open(fileobj=context, mode="w") as tar:
def add_file(name: str, content: str):
binary_content = content.encode("utf-8")
fileobj = io.BytesIO(binary_content)
tarinfo = tarfile.TarInfo(name=name)
tarinfo.size = len(binary_content)
tar.addfile(tarinfo, fileobj)
# Use a non-standard Dockerfile name to test the 'dockerfile' argument
add_file(
"MyDockerfile", ("FROM quay.io/libpod/alpine_labels:latest\nCOPY example.txt .\n")
)
add_file("example.txt", "This is an example file.\n")
# Rewind to the start of the generated file so we can read it
context.seek(0)
with self.assertRaises(PodmanError):
# If requesting a custom context, must provide the context as `fileobj`
self.client.images.build(custom_context=True, path='invalid')
with self.assertRaises(PodmanError):
# If requesting a custom context, currently must specify the dockerfile name
self.client.images.build(custom_context=True, fileobj=context)
image, stream = self.client.images.build(
fileobj=context,
dockerfile="MyDockerfile",
custom_context=True,
)
self.assertIsNotNone(image)
self.assertIsNotNone(image.id)
@unittest.skipIf(platform.architecture()[0] == "32bit", "no 32-bit image available")
def test_pull_stream(self):
generator = self.client.images.pull("ubi8", tag="latest", stream=True)
self.assertIsInstance(generator, types.GeneratorType)
@unittest.skipIf(platform.architecture()[0] == "32bit", "no 32-bit image available")
def test_pull_stream_decode(self):
generator = self.client.images.pull("ubi8", tag="latest", stream=True, decode=True)
self.assertIsInstance(generator, types.GeneratorType)
def test_scp(self):
with self.assertRaises(APIError) as e:
next(


@ -13,7 +13,7 @@
# under the License.
#
"""Network integration tests."""
import os
import random
import unittest
from contextlib import suppress


@ -62,7 +62,5 @@ class SystemIntegrationTest(base.IntegrationTest):
)
def test_from_env(self):
"""integration: from_env() no error"""
PodmanClient.from_env()


@ -13,13 +13,14 @@
# under the License.
#
"""Integration Test Utils"""
import logging
import os
import shutil
import subprocess
import threading
from contextlib import suppress
from typing import Optional
import time
@ -49,10 +50,10 @@ class PodmanLauncher:
self.socket_file: str = socket_uri.replace('unix://', '')
self.log_level = log_level
self.proc: Optional[subprocess.Popen[bytes]] = None
self.reference_id = hash(time.monotonic())
self.cmd: list[str] = []
if privileged:
self.cmd.append('sudo')
@ -66,12 +67,14 @@ class PodmanLauncher:
if os.environ.get("container") == "oci":
self.cmd.append("--storage-driver=vfs")
self.cmd.extend(
[
"system",
"service",
f"--time={timeout}",
socket_uri,
]
)
process = subprocess.run(
[podman_exe, "--version"], check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
@ -95,9 +98,7 @@ class PodmanLauncher:
def consume(line: str):
logger.debug(line.strip("\n") + f" refid={self.reference_id}")
self.proc = subprocess.Popen(self.cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) # pylint: disable=consider-using-with
threading.Thread(target=consume_lines, args=[self.proc.stdout, consume]).start()
if not check_socket:


@ -3,7 +3,7 @@ import pathlib
import unittest
from typing import Any, Optional
from unittest import mock
from unittest.mock import mock_open, patch
from dataclasses import dataclass
@ -11,7 +11,7 @@ from podman import api
class TestUtilsCase(unittest.TestCase):
def test_format_filters(self) -> None:
@dataclass
class TestCase:
name: str
@ -22,10 +22,10 @@ class TestUtilsCase(unittest.TestCase):
TestCase(name="empty str", input="", expected=None),
TestCase(name="str", input="reference=fedora", expected='{"reference": ["fedora"]}'),
TestCase(
name="list[str]", input=["reference=fedora"], expected='{"reference": ["fedora"]}'
),
TestCase(
name="dict[str,str]",
input={"reference": "fedora"},
expected='{"reference": ["fedora"]}',
),
@ -42,12 +42,12 @@ class TestUtilsCase(unittest.TestCase):
if actual is not None:
self.assertIsInstance(actual, str)
def test_containerignore_404(self) -> None:
actual = api.prepare_containerignore("/does/not/exists")
self.assertListEqual([], actual)
@patch.object(pathlib.Path, "exists", return_value=True)
def test_containerignore_read(self, patch_exists) -> None:
data = r"""# unittest
#Ignore the logs directory
@ -74,7 +74,7 @@ class TestUtilsCase(unittest.TestCase):
patch_exists.assert_called_once_with()
@patch.object(pathlib.Path, "exists", return_value=True)
def test_containerignore_empty(self, patch_exists) -> None:
data = r"""# unittest
"""
@ -86,21 +86,21 @@ class TestUtilsCase(unittest.TestCase):
patch_exists.assert_called_once_with()
@mock.patch("pathlib.Path.parent", autospec=True)
def test_containerfile_1(self, mock_parent) -> None:
mock_parent.samefile.return_value = True
actual = api.prepare_containerfile("/work", "/work/Dockerfile")
self.assertEqual(actual, "Dockerfile")
mock_parent.samefile.assert_called()
@mock.patch("pathlib.Path.parent", autospec=True)
def test_containerfile_2(self, mock_parent) -> None:
mock_parent.samefile.return_value = True
actual = api.prepare_containerfile(".", "Dockerfile")
self.assertEqual(actual, "Dockerfile")
mock_parent.samefile.assert_called()
@mock.patch("shutil.copy2")
def test_containerfile_copy(self, mock_copy) -> None:
mock_copy.return_value = None
with mock.patch.object(pathlib.Path, "parent") as mock_parent:
@ -109,7 +109,7 @@ class TestUtilsCase(unittest.TestCase):
actual = api.prepare_containerfile("/work", "/home/Dockerfile")
self.assertRegex(actual, r"\.containerfile\..*")
def test_prepare_body_all_types(self) -> None:
payload = {
"String": "string",
"Integer": 42,
@ -121,7 +121,7 @@ class TestUtilsCase(unittest.TestCase):
actual = api.prepare_body(payload)
self.assertEqual(actual, json.dumps(payload, sort_keys=True))
def test_prepare_body_none(self) -> None:
payload = {
"String": "",
"Integer": None,
@ -133,8 +133,8 @@ class TestUtilsCase(unittest.TestCase):
actual = api.prepare_body(payload)
self.assertEqual(actual, '{"Boolean": false}')
def test_prepare_body_embedded(self) -> None:
payload: dict[str, Any] = {
"String": "",
"Integer": None,
"Boolean": False,
@ -154,7 +154,7 @@ class TestUtilsCase(unittest.TestCase):
self.assertDictEqual(actual_dict["Dictionary"], payload["Dictionary"])
self.assertEqual(set(actual_dict["Set1"]), {"item1", "item2"})
def test_prepare_body_dict_empty_string(self) -> None:
payload = {"Dictionary": {"key1": "", "key2": {"key3": ""}, "key4": [], "key5": {}}}
actual = api.prepare_body(payload)
@ -164,6 +164,15 @@ class TestUtilsCase(unittest.TestCase):
self.assertDictEqual(payload, actual_dict)
def test_encode_auth_header(self):
auth_config = {
"username": "user",
"password": "pass",
}
expected = b"eyJ1c2VybmFtZSI6ICJ1c2VyIiwgInBhc3N3b3JkIjogInBhc3MifQ=="
actual = api.encode_auth_header(auth_config)
self.assertEqual(expected, actual)
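The expected fixture above is just the auth dict serialized to JSON and URL-safe base64-encoded (the `X-Registry-Auth` header convention). Assuming that is all the helper does, the value can be reproduced with the standard library:

```python
import base64
import json

def encode_auth(auth_config):
    """JSON-serialize an auth config and URL-safe base64-encode it."""
    return base64.urlsafe_b64encode(json.dumps(auth_config).encode("utf-8"))
```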
if __name__ == '__main__':
unittest.main()


@ -5,9 +5,9 @@ import unittest
try:
# Python >= 3.10
from collections.abc import Iterable
except ImportError:
# Python < 3.10
from collections.abc import Iterable
from unittest.mock import patch
import requests_mock
@ -61,8 +61,7 @@ class TestBuildCase(unittest.TestCase):
with requests_mock.Mocker() as mock:
mock.post(
tests.LIBPOD_URL + "/build"
"?t=latest"
"&buildargs=%7B%22BUILD_DATE%22%3A+%22January+1%2C+1970%22%7D"
"&cpuperiod=10"


@ -1,14 +1,40 @@
import unittest
import urllib.parse
import json
import os
import tempfile
from pathlib import Path
from unittest import mock
from unittest.mock import MagicMock
from podman.domain.config import PodmanConfig
class PodmanConfigTestCaseDefault(unittest.TestCase):
def setUp(self):
self.temp_dir = tempfile.mkdtemp()
# Data to be written to the JSON file
self.data_json = """
{
"Connection": {
"Default": "testing_json",
"Connections": {
"testing_json": {
"URI": "ssh://qe@localhost:2222/run/podman/podman.sock",
"Identity": "/home/qe/.ssh/id_rsa"
},
"production": {
"URI": "ssh://root@localhost:22/run/podman/podman.sock",
"Identity": "/home/root/.ssh/id_rsajson"
}
}
},
"Farm": {}
}
"""
# Data to be written to the TOML file
self.data_toml = """
[containers]
log_size_max = -1
pids_limit = 2048
@ -27,13 +53,61 @@ class PodmanConfigTestCase(unittest.TestCase):
identity = "/home/qe/.ssh/id_rsa"
[network]
"""
# Define the file path
self.path_json = os.path.join(self.temp_dir, 'podman-connections.json')
self.path_toml = os.path.join(self.temp_dir, 'containers.conf')
# Write data to the JSON file
j_data = json.loads(self.data_json)
with open(self.path_json, 'w+') as file_json:
json.dump(j_data, file_json)
# Write data to the TOML file
with open(self.path_toml, 'w+') as file_toml:
# toml.dump(self.data_toml, file_toml)
file_toml.write(self.data_toml)
def test_connections(self):
config = PodmanConfig("@@is_test@@" + self.temp_dir)
self.assertEqual(config.active_service.id, "testing_json")
expected = urllib.parse.urlparse("ssh://qe@localhost:2222/run/podman/podman.sock")
self.assertEqual(config.active_service.url, expected)
self.assertEqual(config.services["production"].identity, Path("/home/root/.ssh/id_rsajson"))
class PodmanConfigTestCaseTOML(unittest.TestCase):
opener = mock.mock_open(
read_data="""
[containers]
log_size_max = -1
pids_limit = 2048
userns_size = 65536
[engine]
num_locks = 2048
active_service = "testing"
stop_timeout = 10
[engine.service_destinations]
[engine.service_destinations.production]
uri = "ssh://root@localhost:22/run/podman/podman.sock"
identity = "/home/root/.ssh/id_rsa"
[engine.service_destinations.testing]
uri = "ssh://qe@localhost:2222/run/podman/podman.sock"
identity = "/home/qe/.ssh/id_rsa"
[network]
"""
)
def setUp(self) -> None:
super().setUp()
def mocked_open(self, *args, **kwargs):
return PodmanConfigTestCaseTOML.opener(self, *args, **kwargs)
self.mocked_open = mocked_open
@ -47,10 +121,50 @@ class PodmanConfigTestCase(unittest.TestCase):
self.assertEqual(config.active_service.url, expected)
self.assertEqual(config.services["production"].identity, Path("/home/root/.ssh/id_rsa"))
PodmanConfigTestCaseTOML.opener.assert_called_with(
Path("/home/developer/containers.conf"), encoding='utf-8'
)
class PodmanConfigTestCaseJSON(unittest.TestCase):
def setUp(self) -> None:
super().setUp()
self.temp_dir = tempfile.mkdtemp()
self.data = """
{
"Connection": {
"Default": "testing",
"Connections": {
"testing": {
"URI": "ssh://qe@localhost:2222/run/podman/podman.sock",
"Identity": "/home/qe/.ssh/id_rsa"
},
"production": {
"URI": "ssh://root@localhost:22/run/podman/podman.sock",
"Identity": "/home/root/.ssh/id_rsa"
}
}
},
"Farm": {}
}
"""
self.path = os.path.join(self.temp_dir, 'podman-connections.json')
# Write data to the JSON file
data = json.loads(self.data)
with open(self.path, 'w+') as file:
json.dump(data, file)
def test_connections(self):
config = PodmanConfig(self.path)
self.assertEqual(config.active_service.id, "testing")
expected = urllib.parse.urlparse("ssh://qe@localhost:2222/run/podman/podman.sock")
self.assertEqual(config.active_service.url, expected)
self.assertEqual(config.services["production"].identity, Path("/home/root/.ssh/id_rsa"))
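The JSON fixtures in these test cases follow the `podman-connections.json` layout: `Connection.Default` names one entry of `Connection.Connections`. A minimal resolver over that schema, independent of `PodmanConfig` (a sketch for illustration only):

```python
import json
from urllib.parse import urlparse

def resolve_default_connection(text):
    """Return (name, parsed URI) for the default connection in a
    podman-connections.json document."""
    connection = json.loads(text)["Connection"]
    name = connection["Default"]
    return name, urlparse(connection["Connections"][name]["URI"])
```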
if __name__ == '__main__':
unittest.main()


@ -6,9 +6,9 @@ import unittest
try:
# Python >= 3.10
from collections.abc import Iterable
except ImportError:
# Python < 3.10
from collections.abc import Iterable
import requests_mock
@ -38,8 +38,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_remove(self, mock):
adapter = mock.delete(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd?v=True&force=True",
status_code=204,
)
@ -71,8 +70,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_restart(self, mock):
adapter = mock.post(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/restart?timeout=10",
status_code=204,
)
@ -83,8 +81,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_start_dkeys(self, mock):
adapter = mock.post(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/start"
"?detachKeys=%5Ef%5Eu",
status_code=204,
@ -104,6 +101,17 @@ class ContainersTestCase(unittest.TestCase):
container.start()
self.assertTrue(adapter.called_once)
@requests_mock.Mocker()
def test_init(self, mock):
adapter = mock.post(
tests.LIBPOD_URL
+ "/containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/init",
status_code=204,
)
container = Container(attrs=FIRST_CONTAINER, client=self.client.api)
container.init()
self.assertTrue(adapter.called_once)
@requests_mock.Mocker()
def test_stats(self, mock):
stream = [
@ -126,8 +134,7 @@ class ContainersTestCase(unittest.TestCase):
buffer.write("\n")
adapter = mock.get(
tests.LIBPOD_URL + "/containers/stats"
"?containers=87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
"&stream=True",
text=buffer.getvalue(),
@ -149,8 +156,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_stop(self, mock):
adapter = mock.post(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/stop"
"?all=True&timeout=10.0",
status_code=204,
@ -179,8 +185,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_unpause(self, mock):
adapter = mock.post(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/unpause",
status_code=204,
)
@ -231,8 +236,7 @@ class ContainersTestCase(unittest.TestCase):
{"Path": "deleted", "Kind": 2},
]
adapter = mock.get(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/changes",
json=payload,
)
@ -244,8 +248,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_diff_404(self, mock):
adapter = mock.get(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/changes",
json={
"cause": "Container not found.",
@ -290,8 +293,7 @@ class ContainersTestCase(unittest.TestCase):
encoded_value = base64.urlsafe_b64encode(json.dumps(header_value).encode("utf8"))
adapter = mock.get(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/archive"
"?path=/etc/motd",
body=body,
@ -312,8 +314,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_commit(self, mock):
post_adapter = mock.post(
tests.LIBPOD_URL + "/commit"
"?author=redhat&changes=ADD+%2fetc%2fmod&comment=This+is+a+unittest"
"&container=87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd&format=docker"
"&pause=True&repo=quay.local&tag=unittest",
@ -346,8 +347,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_put_archive(self, mock):
adapter = mock.put(
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/archive"
"?path=%2fetc%2fmotd",
status_code=200,
@ -363,8 +363,7 @@ class ContainersTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_put_archive_404(self, mock):
adapter = mock.put(
tests.LIBPOD_URL
+ "/containers/"
tests.LIBPOD_URL + "/containers/"
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/archive"
"?path=deadbeef",
status_code=404,
@ -424,10 +423,8 @@ class ContainersTestCase(unittest.TestCase):
'Mar01',
'?',
'00:00:01',
(
'/usr/bin/ssh-agent /bin/sh -c exec -l /bin/bash -c'
' "/usr/bin/gnome-session"'
),
'/usr/bin/ssh-agent /bin/sh -c exec -l /bin/bash'
+ ' -c "/usr/bin/gnome-session"',
],
['jhonce', '5544', '3522', '0', 'Mar01', 'pts/1', '00:00:02', '-bash'],
['jhonce', '6140', '3522', '0', 'Mar01', 'pts/2', '00:00:00', '-bash'],

View File

@ -1,18 +1,20 @@
import json
import unittest
try:
# Python >= 3.10
from collections.abc import Iterator
except:
except ImportError:
# Python < 3.10
from collections import Iterator
from collections.abc import Iterator
from unittest.mock import DEFAULT, patch
from unittest.mock import DEFAULT, MagicMock, patch
import requests_mock
from podman import PodmanClient, tests
from podman.domain.containers import Container
from podman.domain.containers_create import CreateMixin
from podman.domain.containers_manager import ContainersManager
from podman.errors import ImageNotFound, NotFound
@ -63,7 +65,8 @@ class ContainersManagerTestCase(unittest.TestCase):
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
)
self.assertEqual(
actual.id, "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
actual.id,
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
)
@requests_mock.Mocker()
@ -103,17 +106,18 @@ class ContainersManagerTestCase(unittest.TestCase):
self.assertIsInstance(actual, list)
self.assertEqual(
actual[0].id, "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
actual[0].id,
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
)
self.assertEqual(
actual[1].id, "6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03"
actual[1].id,
"6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03",
)
@requests_mock.Mocker()
def test_list_filtered(self, mock):
mock.get(
tests.LIBPOD_URL
+ "/containers/json?"
tests.LIBPOD_URL + "/containers/json?"
"all=True"
"&filters=%7B"
"%22before%22%3A"
@ -132,10 +136,12 @@ class ContainersManagerTestCase(unittest.TestCase):
self.assertIsInstance(actual, list)
self.assertEqual(
actual[0].id, "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
actual[0].id,
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
)
self.assertEqual(
actual[1].id, "6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03"
actual[1].id,
"6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03",
)
@requests_mock.Mocker()
@ -147,6 +153,24 @@ class ContainersManagerTestCase(unittest.TestCase):
actual = self.client.containers.list()
self.assertIsInstance(actual, list)
self.assertEqual(
actual[0].id,
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
)
self.assertEqual(
actual[1].id,
"6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03",
)
@requests_mock.Mocker()
def test_list_sparse_libpod_default(self, mock):
mock.get(
tests.LIBPOD_URL + "/containers/json",
json=[FIRST_CONTAINER, SECOND_CONTAINER],
)
actual = self.client.containers.list()
self.assertIsInstance(actual, list)
self.assertEqual(
actual[0].id, "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
)
@ -154,6 +178,118 @@ class ContainersManagerTestCase(unittest.TestCase):
actual[1].id, "6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03"
)
# Verify that no individual reload() calls were made for sparse=True (default)
# Should be only 1 request for the list endpoint
self.assertEqual(len(mock.request_history), 1)
# lower() must be enforced because the mocked URL is transformed to lowercase;
# this avoids %2f != %2F mismatches. The same applies to other assertEqual calls below.
self.assertEqual(mock.request_history[0].url, tests.LIBPOD_URL.lower() + "/containers/json")
@requests_mock.Mocker()
def test_list_sparse_libpod_false(self, mock):
mock.get(
tests.LIBPOD_URL + "/containers/json",
json=[FIRST_CONTAINER, SECOND_CONTAINER],
)
# Mock individual container detail endpoints for reload() calls
# that are done for sparse=False
mock.get(
tests.LIBPOD_URL + f"/containers/{FIRST_CONTAINER['Id']}/json",
json=FIRST_CONTAINER,
)
mock.get(
tests.LIBPOD_URL + f"/containers/{SECOND_CONTAINER['Id']}/json",
json=SECOND_CONTAINER,
)
actual = self.client.containers.list(sparse=False)
self.assertIsInstance(actual, list)
self.assertEqual(
actual[0].id, "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
)
self.assertEqual(
actual[1].id, "6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03"
)
# Verify that individual reload() calls were made for sparse=False
# Should be 3 requests total: 1 for list + 2 for individual container details
self.assertEqual(len(mock.request_history), 3)
# Verify the list endpoint was called first
self.assertEqual(mock.request_history[0].url, tests.LIBPOD_URL.lower() + "/containers/json")
# Verify the individual container detail endpoints were called
individual_urls = {req.url for req in mock.request_history[1:]}
expected_urls = {
tests.LIBPOD_URL.lower() + f"/containers/{FIRST_CONTAINER['Id']}/json",
tests.LIBPOD_URL.lower() + f"/containers/{SECOND_CONTAINER['Id']}/json",
}
self.assertEqual(individual_urls, expected_urls)
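The request-count behavior these sparse tests pin down can be modeled with a small standalone sketch (a hypothetical simplification; the real `ContainersManager.list()` takes many more parameters):

```python
def list_containers(fetch_list, fetch_detail, sparse=True):
    """Fetch container summaries; when sparse=False, make one extra
    detail request per container (mirroring reload())."""
    containers = fetch_list()  # 1 request: GET /containers/json
    if not sparse:
        # N additional requests: GET /containers/{id}/json
        containers = [fetch_detail(c["Id"]) for c in containers]
    return containers
```

With two containers this model makes one request when sparse is True and three when it is False, which is exactly what the `request_history` assertions above count.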
@requests_mock.Mocker()
def test_list_sparse_compat_default(self, mock):
mock.get(
tests.COMPATIBLE_URL + "/containers/json",
json=[FIRST_CONTAINER, SECOND_CONTAINER],
)
# Mock individual container detail endpoints for the reload() calls
# made when sparse is False (the compat default)
mock.get(
tests.COMPATIBLE_URL + f"/containers/{FIRST_CONTAINER['Id']}/json",
json=FIRST_CONTAINER,
)
mock.get(
tests.COMPATIBLE_URL + f"/containers/{SECOND_CONTAINER['Id']}/json",
json=SECOND_CONTAINER,
)
actual = self.client.containers.list(compatible=True)
self.assertIsInstance(actual, list)
self.assertEqual(
actual[0].id, "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
)
self.assertEqual(
actual[1].id, "6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03"
)
# Verify that individual reload() calls were made for the compat default (sparse=False)
# Should be 3 requests total: 1 for list + 2 for individual container details
self.assertEqual(len(mock.request_history), 3)
self.assertEqual(
mock.request_history[0].url, tests.COMPATIBLE_URL.lower() + "/containers/json"
)
# Verify the individual container detail endpoints were called
individual_urls = {req.url for req in mock.request_history[1:]}
expected_urls = {
tests.COMPATIBLE_URL.lower() + f"/containers/{FIRST_CONTAINER['Id']}/json",
tests.COMPATIBLE_URL.lower() + f"/containers/{SECOND_CONTAINER['Id']}/json",
}
self.assertEqual(individual_urls, expected_urls)
@requests_mock.Mocker()
def test_list_sparse_compat_true(self, mock):
mock.get(
tests.COMPATIBLE_URL + "/containers/json",
json=[FIRST_CONTAINER, SECOND_CONTAINER],
)
actual = self.client.containers.list(sparse=True, compatible=True)
self.assertIsInstance(actual, list)
self.assertEqual(
actual[0].id, "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
)
self.assertEqual(
actual[1].id, "6dc84cc0a46747da94e4c1571efcc01a756b4017261440b4b8985d37203c3c03"
)
# Verify that no individual reload() calls were made for sparse=True
# Should be only 1 request for the list endpoint
self.assertEqual(len(mock.request_history), 1)
self.assertEqual(
mock.request_history[0].url, tests.COMPATIBLE_URL.lower() + "/containers/json"
)
@requests_mock.Mocker()
def test_prune(self, mock):
mock.post(
@ -214,14 +350,226 @@ class ContainersManagerTestCase(unittest.TestCase):
with self.assertRaises(ImageNotFound):
self.client.containers.create("fedora", "/usr/bin/ls", cpu_count=9999)
@requests_mock.Mocker()
def test_create_parse_host_port(self, mock):
mock_response = MagicMock()
mock_response.json = lambda: {
"Id": "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
"Size": 1024,
}
self.client.containers.client.post = MagicMock(return_value=mock_response)
mock.get(
tests.LIBPOD_URL
+ "/containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
json=FIRST_CONTAINER,
)
port_str = {"2233": 3333}
port_str_protocol = {"2244/tcp": 3344}
port_int = {2255: 3355}
ports = {**port_str, **port_str_protocol, **port_int}
self.client.containers.create("fedora", "/usr/bin/ls", ports=ports)
self.client.containers.client.post.assert_called()
expected_ports = [
{
"container_port": 2233,
"host_port": 3333,
"protocol": "tcp",
},
{
"container_port": 2244,
"host_port": 3344,
"protocol": "tcp",
},
{
"container_port": 2255,
"host_port": 3355,
"protocol": "tcp",
},
]
actual_ports = json.loads(self.client.containers.client.post.call_args[1]["data"])[
"portmappings"
]
self.assertEqual(expected_ports, actual_ports)
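The port normalization this assertion checks can be illustrated with a small standalone parser (an illustrative sketch, not podman-py's actual implementation):

```python
def parse_host_ports(ports):
    """Normalize {container_port[/protocol]: host_port} specs into
    portmapping dicts; the protocol defaults to tcp."""
    mappings = []
    for container_spec, host_port in ports.items():
        # keys may be int (2255), str ("2233"), or str with protocol ("2244/tcp")
        port, _, protocol = str(container_spec).partition("/")
        mappings.append({
            "container_port": int(port),
            "host_port": host_port,
            "protocol": protocol or "tcp",
        })
    return mappings
```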
@requests_mock.Mocker()
def test_create_userns_mode_simple(self, mock):
mock_response = MagicMock()
mock_response.json = lambda: {
"Id": "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
"Size": 1024,
}
self.client.containers.client.post = MagicMock(return_value=mock_response)
mock.get(
tests.LIBPOD_URL
+ "/containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
json=FIRST_CONTAINER,
)
userns = "keep-id"
self.client.containers.create("fedora", "/usr/bin/ls", userns_mode=userns)
self.client.containers.client.post.assert_called()
expected_userns = {"nsmode": userns}
actual_userns = json.loads(self.client.containers.client.post.call_args[1]["data"])[
"userns"
]
self.assertEqual(expected_userns, actual_userns)
@requests_mock.Mocker()
def test_create_userns_mode_dict(self, mock):
mock_response = MagicMock()
mock_response.json = lambda: {
"Id": "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
"Size": 1024,
}
self.client.containers.client.post = MagicMock(return_value=mock_response)
mock.get(
tests.LIBPOD_URL
+ "/containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
json=FIRST_CONTAINER,
)
userns = {"nsmode": "keep-id", "value": "uid=900"}
self.client.containers.create("fedora", "/usr/bin/ls", userns_mode=userns)
self.client.containers.client.post.assert_called()
expected_userns = dict(**userns)
actual_userns = json.loads(self.client.containers.client.post.call_args[1]["data"])[
"userns"
]
self.assertEqual(expected_userns, actual_userns)
def test_create_unsupported_key(self):
with self.assertRaises(TypeError) as e:
with self.assertRaises(TypeError):
self.client.containers.create("fedora", "/usr/bin/ls", blkio_weight=100.0)
def test_create_unknown_key(self):
with self.assertRaises(TypeError) as e:
with self.assertRaises(TypeError):
self.client.containers.create("fedora", "/usr/bin/ls", unknown_key=100.0)
@requests_mock.Mocker()
def test_create_convert_env_list_to_dict(self, mock):
env_list1 = ["FOO=foo", "BAR=bar"]
# Test valid list
converted_dict1 = {"FOO": "foo", "BAR": "bar"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list1), converted_dict1)
# Test empty string
env_list2 = ["FOO=foo", ""]
self.assertRaises(ValueError, CreateMixin._convert_env_list_to_dict, env_list2)
# Test a non-iterable element (None)
env_list3 = ["FOO=foo", None]
self.assertRaises(TypeError, CreateMixin._convert_env_list_to_dict, env_list3)
# Test an iterable element that is not a string
env_list4 = ["FOO=foo", []]
self.assertRaises(TypeError, CreateMixin._convert_env_list_to_dict, env_list4)
# Test empty list
env_list5 = []
converted_dict5 = {}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list5), converted_dict5)
# Test single valid environment variable
env_list6 = ["SINGLE=value"]
converted_dict6 = {"SINGLE": "value"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list6), converted_dict6)
# Test environment variable with empty value
env_list7 = ["EMPTY="]
converted_dict7 = {"EMPTY": ""}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list7), converted_dict7)
# Test environment variable with multiple equals signs
env_list8 = ["URL=https://example.com/path?param=value"]
converted_dict8 = {"URL": "https://example.com/path?param=value"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list8), converted_dict8)
# Test environment variable with spaces in value
env_list9 = ["MESSAGE=Hello World", "PATH=/usr/local/bin:/usr/bin"]
converted_dict9 = {"MESSAGE": "Hello World", "PATH": "/usr/local/bin:/usr/bin"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list9), converted_dict9)
# Test environment variable with special characters
env_list10 = ["SPECIAL=!@#$%^&*()_+-=[]{}|;':\",./<>?"]
converted_dict10 = {"SPECIAL": "!@#$%^&*()_+-=[]{}|;':\",./<>?"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list10), converted_dict10)
# Test environment variable with numeric values
env_list11 = ["PORT=8080", "TIMEOUT=30"]
converted_dict11 = {"PORT": "8080", "TIMEOUT": "30"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list11), converted_dict11)
# Test environment variable with boolean-like values
env_list12 = ["DEBUG=true", "VERBOSE=false", "ENABLED=1", "DISABLED=0"]
converted_dict12 = {
"DEBUG": "true",
"VERBOSE": "false",
"ENABLED": "1",
"DISABLED": "0",
}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list12), converted_dict12)
# Test environment variable with whitespace in key (should preserve)
env_list13 = [" SPACED_KEY =value", "KEY= spaced_value "]
converted_dict13 = {" SPACED_KEY ": "value", "KEY": " spaced_value "}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list13), converted_dict13)
# Test missing equals sign
env_list14 = ["FOO=foo", "INVALID"]
self.assertRaises(ValueError, CreateMixin._convert_env_list_to_dict, env_list14)
# Test environment variable with only equals sign (empty key)
env_list15 = ["FOO=foo", "=value"]
self.assertRaises(ValueError, CreateMixin._convert_env_list_to_dict, env_list15)
# Test environment variable with only whitespace key
env_list16 = ["FOO=foo", " =value"]
self.assertRaises(ValueError, CreateMixin._convert_env_list_to_dict, env_list16)
# Test whitespace-only string
env_list17 = ["FOO=foo", " "]
self.assertRaises(ValueError, CreateMixin._convert_env_list_to_dict, env_list17)
# Test various non-string types in list
env_list18 = ["FOO=foo", 123]
self.assertRaises(TypeError, CreateMixin._convert_env_list_to_dict, env_list18)
env_list19 = ["FOO=foo", {"key": "value"}]
self.assertRaises(TypeError, CreateMixin._convert_env_list_to_dict, env_list19)
env_list20 = ["FOO=foo", True]
self.assertRaises(TypeError, CreateMixin._convert_env_list_to_dict, env_list20)
# Test duplicate keys (last one should win)
env_list21 = ["KEY=first", "KEY=second", "OTHER=value"]
converted_dict21 = {"KEY": "second", "OTHER": "value"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list21), converted_dict21)
# Test very long environment variable
long_value = "x" * 1000
env_list22 = [f"LONG_VAR={long_value}"]
converted_dict22 = {"LONG_VAR": long_value}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list22), converted_dict22)
# Test environment variable with newlines and tabs
env_list23 = ["MULTILINE=line1\nline2\ttabbed"]
converted_dict23 = {"MULTILINE": "line1\nline2\ttabbed"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list23), converted_dict23)
# Test environment variable with unicode characters
env_list24 = ["UNICODE=こんにちは", "EMOJI=🚀🌟"]
converted_dict24 = {"UNICODE": "こんにちは", "EMOJI": "🚀🌟"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list24), converted_dict24)
# Test case sensitivity
env_list25 = ["path=/usr/bin", "PATH=/usr/local/bin"]
converted_dict25 = {"path": "/usr/bin", "PATH": "/usr/local/bin"}
self.assertEqual(CreateMixin._convert_env_list_to_dict(env_list25), converted_dict25)
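The validation rules exercised above condense into a short sketch of the converter (a hypothetical reimplementation of `CreateMixin._convert_env_list_to_dict`, not the library's actual code):

```python
def convert_env_list_to_dict(env_list):
    """Convert ["KEY=value", ...] into {"KEY": "value", ...},
    rejecting non-string entries and entries without a usable key."""
    converted = {}
    for entry in env_list:
        if not isinstance(entry, str):
            raise TypeError(f"expected str, got {type(entry).__name__}")
        # split on the first "=": values may themselves contain "="
        key, sep, value = entry.partition("=")
        if not sep or not key.strip():
            raise ValueError(f"invalid environment entry: {entry!r}")
        converted[key] = value  # duplicate keys: the last one wins
    return converted
```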
@requests_mock.Mocker()
def test_run_detached(self, mock):
mock.post(
@ -284,7 +632,7 @@ class ContainersManagerTestCase(unittest.TestCase):
actual = self.client.containers.run("fedora", "/usr/bin/ls")
self.assertIsInstance(actual, bytes)
self.assertEqual(actual, b'This is a unittest - line 1This is a unittest - line 2')
self.assertEqual(actual, b"This is a unittest - line 1This is a unittest - line 2")
# iter() cannot be reset, so subtests are used to create a new instance
with self.subTest("Stream results"):
@ -297,5 +645,5 @@ class ContainersManagerTestCase(unittest.TestCase):
self.assertEqual(next(actual), b"This is a unittest - line 2")
if __name__ == '__main__':
if __name__ == "__main__":
unittest.main()

View File

@ -0,0 +1,84 @@
import unittest
import requests_mock
from podman import PodmanClient, tests
CONTAINER = {
"Id": "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd",
"Name": "eloquent_pare",
"Image": "quay.io/fedora:latest",
"State": {"Status": "running"},
}
class PodmanResourceTestCase(unittest.TestCase):
"""Test PodmanResource area of concern."""
def setUp(self) -> None:
super().setUp()
self.client = PodmanClient(base_url=tests.BASE_SOCK)
def tearDown(self) -> None:
super().tearDown()
self.client.close()
@requests_mock.Mocker()
def test_reload_with_compatible_options(self, mock):
"""Test that reload uses the correct endpoint."""
# Mock the get() call
mock.get(
f"{tests.LIBPOD_URL}/"
f"containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
json=CONTAINER,
)
# Mock the reload() call
mock.get(
f"{tests.LIBPOD_URL}/"
f"containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
json=CONTAINER,
)
# Mock the reload(compatible=False) call
mock.get(
f"{tests.LIBPOD_URL}/"
f"containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
json=CONTAINER,
)
# Mock the reload(compatible=True) call
mock.get(
f"{tests.COMPATIBLE_URL}/"
f"containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
json=CONTAINER,
)
container = self.client.containers.get(
"87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd"
)
container.reload()
container.reload(compatible=False)
container.reload(compatible=True)
self.assertEqual(len(mock.request_history), 4)
for i in range(3):
self.assertEqual(
mock.request_history[i].url,
tests.LIBPOD_URL.lower()
+ "/containers/"
+ "87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
)
self.assertEqual(
mock.request_history[3].url,
tests.COMPATIBLE_URL.lower()
+ "/containers/87e1325c82424e49a00abdd4de08009eb76c7de8d228426a9b8af9318ced5ecd/json",
)
if __name__ == '__main__':
unittest.main()

View File

@ -44,7 +44,7 @@ class EventsManagerTestCase(unittest.TestCase):
buffer.write(json.JSONEncoder().encode(item))
buffer.write("\n")
adapter = mock.get(tests.LIBPOD_URL + "/events", text=buffer.getvalue())
adapter = mock.get(tests.LIBPOD_URL + "/events", text=buffer.getvalue()) # noqa: F841
manager = EventsManager(client=self.client.api)
actual = manager.list(decode=True)

View File

@ -1,19 +1,20 @@
import types
import unittest
from unittest.mock import patch
try:
# Python >= 3.10
from collections.abc import Iterable
except:
except ImportError:
# Python < 3.10
from collections import Iterable
from collections.abc import Iterable
import requests_mock
from podman import PodmanClient, tests
from podman.domain.images import Image
from podman.domain.images_manager import ImagesManager
from podman.errors import APIError, ImageNotFound
from podman.errors import APIError, ImageNotFound, PodmanError
FIRST_IMAGE = {
"Id": "sha256:326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab",
@ -207,6 +208,66 @@ class ImagesManagerTestCase(unittest.TestCase):
self.assertEqual(len(untagged), 2)
self.assertEqual(len("".join(untagged)), 0)
@requests_mock.Mocker()
def test_prune_filters_label(self, mock):
"""Unit test the label filter param for Images prune()."""
mock.post(
tests.LIBPOD_URL
+ "/images/prune?filters=%7B%22label%22%3A+%5B%22%7B%27license%27%3A+"
+ "%27Apache-2.0%27%7D%22%5D%7D",
json=[
{
"Id": "326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab",
"Size": 1024,
},
],
)
report = self.client.images.prune(filters={"label": {"license": "Apache-2.0"}})
self.assertIn("ImagesDeleted", report)
self.assertIn("SpaceReclaimed", report)
self.assertEqual(report["SpaceReclaimed"], 1024)
deleted = [r["Deleted"] for r in report["ImagesDeleted"] if "Deleted" in r]
self.assertEqual(len(deleted), 1)
self.assertIn("326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab", deleted)
self.assertGreater(len("".join(deleted)), 0)
untagged = [r["Untagged"] for r in report["ImagesDeleted"] if "Untagged" in r]
self.assertEqual(len(untagged), 1)
self.assertEqual(len("".join(untagged)), 0)
@requests_mock.Mocker()
def test_prune_filters_not_label(self, mock):
"""Unit test the negated (label!) filter param for Images prune()."""
mock.post(
tests.LIBPOD_URL
+ "/images/prune?filters=%7B%22label%21%22%3A+%5B%22%7B%27license%27%3A+"
+ "%27Apache-2.0%27%7D%22%5D%7D",
json=[
{
"Id": "c4b16966ecd94ffa910eab4e630e24f259bf34a87e924cd4b1434f267b0e354e",
"Size": 1024,
},
],
)
report = self.client.images.prune(filters={"label!": {"license": "Apache-2.0"}})
self.assertIn("ImagesDeleted", report)
self.assertIn("SpaceReclaimed", report)
self.assertEqual(report["SpaceReclaimed"], 1024)
deleted = [r["Deleted"] for r in report["ImagesDeleted"] if "Deleted" in r]
self.assertEqual(len(deleted), 1)
self.assertIn("c4b16966ecd94ffa910eab4e630e24f259bf34a87e924cd4b1434f267b0e354e", deleted)
self.assertGreater(len("".join(deleted)), 0)
untagged = [r["Untagged"] for r in report["ImagesDeleted"] if "Untagged" in r]
self.assertEqual(len(untagged), 1)
self.assertEqual(len("".join(untagged)), 0)
@requests_mock.Mocker()
def test_prune_failure(self, mock):
"""Unit test to report error carried in response body."""
@ -223,6 +284,15 @@ class ImagesManagerTestCase(unittest.TestCase):
self.client.images.prune()
self.assertEqual(e.exception.explanation, "Test prune failure in response body.")
@requests_mock.Mocker()
def test_prune_empty(self, mock):
"""Unit test when the prune API responds with null (None)."""
mock.post(tests.LIBPOD_URL + "/images/prune", text="null")
report = self.client.images.prune()
self.assertEqual(report["ImagesDeleted"], [])
self.assertEqual(report["SpaceReclaimed"], 0)
@requests_mock.Mocker()
def test_get(self, mock):
mock.get(
@ -311,6 +381,37 @@ class ImagesManagerTestCase(unittest.TestCase):
@requests_mock.Mocker()
def test_load(self, mock):
with self.assertRaises(PodmanError):
self.client.images.load()
with self.assertRaises(PodmanError):
self.client.images.load(b'data', b'file_path')
with self.assertRaises(PodmanError):
self.client.images.load(data=b'data', file_path=b'file_path')
# Patch Path.read_bytes to mock the file reading behavior
with patch("pathlib.Path.read_bytes", return_value=b"mock tarball data"):
mock.post(
tests.LIBPOD_URL + "/images/load",
json={"Names": ["quay.io/fedora:latest"]},
)
mock.get(
tests.LIBPOD_URL + "/images/quay.io%2ffedora%3Alatest/json",
json=FIRST_IMAGE,
)
# Test the case where only 'file_path' is provided
gntr = self.client.images.load(file_path="mock_file.tar")
self.assertIsInstance(gntr, types.GeneratorType)
report = list(gntr)
self.assertEqual(len(report), 1)
self.assertEqual(
report[0].id,
"sha256:326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab",
)
mock.post(
tests.LIBPOD_URL + "/images/load",
json={"Names": ["quay.io/fedora:latest"]},
@ -416,7 +517,7 @@ class ImagesManagerTestCase(unittest.TestCase):
self.assertEqual(report[0]["name"], "quay.io/libpod/fedora")
@requests_mock.Mocker()
def test_search_listTags(self, mock):
def test_search_list_tags(self, mock):
mock.get(
tests.LIBPOD_URL + "/images/search?term=fedora&noTrunc=true&listTags=true",
json=[
@ -461,8 +562,7 @@ class ImagesManagerTestCase(unittest.TestCase):
},
)
mock.get(
tests.LIBPOD_URL
+ "/images"
tests.LIBPOD_URL + "/images"
"/sha256%3A326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab/json",
json=FIRST_IMAGE,
)
@ -483,8 +583,7 @@ class ImagesManagerTestCase(unittest.TestCase):
},
)
mock.get(
tests.LIBPOD_URL
+ "/images"
tests.LIBPOD_URL + "/images"
"/sha256%3A326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab/json",
json=FIRST_IMAGE,
)
@ -505,8 +604,7 @@ class ImagesManagerTestCase(unittest.TestCase):
},
)
mock.get(
tests.LIBPOD_URL
+ "/images"
tests.LIBPOD_URL + "/images"
"/sha256%3A326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab/json",
json=FIRST_IMAGE,
)
@ -531,8 +629,7 @@ class ImagesManagerTestCase(unittest.TestCase):
},
)
mock.get(
tests.LIBPOD_URL
+ "/images"
tests.LIBPOD_URL + "/images"
"/sha256%3A326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab/json",
json=FIRST_IMAGE,
)
@ -552,6 +649,109 @@ class ImagesManagerTestCase(unittest.TestCase):
images[1].id, "c4b16966ecd94ffa910eab4e630e24f259bf34a87e924cd4b1434f267b0e354e"
)
@requests_mock.Mocker()
def test_pull_policy(self, mock):
image_id = "sha256:326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab"
mock.post(
tests.LIBPOD_URL + "/images/pull?reference=quay.io%2ffedora%3Alatest&policy=missing",
json={
"error": "",
"id": image_id,
"images": [image_id],
"stream": "",
},
)
mock.get(
tests.LIBPOD_URL + "/images"
"/sha256%3A326dd9d7add24646a325e8eaa82125294027db2332e49c5828d96312c5d773ab/json",
json=FIRST_IMAGE,
)
image = self.client.images.pull("quay.io/fedora:latest", policy="missing")
self.assertEqual(image.id, image_id)
@requests_mock.Mocker()
def test_list_with_name_parameter(self, mock):
"""Test that name parameter is correctly converted to a reference filter"""
mock.get(
tests.LIBPOD_URL + "/images/json?filters=%7B%22reference%22%3A+%5B%22fedora%22%5D%7D",
json=[FIRST_IMAGE],
)
images = self.client.images.list(name="fedora")
self.assertEqual(len(images), 1)
self.assertIsInstance(images[0], Image)
self.assertEqual(images[0].tags, ["fedora:latest", "fedora:33"])
@requests_mock.Mocker()
def test_list_with_name_and_existing_filters(self, mock):
"""Test that name parameter works alongside other filters"""
mock.get(
tests.LIBPOD_URL
+ (
"/images/json?filters=%7B%22dangling%22%3A+%5B%22True%22%5D%2C+"
"%22reference%22%3A+%5B%22fedora%22%5D%7D"
),
json=[FIRST_IMAGE],
)
images = self.client.images.list(name="fedora", filters={"dangling": True})
self.assertEqual(len(images), 1)
self.assertIsInstance(images[0], Image)
@requests_mock.Mocker()
def test_list_with_name_overrides_reference_filter(self, mock):
"""Test that name parameter takes precedence over existing reference filter"""
mock.get(
tests.LIBPOD_URL + "/images/json?filters=%7B%22reference%22%3A+%5B%22fedora%22%5D%7D",
json=[FIRST_IMAGE],
)
# The name parameter should override the reference filter
images = self.client.images.list(
name="fedora",
filters={"reference": "ubuntu"}, # This should be overridden
)
self.assertEqual(len(images), 1)
self.assertIsInstance(images[0], Image)
@requests_mock.Mocker()
def test_list_with_all_and_name(self, mock):
"""Test that all parameter works alongside name filter"""
mock.get(
tests.LIBPOD_URL
+ "/images/json?all=true&filters=%7B%22reference%22%3A+%5B%22fedora%22%5D%7D",
json=[FIRST_IMAGE],
)
images = self.client.images.list(all=True, name="fedora")
self.assertEqual(len(images), 1)
self.assertIsInstance(images[0], Image)
@requests_mock.Mocker()
def test_list_with_empty_name(self, mock):
"""Test that empty name parameter doesn't add a reference filter"""
mock.get(tests.LIBPOD_URL + "/images/json", json=[FIRST_IMAGE])
images = self.client.images.list(name="")
self.assertEqual(len(images), 1)
self.assertIsInstance(images[0], Image)
@requests_mock.Mocker()
def test_list_with_none_name(self, mock):
"""Test that None name parameter doesn't add a reference filter"""
mock.get(tests.LIBPOD_URL + "/images/json", json=[FIRST_IMAGE])
images = self.client.images.list(name=None)
self.assertEqual(len(images), 1)
self.assertIsInstance(images[0], Image)
if __name__ == '__main__':
unittest.main()

View File

@ -1,8 +1,15 @@
import unittest
import requests_mock
from podman import PodmanClient, tests
from podman.domain.manifests import Manifest, ManifestsManager
FIRST_MANIFEST = {
"Id": "326dd9d7add24646a389e8eaa82125294027db2332e49c5828d96312c5d773ab",
"names": "quay.io/fedora:latest",
}
class ManifestTestCase(unittest.TestCase):
def setUp(self) -> None:
@ -23,6 +30,34 @@ class ManifestTestCase(unittest.TestCase):
manifest = Manifest()
self.assertIsNone(manifest.name)
@requests_mock.Mocker()
def test_push(self, mock):
adapter = mock.post(
tests.LIBPOD_URL + "/manifests/quay.io%2Ffedora%3Alatest/registry/quay.io%2Ffedora%3Av1"
)
manifest = Manifest(attrs=FIRST_MANIFEST, client=self.client.api)
manifest.push(destination="quay.io/fedora:v1")
self.assertTrue(adapter.called_once)
@requests_mock.Mocker()
def test_push_with_auth(self, mock):
adapter = mock.post(
tests.LIBPOD_URL
+ "/manifests/quay.io%2Ffedora%3Alatest/registry/quay.io%2Ffedora%3Av1",
request_headers={
"X-Registry-Auth": b"eyJ1c2VybmFtZSI6ICJ1c2VyIiwgInBhc3N3b3JkIjogInBhc3MifQ=="
},
)
manifest = Manifest(attrs=FIRST_MANIFEST, client=self.client.api)
manifest.push(
destination="quay.io/fedora:v1", auth_config={"username": "user", "password": "pass"}
)
self.assertTrue(adapter.called_once)
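The X-Registry-Auth value mocked above is a base64-encoded JSON document. A minimal sketch of the encoding, assuming the Docker-style scheme that podman follows:

```python
import base64
import json

def encode_auth_header(auth_config):
    """Serialize credentials as JSON and URL-safe base64-encode them
    for use as an X-Registry-Auth header value."""
    return base64.urlsafe_b64encode(json.dumps(auth_config).encode("utf-8"))
```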
if __name__ == '__main__':
unittest.main()

View File

@ -171,6 +171,8 @@ class NetworksManagerTestCase(unittest.TestCase):
adapter = mock.post(tests.LIBPOD_URL + "/networks/create", json=FIRST_NETWORK_LIBPOD)
network = self.client.networks.create("podman")
self.assertIsInstance(network, Network)
self.assertEqual(adapter.call_count, 1)
self.assertDictEqual(
adapter.last_request.json(),

View File

@ -3,7 +3,8 @@ import ipaddress
import json
import unittest
from dataclasses import dataclass
from typing import Any, Iterable, Optional, Tuple
from typing import Any, Optional
from collections.abc import Iterable
from unittest import mock
from requests import Response
@ -12,12 +13,12 @@ from podman import api
class ParseUtilsTestCase(unittest.TestCase):
def test_parse_repository(self):
def test_parse_repository(self) -> None:
@dataclass
class TestCase:
name: str
input: Any
expected: Tuple[str, Optional[str]]
expected: tuple[str, Optional[str]]
cases = [
TestCase(name="empty str", input="", expected=("", None)),
@ -28,14 +29,39 @@ class ParseUtilsTestCase(unittest.TestCase):
),
TestCase(
name="@digest",
input="quay.io/libpod/testimage@71f1b47263fc",
expected=("quay.io/libpod/testimage", "71f1b47263fc"),
input="quay.io/libpod/testimage@sha256:71f1b47263fc",
expected=("quay.io/libpod/testimage@sha256", "71f1b47263fc"),
),
TestCase(
name=":tag",
input="quay.io/libpod/testimage:latest",
expected=("quay.io/libpod/testimage", "latest"),
),
TestCase(
name=":tag@digest",
input="quay.io/libpod/testimage:latest@sha256:71f1b47263fc",
expected=("quay.io/libpod/testimage:latest@sha256", "71f1b47263fc"),
),
TestCase(
name=":port",
input="quay.io:5000/libpod/testimage",
expected=("quay.io:5000/libpod/testimage", None),
),
TestCase(
name=":port@digest",
input="quay.io:5000/libpod/testimage@sha256:71f1b47263fc",
expected=("quay.io:5000/libpod/testimage@sha256", "71f1b47263fc"),
),
TestCase(
name=":port:tag",
input="quay.io:5000/libpod/testimage:latest",
expected=("quay.io:5000/libpod/testimage", "latest"),
),
TestCase(
name=":port:tag:digest",
input="quay.io:5000/libpod/testimage:latest@sha256:71f1b47263fc",
expected=("quay.io:5000/libpod/testimage:latest@sha256", "71f1b47263fc"),
),
]
for case in cases:
@ -46,13 +72,13 @@ class ParseUtilsTestCase(unittest.TestCase):
f"failed test {case.name} expected {case.expected}, actual {actual}",
)
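The reference-splitting behavior these cases pin down can be sketched as follows (an illustrative reimplementation of `api.parse_repository`, not the library's actual code): split on the last ":" only when the suffix contains no "/", so registry ports are left intact while tags and digest hashes are split off.

```python
def parse_repository(name):
    """Split an image reference into (repository, tag-or-hash).

    The last ":" separates the tag (or digest hash) unless what follows
    contains a "/", in which case it is a registry port, not a tag."""
    repo, sep, suffix = name.rpartition(":")
    if sep and "/" not in suffix:
        return repo, suffix
    return name, None
```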
def test_decode_header(self):
def test_decode_header(self) -> None:
actual = api.decode_header("eyJIZWFkZXIiOiJ1bml0dGVzdCJ9")
self.assertDictEqual(actual, {"Header": "unittest"})
self.assertDictEqual(api.decode_header(None), {})
def test_prepare_timestamp(self):
def test_prepare_timestamp(self) -> None:
time = datetime.datetime(2022, 1, 24, 12, 0, 0)
self.assertEqual(api.prepare_timestamp(time), 1643025600)
self.assertEqual(api.prepare_timestamp(2), 2)
@@ -61,11 +87,11 @@ class ParseUtilsTestCase(unittest.TestCase):
with self.assertRaises(ValueError):
api.prepare_timestamp("bad input") # type: ignore
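The assertions above fix `prepare_timestamp`'s contract: integers pass through, naive datetimes are read as UTC (2022-01-24 12:00:00 maps to 1643025600), and other inputs raise `ValueError`. A sketch under those assumptions:

```python
import datetime
from typing import Optional, Union


def prepare_timestamp(value: Union[datetime.datetime, int, None]) -> Optional[int]:
    """Normalize a datetime or int to epoch seconds; None passes through."""
    if value is None:
        return None
    if isinstance(value, datetime.datetime):
        # Treat naive datetimes as UTC so results are timezone-independent.
        return int(value.replace(tzinfo=datetime.timezone.utc).timestamp())
    if isinstance(value, int):
        return value
    raise ValueError(f"Unable to convert {value!r} to an epoch timestamp")
```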
- def test_prepare_cidr(self):
+ def test_prepare_cidr(self) -> None:
net = ipaddress.IPv4Network("127.0.0.0/24")
self.assertEqual(api.prepare_cidr(net), ("127.0.0.0", "////AA=="))
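The expected pair decomposes the network into its address plus a base64 encoding of the packed netmask: 255.255.255.0 packs to `b"\xff\xff\xff\x00"`, whose base64 form is `"////AA=="`. A sketch:

```python
import base64
import ipaddress


def prepare_cidr(value: ipaddress.IPv4Network) -> tuple[str, str]:
    """Return (network address, base64 of the packed netmask bytes)."""
    return str(value.network_address), base64.b64encode(value.netmask.packed).decode()
```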
- def test_stream_helper(self):
+ def test_stream_helper(self) -> None:
streamed_results = [b'{"test":"val1"}', b'{"test":"val2"}']
mock_response = mock.Mock(spec=Response)
mock_response.iter_lines.return_value = iter(streamed_results)
@@ -77,7 +103,7 @@ class ParseUtilsTestCase(unittest.TestCase):
self.assertIsInstance(actual, bytes)
self.assertEqual(expected, actual)
- def test_stream_helper_with_decode(self):
+ def test_stream_helper_with_decode(self) -> None:
streamed_results = [b'{"test":"val1"}', b'{"test":"val2"}']
mock_response = mock.Mock(spec=Response)
mock_response.iter_lines.return_value = iter(streamed_results)
@@ -87,7 +113,7 @@ class ParseUtilsTestCase(unittest.TestCase):
self.assertIsInstance(streamable, Iterable)
for expected, actual in zip(streamed_results, streamable):
self.assertIsInstance(actual, dict)
- self.assertDictEqual(json.loads(expected), actual)
+ self.assertDictEqual(json.loads(expected), actual)  # type: ignore[arg-type]
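Both stream tests drive the same generator: it walks `iter_lines()` on the response and either yields raw bytes or JSON-decodes each frame. A self-contained sketch with a stub response (the real helper takes a `requests.Response`):

```python
import json
from collections.abc import Iterator
from typing import Any, Union


def stream_helper(response, decode_to_dict: bool = False) -> Iterator[Union[bytes, dict[str, Any]]]:
    """Yield each line of a streaming response, optionally JSON-decoded."""
    for line in response.iter_lines():
        yield json.loads(line) if decode_to_dict else line


class FakeResponse:
    """Stand-in for requests.Response, just enough for the generator above."""

    def iter_lines(self):
        return iter([b'{"test":"val1"}', b'{"test":"val2"}'])
```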
if __name__ == '__main__':


@@ -0,0 +1,37 @@
import os
import unittest
import tempfile
from unittest import mock
from podman import api
class PathUtilsTestCase(unittest.TestCase):
def setUp(self):
self.xdg_runtime_dir = os.getenv('XDG_RUNTIME_DIR')
@mock.patch.dict(os.environ, clear=True)
def test_get_runtime_dir_env_var_set(self):
with tempfile.TemporaryDirectory() as tmpdir:
os.environ['XDG_RUNTIME_DIR'] = str(tmpdir)
self.assertEqual(str(tmpdir), api.path_utils.get_runtime_dir())
@mock.patch.dict(os.environ, clear=True)
def test_get_runtime_dir_env_var_not_set(self):
if not self.xdg_runtime_dir:
self.skipTest('XDG_RUNTIME_DIR must be set for this test.')
if self.xdg_runtime_dir.startswith('/run/user/'):
self.skipTest("XDG_RUNTIME_DIR in /run/user/, can't check")
self.assertNotEqual(self.xdg_runtime_dir, api.path_utils.get_runtime_dir())
@mock.patch('os.path.isdir', lambda d: False)
@mock.patch.dict(os.environ, clear=True)
def test_get_runtime_dir_env_var_not_set_and_no_run(self):
"""Fake that XDG_RUNTIME_DIR is not set and /run/user/ does not exist."""
if not self.xdg_runtime_dir:
self.skipTest('XDG_RUNTIME_DIR must be set to fetch a working dir.')
self.assertNotEqual(self.xdg_runtime_dir, api.path_utils.get_runtime_dir())
if __name__ == '__main__':
unittest.main()
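The three tests above encode a lookup order: honor `$XDG_RUNTIME_DIR` when set, fall back to `/run/user/<uid>` when it exists, and finally to a per-user temp directory. A sketch of that logic (the real `api.path_utils.get_runtime_dir` may differ in detail, e.g. directory creation and permissions, and the fallback directory name here is illustrative):

```python
import getpass
import os
import tempfile


def get_runtime_dir() -> str:
    """Resolve the user runtime directory, mirroring the XDG lookup order."""
    env_dir = os.environ.get("XDG_RUNTIME_DIR")
    if env_dir:
        return env_dir
    run_user = f"/run/user/{os.geteuid()}"
    if os.path.isdir(run_user):
        return run_user
    # Fallback name is illustrative, not the library's actual choice.
    return os.path.join(tempfile.gettempdir(), f"podman-run-{getpass.getuser()}")
```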


@@ -149,7 +149,7 @@ class PodTestCase(unittest.TestCase):
def test_stop(self, mock):
adapter = mock.post(
tests.LIBPOD_URL
- + "/pods/c8b9f5b17dc1406194010c752fc6dcb330192032e27648db9b14060447ecf3b8/stop?t=70.0",
+ + "/pods/c8b9f5b17dc1406194010c752fc6dcb330192032e27648db9b14060447ecf3b8/stop?t=70",
json={
"Errs": [],
"Id": "c8b9f5b17dc1406194010c752fc6dcb330192032e27648db9b14060447ecf3b8",
@@ -157,7 +157,7 @@ class PodTestCase(unittest.TestCase):
)
pod = Pod(attrs=FIRST_POD, client=self.client.api)
- pod.stop(timeout=70.0)
+ pod.stop(timeout=70)
self.assertTrue(adapter.called_once)
@requests_mock.Mocker()
@@ -180,8 +180,7 @@ class PodTestCase(unittest.TestCase):
"Titles": ["UID", "PID", "PPID", "C", "STIME", "TTY", "TIME CMD"],
}
adapter = mock.get(
- tests.LIBPOD_URL
- + "/pods"
+ tests.LIBPOD_URL + "/pods"
"/c8b9f5b17dc1406194010c752fc6dcb330192032e27648db9b14060447ecf3b8/top"
"?ps_args=aux&stream=False",
json=body,


@@ -5,15 +5,16 @@ from unittest import mock
from unittest.mock import MagicMock
import requests_mock
import xdg
from podman import PodmanClient, tests
from podman.api.path_utils import get_runtime_dir, get_xdg_config_home
class PodmanClientTestCase(unittest.TestCase):
"""Test the PodmanClient() object."""
- opener = mock.mock_open(read_data="""
+ opener = mock.mock_open(
+ read_data="""
[containers]
log_size_max = -1
pids_limit = 2048
@@ -32,7 +33,8 @@ class PodmanClientTestCase(unittest.TestCase):
identity = "/home/qe/.ssh/id_rsa"
[network]
- """)
+ """
+ )
def setUp(self) -> None:
super().setUp()
@@ -57,7 +59,7 @@ class PodmanClientTestCase(unittest.TestCase):
"os": "linux",
}
}
- adapter = mock.get(tests.LIBPOD_URL + "/info", json=body)
+ adapter = mock.get(tests.LIBPOD_URL + "/info", json=body)  # noqa: F841
with PodmanClient(base_url=tests.BASE_SOCK) as client:
actual = client.info()
@@ -86,7 +88,7 @@ class PodmanClientTestCase(unittest.TestCase):
)
# Build path to support tests running as root or a user
- expected = Path(xdg.BaseDirectory.xdg_config_home) / "containers" / "containers.conf"
+ expected = Path(get_xdg_config_home()) / "containers" / "containers.conf"
PodmanClientTestCase.opener.assert_called_with(expected, encoding="utf-8")
def test_connect_404(self):
@@ -98,16 +100,12 @@ class PodmanClientTestCase(unittest.TestCase):
with mock.patch.multiple(Path, open=self.mocked_open, exists=MagicMock(return_value=True)):
with PodmanClient() as client:
expected = "http+unix://" + urllib.parse.quote_plus(
- str(
- Path(xdg.BaseDirectory.get_runtime_dir(strict=False))
- / "podman"
- / "podman.sock"
- )
+ str(Path(get_runtime_dir()) / "podman" / "podman.sock")
)
self.assertEqual(client.api.base_url.geturl(), expected)
# Build path to support tests running as root or a user
- expected = Path(xdg.BaseDirectory.xdg_config_home) / "containers" / "containers.conf"
+ expected = Path(get_xdg_config_home()) / "containers" / "containers.conf"
PodmanClientTestCase.opener.assert_called_with(expected, encoding="utf-8")


@@ -1,7 +1,7 @@ import io
import io
import json
import unittest
- from typing import Iterable
+ from collections.abc import Iterable
import requests_mock
@@ -149,8 +149,7 @@ class PodsManagerTestCase(unittest.TestCase):
"Titles": ["UID", "PID", "PPID", "C", "STIME", "TTY", "TIME CMD"],
}
mock.get(
- tests.LIBPOD_URL
- + "/pods/stats"
+ tests.LIBPOD_URL + "/pods/stats"
"?namesOrIDs=c8b9f5b17dc1406194010c752fc6dcb330192032e27648db9b14060447ecf3b8",
json=body,
)
@@ -180,8 +179,7 @@ class PodsManagerTestCase(unittest.TestCase):
"Titles": ["UID", "PID", "PPID", "C", "STIME", "TTY", "TIME CMD"],
}
mock.get(
- tests.LIBPOD_URL
- + "/pods/stats"
+ tests.LIBPOD_URL + "/pods/stats"
"?namesOrIDs=c8b9f5b17dc1406194010c752fc6dcb330192032e27648db9b14060447ecf3b8",
json=body,
)


@@ -0,0 +1,44 @@
import unittest
from unittest.mock import patch, MagicMock
from podman.tests import utils
class TestPodmanVersion(unittest.TestCase):
@patch('podman.tests.utils.subprocess.Popen')
def test_podman_version(self, mock_popen):
mock_proc = MagicMock()
mock_proc.stdout.read.return_value = b'5.6.0'
mock_popen.return_value.__enter__.return_value = mock_proc
self.assertEqual(utils.podman_version(), (5, 6, 0))
@patch('podman.tests.utils.subprocess.Popen')
def test_podman_version_dev(self, mock_popen):
mock_proc = MagicMock()
mock_proc.stdout.read.return_value = b'5.6.0-dev'
mock_popen.return_value.__enter__.return_value = mock_proc
self.assertEqual(utils.podman_version(), (5, 6, 0))
@patch('podman.tests.utils.subprocess.Popen')
def test_podman_version_four_digits(self, mock_popen):
mock_proc = MagicMock()
mock_proc.stdout.read.return_value = b'5.6.0.1'
mock_popen.return_value.__enter__.return_value = mock_proc
self.assertEqual(utils.podman_version(), (5, 6, 0))
@patch('podman.tests.utils.subprocess.Popen')
def test_podman_version_release_candidate(self, mock_popen):
mock_proc = MagicMock()
mock_proc.stdout.read.return_value = b'5.6.0-rc1'
mock_popen.return_value.__enter__.return_value = mock_proc
self.assertEqual(utils.podman_version(), (5, 6, 0))
@patch('podman.tests.utils.subprocess.Popen')
def test_podman_version_none(self, mock_popen):
mock_proc = MagicMock()
mock_proc.stdout.read.return_value = b''
mock_popen.return_value.__enter__.return_value = mock_proc
with self.assertRaises(RuntimeError) as context:
utils.podman_version()
self.assertEqual(str(context.exception), "Unable to detect podman version. Got \"\"")
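All four positive cases reduce the raw version string to its first three numeric components, which `podman/tests/utils.py` (shown later in this diff) does with `re.match(r"(\d+\.\d+\.\d+)", ...)`. The same behavior in isolation (`normalize_version` is an illustrative name; the suite's actual helper is `utils.podman_version`, which also shells out to `podman info`):

```python
import re


def normalize_version(raw: str) -> tuple[int, ...]:
    """Reduce a podman version string to its three leading components."""
    match = re.match(r"(\d+\.\d+\.\d+)", raw)
    if not match:
        raise RuntimeError(f'Unable to detect podman version. Got "{raw}"')
    return tuple(int(x) for x in match.group(1).split("."))
```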


@@ -39,6 +39,13 @@ class VolumeTestCase(unittest.TestCase):
volume.remove(force=True)
self.assertTrue(adapter.called_once)
@requests_mock.Mocker()
def test_inspect(self, mock):
mock.get(tests.LIBPOD_URL + "/volumes/dbase/json?tlsVerify=False", json=FIRST_VOLUME)
vol_manager = VolumesManager(self.client.api)
actual = vol_manager.prepare_model(attrs=FIRST_VOLUME)
self.assertEqual(actual.inspect(tls_verify=False)["Mountpoint"], "/var/database")
if __name__ == '__main__':
unittest.main()

podman/tests/utils.py Normal file

@@ -0,0 +1,32 @@
import pathlib
import csv
import re
import subprocess
try:
from platform import freedesktop_os_release
except ImportError:
def freedesktop_os_release() -> dict[str, str]:
"""This is a fallback for platforms that don't have the freedesktop_os_release function.
Python < 3.10
"""
path = pathlib.Path("/etc/os-release")
with open(path) as f:
reader = csv.reader(f, delimiter="=")
return dict(reader)
def podman_version() -> tuple[int, ...]:
cmd = ["podman", "info", "--format", "{{.Version.Version}}"]
with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc:
version = proc.stdout.read().decode("utf-8").strip()
match = re.match(r"(\d+\.\d+\.\d+)", version)
if not match:
raise RuntimeError(f"Unable to detect podman version. Got \"{version}\"")
version = match.group(1)
return tuple(int(x) for x in version.split("."))
OS_RELEASE = freedesktop_os_release()
PODMAN_VERSION = podman_version()


@@ -1,4 +1,4 @@
"""Version of PodmanPy."""
- __version__ = "4.8.0"
+ __version__ = "5.6.0"
__compatible_version__ = "1.40"


@@ -1,33 +1,164 @@
[tool.black]
line-length = 100
skip-string-normalization = true
preview = true
target-version = ["py36"]
include = '\.pyi?$'
exclude = '''
/(
\.git
| \.tox
| \.venv
| \.history
| build
| dist
| docs
| hack
)/
'''
[tool.isort]
profile = "black"
line_length = 100
[build-system]
# Any changes should be copied into requirements.txt, setup.cfg, and/or test-requirements.txt
- requires = [
- "setuptools>=46.4",
- ]
+ requires = ["setuptools>=46.4"]
build-backend = "setuptools.build_meta"
[project]
name = "podman"
# TODO: remove the line version = ... on podman-py > 5.4.0 releases
# dynamic = ["version"]
version = "5.6.0"
description = "Bindings for Podman RESTful API"
readme = "README.md"
license = {file = "LICENSE"}
requires-python = ">=3.9"
authors = [
{ name = "Brent Baude" },
{ name = "Jhon Honce", email = "jhonce@redhat.com" },
{ name = "Urvashi Mohnani" },
{ name = "Nicola Sella", email = "nsella@redhat.com" },
]
keywords = [
"libpod",
"podman",
]
classifiers = [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
]
# compatible releases
# ~= with version numbers
dependencies = [
"requests >=2.24",
"tomli>=1.2.3; python_version<'3.11'",
"urllib3",
]
[project.optional-dependencies]
progress_bar = [
"rich >= 12.5.1",
]
docs = [
"sphinx"
]
test = [
"coverage",
"fixtures",
"pytest",
"requests-mock",
"tox",
]
[project.urls]
"Bug Tracker" = "https://github.com/containers/podman-py/issues"
Homepage = "https://github.com/containers/podman-py"
"Libpod API" = "https://docs.podman.io/en/latest/_static/api.html"
[tool.pytest.ini_options]
log_cli = true
log_cli_level = "DEBUG"
log_cli_format = "%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)"
log_cli_date_format = "%Y-%m-%d %H:%M:%S"
testpaths = [
"podman/tests",
]
[tool.setuptools.packages.find]
where = ["."]
include = ["podman*"]
# TODO: remove the line version = ... on podman-py > 5.4.0 releases
# [tool.setuptools.dynamic]
# version = {attr = "podman.version.__version__"}
[tool.ruff]
line-length = 100
src = ["podman"]
# This is the section where Black is mostly replaced with Ruff
[tool.ruff.format]
exclude = [
".git",
".history",
".tox",
".venv",
"build",
"dist",
"docs",
"hack",
]
quote-style = "preserve"
[tool.ruff.lint]
select = [
# More stuff here https://docs.astral.sh/ruff/rules/
"F", # Pyflakes
"E", # Pycodestyle Error
"W", # Pycodestyle Warning
"N", # PEP8 Naming
"UP", # Pyupgrade
# TODO "ANN",
# TODO "S", # Bandit
"B", # Bugbear
"A", # flake-8-builtins
"YTT", # flake-8-2020
"PLC", # Pylint Convention
"PLE", # Pylint Error
"PLW", # Pylint Warning
]
# Some checks should be enabled for code sanity disabled now
# to avoid changing too many lines
ignore = [
"N818", # TODO Error Suffix in exception name
]
[tool.ruff.lint.flake8-builtins]
builtins-ignorelist = ["copyright", "all"]
[tool.ruff.lint.per-file-ignores]
"podman/tests/*.py" = ["S"]
[tool.mypy]
install_types = true
non_interactive = true
allow_redefinition = true
no_strict_optional = true
ignore_missing_imports = true
[[tool.mypy.overrides]]
module = [
"podman.api.adapter_utils",
"podman.api.client",
"podman.api.ssh",
"podman.api.tar_utils",
"podman.api.uds",
"podman.domain.config",
"podman.domain.containers",
"podman.domain.containers_create",
"podman.domain.containers_run",
"podman.domain.events",
"podman.domain.images_build",
"podman.domain.images_manager",
"podman.domain.manager",
"podman.domain.manifests",
"podman.domain.networks",
"podman.domain.networks_manager",
"podman.domain.pods",
"podman.domain.pods_manager",
"podman.domain.registry_data",
"podman.domain.secrets",
"podman.domain.volumes",
"podman.errors.exceptions"
]
ignore_errors = true
[tool.coverage.report]
exclude_also = [
"unittest.main()",
]


@@ -1,9 +0,0 @@
# Any changes should be copied into pyproject.toml
pyxdg>=0.26
requests>=2.24
setuptools
sphinx
tomli>=1.2.3; python_version<'3.11'
urllib3
wheel
rich >= 12.5.1


@@ -81,6 +81,8 @@ export PBR_VERSION="0.0.0"
%pyproject_save_files %{pypi_name}
%endif
%check
%if %{defined rhel8_py}
%files -n python%{python3_pkgversion}-%{pypi_name}
%dir %{python3_sitelib}/%{pypi_name}-*-py%{python3_version}.egg-info
@@ -88,15 +90,11 @@ export PBR_VERSION="0.0.0"
%dir %{python3_sitelib}/%{pypi_name}
%{python3_sitelib}/%{pypi_name}/*
%else
%pyproject_extras_subpkg -n python%{python3_pkgversion}-%{pypi_name} progress_bar
%files -n python%{python3_pkgversion}-%{pypi_name} -f %{pyproject_files}
%endif
%license LICENSE
%doc README.md
%changelog
%if %{defined autochangelog}
%autochangelog
%else
* Mon May 01 2023 RH Container Bot <rhcontainerbot@fedoraproject.org>
- Placeholder changelog for envs that are not autochangelog-ready
%endif


@@ -1,7 +1,7 @@
[metadata]
name = podman
- version = 4.8.0
- author = Brent Baude, Jhon Honce
+ version = 5.6.0
+ author = Brent Baude, Jhon Honce, Urvashi Mohnani, Nicola Sella
author_email = jhonce@redhat.com
description = Bindings for Podman RESTful API
long_description = file: README.md
@@ -19,26 +19,28 @@ classifiers =
License :: OSI Approved :: Apache Software License
Operating System :: OS Independent
Programming Language :: Python :: 3 :: Only
- Programming Language :: Python :: 3.6
- Programming Language :: Python :: 3.7
- Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3.11
Programming Language :: Python :: 3.12
Programming Language :: Python :: 3.13
Topic :: Software Development :: Libraries :: Python Modules
keywords = podman, libpod
[options]
include_package_data = True
- python_requires = >=3.6
+ python_requires = >=3.9
test_suite =
# Any changes should be copied into pyproject.toml
install_requires =
pyxdg >=0.26
requests >=2.24
tomli>=1.2.3; python_version<'3.11'
urllib3
[options.extras_require]
progress_bar =
rich >= 12.5.1
# typing_extensions are included for RHEL 8.5
# typing_extensions;python_version<'3.8'


@@ -9,7 +9,7 @@ excluded = [
]
- class build_py(build_py_orig):
+ class build_py(build_py_orig):  # noqa: N801
def find_package_modules(self, package, package_dir):
modules = super().find_package_modules(package, package_dir)
return [


@@ -1,10 +0,0 @@
# Any changes should be copied into pyproject.toml
-r requirements.txt
black
coverage
fixtures
pylint
pytest
requests-mock >= 1.11.0
tox
rich >= 12.5.1

tests/main.fmf Normal file

@@ -0,0 +1,37 @@
require:
- make
- python3-pip
/lint:
tag: [ stable, lint ]
summary: Run linting on the whole codebase
test: cd .. && make lint
/coverage_integration:
tag: [ stable, coverage, integration ]
summary: Run integration tests coverage check
test: cd .. && make integration
/coverage_unittest:
tag: [ stable, coverage, unittest ]
summary: Run unit tests coverage check
test: cd .. && make unittest
/tests:
/base_python:
tag: [ base ]
summary: Run all tests on the base python version
test: cd .. && make tests-ci-base-python
duration: 10m
/base_python_pnext:
tag: [ pnext ]
summary: Run all tests on the base python version and podman-next
test: cd .. && make tests-ci-base-python-podman-next
duration: 5m
/all_python:
tag: [ matrix ]
summary: Run all tests for all python versions available
test: cd .. && make tests-ci-all-python
duration: 20m

Some files were not shown because too many files have changed in this diff.