Compare commits

...

39 Commits
main ... 4.4.2

Author SHA1 Message Date
aiordache 98eadb9f98 Release 4.4.2
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 16:08:12 +01:00
aiordache 93e02ab207 Revert "Support for docker.types.Placement.MaxReplicas (new in API 1.40) in Docker Swarm Service"
This reverts commit b701d5c999.
2021-02-15 15:49:01 +01:00
aiordache aa2ea7f7d5 Revert "Support for docker.types.Placement.MaxReplicas (new in API 1.40) in Docker Swarm Service"
This reverts commit 92e0e9c30d.
2021-02-15 15:48:57 +01:00
aiordache 0e82a7723f Revert "Unit and integration tests added"
This reverts commit ade4a52073.
2021-02-15 15:48:51 +01:00
Vlad Romanenko 672db57151 Fix doc formatting
Signed-off-by: Vlad Romanenko <vlad.romanenko@hotmail.com>
2021-02-15 15:44:29 +01:00
aiordache 50af8c8b01 Run unit tests in a container with no .docker/config mount
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
Stefan Scherer e7bdfe64b7 Use DOCKER_CONFIG to have creds in dind environment
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
2021-02-15 15:44:29 +01:00
Stefan Scherer 94f9627eb7 Revert back to wrappedNode
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
2021-02-15 15:44:29 +01:00
Stefan Scherer d7b6861e84 Remove wrappedNode
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
2021-02-15 15:44:29 +01:00
WojciechowskiPiotr ade4a52073 Unit and integration tests added
Signed-off-by: WojciechowskiPiotr <devel@it-playground.pl>
2021-02-15 15:44:29 +01:00
WojciechowskiPiotr 92e0e9c30d Support for docker.types.Placement.MaxReplicas (new in API 1.40) in Docker Swarm Service
Signed-off-by: WojciechowskiPiotr <devel@it-playground.pl>
2021-02-15 15:44:29 +01:00
Piotr Wojciechowski b701d5c999 Support for docker.types.Placement.MaxReplicas (new in API 1.40) in Docker Swarm Service
Signed-off-by: WojciechowskiPiotr <devel@it-playground.pl>
2021-02-15 15:44:29 +01:00
aiordache 3679ffcca8 Bump cffi to 1.14.4
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
aiordache 360be5987d Update GH action step
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
Stefan Scherer 26a8b5fc21 Update CI to ubuntu-2004
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
2021-02-15 15:44:29 +01:00
Christian Clauss 285d1a3de4 GitHub Actions: Upgrade actions/checkout
https://github.com/actions/checkout/releases
Signed-off-by: Christian Clauss <cclauss@me.com>
2021-02-15 15:44:29 +01:00
aiordache 8def46c01e Fix host trimming and remove quiet flag for the ssh connection
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
Christian Clauss 3e62517c61 setup.py: Add support for Python 3.8 and 3.9
Signed-off-by: Christian Clauss <cclauss@me.com>
2021-02-15 15:44:29 +01:00
Christian Clauss dd1d572b4f print() is a function in Python 3
Like #2740 but for the docs

Signed-off-by: Christian Clauss <cclauss@me.com>
2021-02-15 15:44:29 +01:00
aiordache c4775504a6 Update base image to `dockerpinata/docker-py` in Jenkinsfile
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
Christian Clauss 654e2d665c print() is a function in Python 3
Signed-off-by: Christian Clauss <cclauss@me.com>
2021-02-15 15:44:29 +01:00
Ulysses Souza 4ebeb36b46 Post 4.4.1 release
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-15 15:44:29 +01:00
Ulysses Souza 7f717753e9 Prepare release 4.4.1
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-15 15:44:29 +01:00
aiordache a13b72ae01 Avoid setting unsuported parameter for subprocess.Popen on Windows
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
Ulysses Souza ca3d5feb67 Trigger GHA on pull_request
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-15 15:44:29 +01:00
aiordache e61b2aabf0 Post-release v4.4.0
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
Ulysses Souza fb507738f3 Remove travis
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-15 15:44:29 +01:00
Ulysses Souza f915eef46f Add Github Actions
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-15 15:44:29 +01:00
Sebastiaan van Stijn 4f299822fd docker/api/image: replace use of deprecated "filter" argument
The "filter" argument was deprecated in docker 1.13 (API version 1.25),
and removed from API v1.41 and up. See https://github.com/docker/cli/blob/v20.10.0-rc1/docs/deprecated.md#filter-param-for-imagesjson-endpoint

This patch applies the name as "reference" filter, instead of "filter" for API
1.25 and up.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-15 15:44:29 +01:00
aiordache da26073c75 Mount docker config to DIND containers for authentication
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
aiordache c5022e1491 Update Jenkinsfile with docker registry credentials
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
aiordache 6e47f7ccf7 Syntax warning fix
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
aiordache 597f1a27b4 Fix docs typo
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
aiordache f18c038b9f Fix ssh connection - don't override the host and port of the http pool
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-15 15:44:29 +01:00
Daeseok Youn f464d9a430 Correct comments on ports_binding and host mode as network_mode
Signed-off-by: Daeseok Youn <daeseok.youn@navercorp.com>
2021-02-15 15:44:29 +01:00
Daeseok Youn 3a8565029b raise an error for binding specific ports in 'host' mode of network
Port bindings are ignored when the network mode is 'host'. Using these
options together can be a problem on macOS or Windows, where 'host'
network_mode is not available: the ports defined in 'ports' are silently
ignored, so the service cannot be reached through the ports the
developer defined.

Signed-off-by: Daeseok Youn <daeseok.youn@navercorp.com>
2021-02-15 15:44:29 +01:00
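The incompatibility this commit guards against can be sketched as a small validation helper (illustrative only; `check_host_mode_ports` is a hypothetical name, not docker-py's actual API):

```python
def check_host_mode_ports(network_mode, port_bindings):
    """Reject explicit port bindings when the container shares the host
    network stack, instead of silently ignoring them."""
    if network_mode == 'host' and port_bindings:
        raise ValueError(
            'port bindings are not supported in host network mode'
        )
    return port_bindings

# bridge mode with bindings is fine; host mode with bindings raises
check_host_mode_ports('bridge', {'80/tcp': 8080})
```

Raising early gives the developer an actionable error instead of a container that is silently unreachable on the expected ports.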
Mariano Scazzariello 817023f9ed Add max_pool_size parameter (#2699)
* Add max_pool_size parameter

Signed-off-by: Mariano Scazzariello <marianoscazzariello@gmail.com>

* Add client version to tests

Signed-off-by: Mariano Scazzariello <marianoscazzariello@gmail.com>

* Fix parameter position

Signed-off-by: Mariano Scazzariello <marianoscazzariello@gmail.com>
2021-02-15 15:44:29 +01:00
fengbaolong 9f5e35f5fe fix docker build error when dockerfile contains unicode character.
If the Dockerfile contains unicode characters, len(contents) returns the character count, which is less than len(contents_encoded), the byte count; sizing the payload by character count therefore truncates the contents data.

Signed-off-by: fengbaolong <fengbaolong@hotmail.com>
2021-02-15 15:44:29 +01:00
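The length mismatch this commit describes is easy to demonstrate (a standalone illustration, not the project's build code):

```python
# A Dockerfile line containing non-ASCII characters: the str length counts
# characters, while the encoded payload length counts bytes, and they differ.
contents = 'RUN echo "héllo wörld"'
contents_encoded = contents.encode('utf-8')

char_len = len(contents)           # 22 characters
byte_len = len(contents_encoded)   # 24 bytes (é and ö take two bytes each)
assert char_len < byte_len

# sizing the upload by char_len would drop the trailing bytes of the data;
# the fix is to measure len(contents_encoded) instead
```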
dependabot[bot] d0467badfb Bump cryptography from 2.3 to 3.2
Bumps [cryptography](https://github.com/pyca/cryptography) from 2.3 to 3.2.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/2.3...3.2)

Signed-off-by: dependabot[bot] <support@github.com>
2021-02-15 15:44:29 +01:00
24 changed files with 451 additions and 162 deletions

.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,27 @@
+name: Python package
+
+on: [push, pull_request]
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    strategy:
+      max-parallel: 1
+      matrix:
+        python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9]
+    steps:
+      - uses: actions/checkout@v2
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install -r test-requirements.txt -r requirements.txt
+      - name: Test with pytest
+        run: |
+          docker logout
+          rm -rf ~/.docker
+          py.test -v --cov=docker tests/unit

.travis.yml deleted

@@ -1,20 +0,0 @@
-sudo: false
-language: python
-matrix:
-  include:
-    - python: 2.7
-      env: TOXENV=py27
-    - python: 3.5
-      env: TOXENV=py35
-    - python: 3.6
-      env: TOXENV=py36
-    - python: 3.7
-      env: TOXENV=py37
-      dist: xenial
-      sudo: true
-    - env: TOXENV=flake8
-install:
-  - pip install tox==2.9.1
-script:
-  - tox

Jenkinsfile vendored

@@ -1,6 +1,6 @@
 #!groovy

-def imageNameBase = "dockerbuildbot/docker-py"
+def imageNameBase = "dockerpinata/docker-py"
 def imageNamePy2
 def imageNamePy3
 def imageDindSSH

@@ -18,24 +18,25 @@ def buildImage = { name, buildargs, pyTag ->
 }

 def buildImages = { ->
-  wrappedNode(label: "amd64 && ubuntu-1804 && overlay2", cleanWorkspace: true) {
+  wrappedNode(label: "amd64 && ubuntu-2004 && overlay2", cleanWorkspace: true) {
     stage("build image") {
       checkout(scm)
       imageNamePy2 = "${imageNameBase}:py2-${gitCommit()}"
       imageNamePy3 = "${imageNameBase}:py3-${gitCommit()}"
       imageDindSSH = "${imageNameBase}:sshdind-${gitCommit()}"
+      withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
         buildImage(imageDindSSH, "-f tests/Dockerfile-ssh-dind .", "")
         buildImage(imageNamePy2, "-f tests/Dockerfile --build-arg PYTHON_VERSION=2.7 .", "py2.7")
         buildImage(imageNamePy3, "-f tests/Dockerfile --build-arg PYTHON_VERSION=3.7 .", "py3.7")
+      }
     }
   }
 }

 def getDockerVersions = { ->
   def dockerVersions = ["19.03.12"]
-  wrappedNode(label: "amd64 && ubuntu-1804 && overlay2") {
+  wrappedNode(label: "amd64 && ubuntu-2004 && overlay2") {
     def result = sh(script: """docker run --rm \\
       --entrypoint=python \\
       ${imageNamePy3} \\

@@ -76,13 +77,21 @@ def runTests = { Map settings ->
   }

   { ->
-    wrappedNode(label: "amd64 && ubuntu-1804 && overlay2", cleanWorkspace: true) {
+    wrappedNode(label: "amd64 && ubuntu-2004 && overlay2", cleanWorkspace: true) {
       stage("test python=${pythonVersion} / docker=${dockerVersion}") {
         checkout(scm)
         def dindContainerName = "dpy-dind-\$BUILD_NUMBER-\$EXECUTOR_NUMBER-${pythonVersion}-${dockerVersion}"
         def testContainerName = "dpy-tests-\$BUILD_NUMBER-\$EXECUTOR_NUMBER-${pythonVersion}-${dockerVersion}"
         def testNetwork = "dpy-testnet-\$BUILD_NUMBER-\$EXECUTOR_NUMBER-${pythonVersion}-${dockerVersion}"
+        withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
           try {
+            // unit tests
+            sh """docker run --rm \\
+              -e 'DOCKER_TEST_API_VERSION=${apiVersion}' \\
+              ${testImage} \\
+              py.test -v -rxs --cov=docker tests/unit
+            """
+            // integration tests
             sh """docker network create ${testNetwork}"""
             sh """docker run --rm -d --name ${dindContainerName} -v /tmp --privileged --network ${testNetwork} \\
               ${imageDindSSH} dockerd -H tcp://0.0.0.0:2375

@@ -93,11 +102,11 @@ def runTests = { Map settings ->
               -e 'DOCKER_TEST_API_VERSION=${apiVersion}' \\
               --network ${testNetwork} \\
               --volumes-from ${dindContainerName} \\
+              -v $DOCKER_CONFIG/config.json:/root/.docker/config.json \\
               ${testImage} \\
-              py.test -v -rxs --cov=docker --ignore=tests/ssh tests/
+              py.test -v -rxs --cov=docker tests/integration
             """
             sh """docker stop ${dindContainerName}"""

             // start DIND container with SSH
             sh """docker run --rm -d --name ${dindContainerName} -v /tmp --privileged --network ${testNetwork} \\
               ${imageDindSSH} dockerd --experimental"""

@@ -109,6 +118,7 @@ def runTests = { Map settings ->
               -e 'DOCKER_TEST_API_VERSION=${apiVersion}' \\
               --network ${testNetwork} \\
               --volumes-from ${dindContainerName} \\
+              -v $DOCKER_CONFIG/config.json:/root/.docker/config.json \\
               ${testImage} \\
               py.test -v -rxs --cov=docker tests/ssh
             """

@@ -122,6 +132,7 @@ def runTests = { Map settings ->
           }
+        }
         }
       }
     }
 }

 buildImages()

README.md

@@ -58,7 +58,7 @@ You can stream logs:

 ```python
 >>> for line in container.logs(stream=True):
-...   print line.strip()
+...   print(line.strip())
 Reticulating spline 2...
 Reticulating spline 3...
 ...

docker/api/client.py

@@ -9,9 +9,9 @@ import websocket

 from .. import auth
 from ..constants import (DEFAULT_NUM_POOLS, DEFAULT_NUM_POOLS_SSH,
-                         DEFAULT_TIMEOUT_SECONDS, DEFAULT_USER_AGENT,
-                         IS_WINDOWS_PLATFORM, MINIMUM_DOCKER_API_VERSION,
-                         STREAM_HEADER_SIZE_BYTES)
+                         DEFAULT_MAX_POOL_SIZE, DEFAULT_TIMEOUT_SECONDS,
+                         DEFAULT_USER_AGENT, IS_WINDOWS_PLATFORM,
+                         MINIMUM_DOCKER_API_VERSION, STREAM_HEADER_SIZE_BYTES)
 from ..errors import (DockerException, InvalidVersion, TLSParameterError,
                       create_api_error_from_http_exception)
 from ..tls import TLSConfig

@@ -92,6 +92,8 @@ class APIClient(
         use_ssh_client (bool): If set to `True`, an ssh connection is made
             via shelling out to the ssh client. Ensure the ssh client is
             installed and configured on the host.
+        max_pool_size (int): The maximum number of connections
+            to save in the pool.
     """

     __attrs__ = requests.Session.__attrs__ + ['_auth_configs',

@@ -103,7 +105,8 @@ class APIClient(
     def __init__(self, base_url=None, version=None,
                  timeout=DEFAULT_TIMEOUT_SECONDS, tls=False,
                  user_agent=DEFAULT_USER_AGENT, num_pools=None,
-                 credstore_env=None, use_ssh_client=False):
+                 credstore_env=None, use_ssh_client=False,
+                 max_pool_size=DEFAULT_MAX_POOL_SIZE):
         super(APIClient, self).__init__()

         if tls and not base_url:

@@ -139,7 +142,8 @@ class APIClient(
         if base_url.startswith('http+unix://'):
             self._custom_adapter = UnixHTTPAdapter(
-                base_url, timeout, pool_connections=num_pools
+                base_url, timeout, pool_connections=num_pools,
+                max_pool_size=max_pool_size
             )
             self.mount('http+docker://', self._custom_adapter)
             self._unmount('http://', 'https://')

@@ -153,7 +157,8 @@ class APIClient(
             )
             try:
                 self._custom_adapter = NpipeHTTPAdapter(
-                    base_url, timeout, pool_connections=num_pools
+                    base_url, timeout, pool_connections=num_pools,
+                    max_pool_size=max_pool_size
                 )
             except NameError:
                 raise DockerException(

@@ -165,7 +170,7 @@ class APIClient(
             try:
                 self._custom_adapter = SSHHTTPAdapter(
                     base_url, timeout, pool_connections=num_pools,
-                    shell_out=use_ssh_client
+                    max_pool_size=max_pool_size, shell_out=use_ssh_client
                 )
             except NameError:
                 raise DockerException(

docker/api/container.py

@@ -523,6 +523,8 @@ class ContainerApiMixin(object):
             - ``container:<name|id>`` Reuse another container's network
               stack.
             - ``host`` Use the host network stack.
+              This mode is incompatible with ``port_bindings``.

         oom_kill_disable (bool): Whether to disable OOM killer.
         oom_score_adj (int): An integer value containing the score given
             to the container in order to tune OOM killer preferences.

@@ -532,6 +534,7 @@ class ContainerApiMixin(object):
             unlimited.
         port_bindings (dict): See :py:meth:`create_container`
             for more information.
+            Imcompatible with ``host`` in ``network_mode``.
         privileged (bool): Give extended privileges to this container.
         publish_all_ports (bool): Publish all ports to the host.
         read_only (bool): Mount the container's root filesystem as read

docker/api/image.py

@@ -81,10 +81,18 @@ class ImageApiMixin(object):
             If the server returns an error.
         """
         params = {
-            'filter': name,
             'only_ids': 1 if quiet else 0,
             'all': 1 if all else 0,
         }
+        if name:
+            if utils.version_lt(self._version, '1.25'):
+                # only use "filter" on API 1.24 and under, as it is deprecated
+                params['filter'] = name
+            else:
+                if filters:
+                    filters['reference'] = name
+                else:
+                    filters = {'reference': name}
         if filters:
             params['filters'] = utils.convert_filters(filters)
         res = self._result(self._get(self._url("/images/json"), params=params),
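The version-dependent parameter selection above can be sketched as a standalone function (a simplified stand-in for `ImageApiMixin.images`, with a minimal replacement for `docker.utils.version_lt`): on API versions below 1.25 the deprecated `filter` query parameter is used, while on 1.25 and later the name is folded into the `reference` filter.

```python
def build_image_list_params(name, api_version, filters=None,
                            quiet=False, all=False):
    """Build query params for /images/json across API versions."""
    def version_lt(v1, v2):  # minimal stand-in for docker.utils.version_lt
        return tuple(int(x) for x in v1.split('.')) < \
               tuple(int(x) for x in v2.split('.'))

    params = {'only_ids': 1 if quiet else 0, 'all': 1 if all else 0}
    if name:
        if version_lt(api_version, '1.25'):
            params['filter'] = name       # deprecated, pre-1.25 only
        else:
            filters = dict(filters or {})
            filters['reference'] = name   # modern replacement
    if filters:
        params['filters'] = filters       # the real code JSON-encodes these
    return params
```

For example, `build_image_list_params('alpine', '1.24')` yields a `filter` key, while `build_image_list_params('alpine', '1.41')` yields `{'filters': {'reference': 'alpine'}}` instead.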

docker/client.py

@@ -1,5 +1,5 @@
 from .api.client import APIClient
-from .constants import DEFAULT_TIMEOUT_SECONDS
+from .constants import (DEFAULT_TIMEOUT_SECONDS, DEFAULT_MAX_POOL_SIZE)
 from .models.configs import ConfigCollection
 from .models.containers import ContainerCollection
 from .models.images import ImageCollection

@@ -38,6 +38,8 @@ class DockerClient(object):
         use_ssh_client (bool): If set to `True`, an ssh connection is made
             via shelling out to the ssh client. Ensure the ssh client is
             installed and configured on the host.
+        max_pool_size (int): The maximum number of connections
+            to save in the pool.
     """
     def __init__(self, *args, **kwargs):
         self.api = APIClient(*args, **kwargs)

@@ -67,6 +69,8 @@ class DockerClient(object):
             version (str): The version of the API to use. Set to ``auto`` to
                 automatically detect the server's version. Default: ``auto``
             timeout (int): Default timeout for API calls, in seconds.
+            max_pool_size (int): The maximum number of connections
+                to save in the pool.
             ssl_version (int): A valid `SSL version`_.
             assert_hostname (bool): Verify the hostname of the server.
             environment (dict): The environment to read environment variables

@@ -86,10 +90,12 @@ class DockerClient(object):
                 https://docs.python.org/3.5/library/ssl.html#ssl.PROTOCOL_TLSv1
         """
         timeout = kwargs.pop('timeout', DEFAULT_TIMEOUT_SECONDS)
+        max_pool_size = kwargs.pop('max_pool_size', DEFAULT_MAX_POOL_SIZE)
         version = kwargs.pop('version', None)
         use_ssh_client = kwargs.pop('use_ssh_client', False)
         return cls(
             timeout=timeout,
+            max_pool_size=max_pool_size,
             version=version,
             use_ssh_client=use_ssh_client,
             **kwargs_from_env(**kwargs)

docker/constants.py

@@ -36,6 +36,8 @@ DEFAULT_NUM_POOLS = 25
 # For more details see: https://github.com/docker/docker-py/issues/2246
 DEFAULT_NUM_POOLS_SSH = 9

+DEFAULT_MAX_POOL_SIZE = 10
+
 DEFAULT_DATA_CHUNK_SIZE = 1024 * 2048
 DEFAULT_SWARM_ADDR_POOL = ['10.0.0.0/8']
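The way the new `max_pool_size` option flows from the client down to an adapter can be sketched with simplified stand-ins (these classes are illustrative, not the real docker-py ones): the client pops the kwarg with the new `DEFAULT_MAX_POOL_SIZE` constant as fallback, and the adapter hands it to its connection pool as `maxsize`.

```python
DEFAULT_MAX_POOL_SIZE = 10  # mirrors the constant added above

class FakeAdapter:
    """Stand-in for Unix/Npipe/SSH HTTP adapters."""
    def __init__(self, pool_connections=25,
                 max_pool_size=DEFAULT_MAX_POOL_SIZE):
        self.pool_connections = pool_connections  # number of cached pools
        self.max_pool_size = max_pool_size        # connections kept per pool

def make_client_adapter(**kwargs):
    """Stand-in for DockerClient plumbing the kwarg through to APIClient."""
    max_pool_size = kwargs.pop('max_pool_size', DEFAULT_MAX_POOL_SIZE)
    return FakeAdapter(max_pool_size=max_pool_size)

adapter = make_client_adapter(max_pool_size=50)
```

Note the distinction the real code preserves: `pool_connections` (`num_pools`) caps how many pools are cached, while `max_pool_size` caps how many connections each pool keeps.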

docker/models/containers.py

@@ -649,6 +649,7 @@ class ContainerCollection(Collection):
                 - ``container:<name|id>`` Reuse another container's network
                   stack.
                 - ``host`` Use the host network stack.
+                  This mode is incompatible with ``ports``.

                 Incompatible with ``network``.
             oom_kill_disable (bool): Whether to disable OOM killer.

@@ -682,6 +683,7 @@ class ContainerCollection(Collection):
                 to a single container port. For example,
                 ``{'1111/tcp': [1234, 4567]}``.
+                Incompatible with ``host`` network mode.
             privileged (bool): Give extended privileges to this container.
             publish_all_ports (bool): Publish all ports to the host.
             read_only (bool): Mount the container's root filesystem as read

docker/transport/npipeconn.py

@@ -73,12 +73,15 @@ class NpipeHTTPAdapter(BaseHTTPAdapter):

     __attrs__ = requests.adapters.HTTPAdapter.__attrs__ + ['npipe_path',
                                                            'pools',
-                                                           'timeout']
+                                                           'timeout',
+                                                           'max_pool_size']

     def __init__(self, base_url, timeout=60,
-                 pool_connections=constants.DEFAULT_NUM_POOLS):
+                 pool_connections=constants.DEFAULT_NUM_POOLS,
+                 max_pool_size=constants.DEFAULT_MAX_POOL_SIZE):
         self.npipe_path = base_url.replace('npipe://', '')
         self.timeout = timeout
+        self.max_pool_size = max_pool_size
         self.pools = RecentlyUsedContainer(
             pool_connections, dispose_func=lambda p: p.close()
         )

@@ -91,7 +94,8 @@ class NpipeHTTPAdapter(BaseHTTPAdapter):
             return pool

         pool = NpipeHTTPConnectionPool(
-            self.npipe_path, self.timeout
+            self.npipe_path, self.timeout,
+            maxsize=self.max_pool_size
         )
         self.pools[url] = pool

View File

@ -1,9 +1,9 @@
import io
import paramiko import paramiko
import requests.adapters import requests.adapters
import six import six
import logging import logging
import os import os
import signal
import socket import socket
import subprocess import subprocess
@ -23,64 +23,43 @@ except ImportError:
RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer
def create_paramiko_client(base_url):
logging.getLogger("paramiko").setLevel(logging.WARNING)
ssh_client = paramiko.SSHClient()
base_url = six.moves.urllib_parse.urlparse(base_url)
ssh_params = {
"hostname": base_url.hostname,
"port": base_url.port,
"username": base_url.username
}
ssh_config_file = os.path.expanduser("~/.ssh/config")
if os.path.exists(ssh_config_file):
conf = paramiko.SSHConfig()
with open(ssh_config_file) as f:
conf.parse(f)
host_config = conf.lookup(base_url.hostname)
ssh_conf = host_config
if 'proxycommand' in host_config:
ssh_params["sock"] = paramiko.ProxyCommand(
ssh_conf['proxycommand']
)
if 'hostname' in host_config:
ssh_params['hostname'] = host_config['hostname']
if 'identityfile' in host_config:
ssh_params['key_filename'] = host_config['identityfile']
if base_url.port is None and 'port' in host_config:
ssh_params['port'] = ssh_conf['port']
if base_url.username is None and 'user' in host_config:
ssh_params['username'] = ssh_conf['user']
ssh_client.load_system_host_keys()
ssh_client.set_missing_host_key_policy(paramiko.WarningPolicy())
return ssh_client, ssh_params
class SSHSocket(socket.socket): class SSHSocket(socket.socket):
def __init__(self, host): def __init__(self, host):
super(SSHSocket, self).__init__( super(SSHSocket, self).__init__(
socket.AF_INET, socket.SOCK_STREAM) socket.AF_INET, socket.SOCK_STREAM)
self.host = host self.host = host
self.port = None self.port = None
self.user = None
if ':' in host: if ':' in host:
self.host, self.port = host.split(':') self.host, self.port = host.split(':')
if '@' in self.host:
self.user, self.host = host.split('@')
self.proc = None self.proc = None
def connect(self, **kwargs): def connect(self, **kwargs):
port = '' if not self.port else '-p {}'.format(self.port) args = ['ssh']
args = [ if self.user:
'ssh', args = args + ['-l', self.user]
'-q',
self.host, if self.port:
port, args = args + ['-p', self.port]
'docker system dial-stdio'
] args = args + ['--', self.host, 'docker system dial-stdio']
preexec_func = None
if not constants.IS_WINDOWS_PLATFORM:
def f():
signal.signal(signal.SIGINT, signal.SIG_IGN)
preexec_func = f
self.proc = subprocess.Popen( self.proc = subprocess.Popen(
' '.join(args), ' '.join(args),
env=os.environ,
shell=True, shell=True,
stdout=subprocess.PIPE, stdout=subprocess.PIPE,
stdin=subprocess.PIPE) stdin=subprocess.PIPE,
preexec_fn=preexec_func)
def _write(self, data): def _write(self, data):
if not self.proc or self.proc.stdin.closed: if not self.proc or self.proc.stdin.closed:
@ -96,17 +75,18 @@ class SSHSocket(socket.socket):
def send(self, data): def send(self, data):
return self._write(data) return self._write(data)
def recv(self): def recv(self, n):
if not self.proc: if not self.proc:
raise Exception('SSH subprocess not initiated.' raise Exception('SSH subprocess not initiated.'
'connect() must be called first.') 'connect() must be called first.')
return self.proc.stdout.read() return self.proc.stdout.read(n)
def makefile(self, mode): def makefile(self, mode):
if not self.proc or self.proc.stdout.closed: if not self.proc:
buf = io.BytesIO() self.connect()
buf.write(b'\n\n') if six.PY3:
return buf self.proc.stdout.channel = self
return self.proc.stdout return self.proc.stdout
def close(self): def close(self):
@ -124,7 +104,7 @@ class SSHConnection(httplib.HTTPConnection, object):
) )
self.ssh_transport = ssh_transport self.ssh_transport = ssh_transport
self.timeout = timeout self.timeout = timeout
self.host = host self.ssh_host = host
def connect(self): def connect(self):
if self.ssh_transport: if self.ssh_transport:
@ -132,7 +112,7 @@ class SSHConnection(httplib.HTTPConnection, object):
sock.settimeout(self.timeout) sock.settimeout(self.timeout)
sock.exec_command('docker system dial-stdio') sock.exec_command('docker system dial-stdio')
else: else:
sock = SSHSocket(self.host) sock = SSHSocket(self.ssh_host)
sock.settimeout(self.timeout) sock.settimeout(self.timeout)
sock.connect() sock.connect()
@ -147,16 +127,13 @@ class SSHConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
'localhost', timeout=timeout, maxsize=maxsize 'localhost', timeout=timeout, maxsize=maxsize
) )
self.ssh_transport = None self.ssh_transport = None
self.timeout = timeout
if ssh_client: if ssh_client:
self.ssh_transport = ssh_client.get_transport() self.ssh_transport = ssh_client.get_transport()
self.timeout = timeout self.ssh_host = host
self.host = host
self.port = None
if ':' in host:
self.host, self.port = host.split(':')
def _new_conn(self): def _new_conn(self):
return SSHConnection(self.ssh_transport, self.timeout, self.host) return SSHConnection(self.ssh_transport, self.timeout, self.ssh_host)
# When re-using connections, urllib3 calls fileno() on our # When re-using connections, urllib3 calls fileno() on our
# SSH channel instance, quickly overloading our fd limit. To avoid this, # SSH channel instance, quickly overloading our fd limit. To avoid this,
@ -184,29 +161,71 @@ class SSHConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
class SSHHTTPAdapter(BaseHTTPAdapter): class SSHHTTPAdapter(BaseHTTPAdapter):
__attrs__ = requests.adapters.HTTPAdapter.__attrs__ + [ __attrs__ = requests.adapters.HTTPAdapter.__attrs__ + [
'pools', 'timeout', 'ssh_client', 'ssh_params' 'pools', 'timeout', 'ssh_client', 'ssh_params', 'max_pool_size'
] ]
     def __init__(self, base_url, timeout=60,
                  pool_connections=constants.DEFAULT_NUM_POOLS,
+                 max_pool_size=constants.DEFAULT_MAX_POOL_SIZE,
                  shell_out=True):
         self.ssh_client = None
         if not shell_out:
-            self.ssh_client, self.ssh_params = create_paramiko_client(base_url)
+            self._create_paramiko_client(base_url)
             self._connect()
-        base_url = base_url.lstrip('ssh://')
-        self.host = base_url
+        self.ssh_host = base_url
+        if base_url.startswith('ssh://'):
+            self.ssh_host = base_url[len('ssh://'):]
         self.timeout = timeout
+        self.max_pool_size = max_pool_size
         self.pools = RecentlyUsedContainer(
             pool_connections, dispose_func=lambda p: p.close()
         )
         super(SSHHTTPAdapter, self).__init__()

+    def _create_paramiko_client(self, base_url):
+        logging.getLogger("paramiko").setLevel(logging.WARNING)
+        self.ssh_client = paramiko.SSHClient()
+        base_url = six.moves.urllib_parse.urlparse(base_url)
+        self.ssh_params = {
+            "hostname": base_url.hostname,
+            "port": base_url.port,
+            "username": base_url.username
+        }
+        ssh_config_file = os.path.expanduser("~/.ssh/config")
+        if os.path.exists(ssh_config_file):
+            conf = paramiko.SSHConfig()
+            with open(ssh_config_file) as f:
+                conf.parse(f)
+            host_config = conf.lookup(base_url.hostname)
+            self.ssh_conf = host_config
+            if 'proxycommand' in host_config:
+                self.ssh_params["sock"] = paramiko.ProxyCommand(
+                    self.ssh_conf['proxycommand']
+                )
+            if 'hostname' in host_config:
+                self.ssh_params['hostname'] = host_config['hostname']
+            if base_url.port is None and 'port' in host_config:
+                self.ssh_params['port'] = self.ssh_conf['port']
+            if base_url.username is None and 'user' in host_config:
+                self.ssh_params['username'] = self.ssh_conf['user']
+        self.ssh_client.load_system_host_keys()
+        self.ssh_client.set_missing_host_key_policy(paramiko.WarningPolicy())

     def _connect(self):
         if self.ssh_client:
             self.ssh_client.connect(**self.ssh_params)

     def get_connection(self, url, proxies=None):
+        if not self.ssh_client:
+            return SSHConnectionPool(
+                ssh_client=self.ssh_client,
+                timeout=self.timeout,
+                maxsize=self.max_pool_size,
+                host=self.ssh_host
+            )
         with self.pools.lock:
             pool = self.pools.get(url)
             if pool:
@@ -219,7 +238,8 @@ class SSHHTTPAdapter(BaseHTTPAdapter):
             pool = SSHConnectionPool(
                 ssh_client=self.ssh_client,
                 timeout=self.timeout,
-                host=self.host
+                maxsize=self.max_pool_size,
+                host=self.ssh_host
             )
             self.pools[url] = pool
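The hostname bug this hunk fixes is a classic `str.lstrip` pitfall: `lstrip` treats its argument as a *set of characters*, not a prefix, so `lstrip('ssh://')` removes any leading `s`, `h`, `:` or `/` and eats the first letter of hostnames that happen to start with one. A minimal standalone demonstration:

```python
# str.lstrip interprets its argument as a set of characters, not a
# prefix: any leading 's', 'h', ':' or '/' is removed, which eats the
# first letter of hostnames such as "host.example".
url = "ssh://host.example"
assert url.lstrip('ssh://') == "ost.example"   # buggy: leading 'h' stripped too

# the fixed code slices off the literal prefix instead
host = url[len('ssh://'):] if url.startswith('ssh://') else url
assert host == "host.example"
```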

@@ -74,15 +74,18 @@ class UnixHTTPAdapter(BaseHTTPAdapter):
     __attrs__ = requests.adapters.HTTPAdapter.__attrs__ + ['pools',
                                                            'socket_path',
-                                                           'timeout']
+                                                           'timeout',
+                                                           'max_pool_size']

     def __init__(self, socket_url, timeout=60,
-                 pool_connections=constants.DEFAULT_NUM_POOLS):
+                 pool_connections=constants.DEFAULT_NUM_POOLS,
+                 max_pool_size=constants.DEFAULT_MAX_POOL_SIZE):
         socket_path = socket_url.replace('http+unix://', '')
         if not socket_path.startswith('/'):
             socket_path = '/' + socket_path
         self.socket_path = socket_path
         self.timeout = timeout
+        self.max_pool_size = max_pool_size
         self.pools = RecentlyUsedContainer(
             pool_connections, dispose_func=lambda p: p.close()
         )
@@ -95,7 +98,8 @@ class UnixHTTPAdapter(BaseHTTPAdapter):
             return pool

         pool = UnixHTTPConnectionPool(
-            url, self.socket_path, self.timeout
+            url, self.socket_path, self.timeout,
+            maxsize=self.max_pool_size
         )
         self.pools[url] = pool
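Both adapters cache their connection pools in urllib3's `RecentlyUsedContainer`, closing the least recently used pool when `pool_connections` is exceeded. A small sketch of that dispose behaviour (it imports urllib3's private `_collections` module, which is an implementation detail and could move in future urllib3 releases):

```python
from urllib3._collections import RecentlyUsedContainer

# When adding an entry would exceed the capacity, the least recently
# used entry is evicted and handed to dispose_func -- the adapters
# pass `lambda p: p.close()` so the evicted pool gets closed.
disposed = []
pools = RecentlyUsedContainer(2, dispose_func=disposed.append)
pools['unix://a'] = 'pool-a'
pools['unix://b'] = 'pool-b'
pools['unix://c'] = 'pool-c'   # capacity 2: 'pool-a' is evicted
assert disposed == ['pool-a']
```

Note that `max_pool_size` is a different knob: it is passed through as `maxsize` and caps connections *within* each pool, while `pool_connections` caps the number of pools.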

@@ -334,10 +334,11 @@ class HostConfig(dict):
         if dns_search:
             self['DnsSearch'] = dns_search

-        if network_mode:
-            self['NetworkMode'] = network_mode
-        elif network_mode is None:
-            self['NetworkMode'] = 'default'
+        if network_mode == 'host' and port_bindings:
+            raise host_config_incompatible_error(
+                'network_mode', 'host', 'port_bindings'
+            )
+        self['NetworkMode'] = network_mode or 'default'

         if restart_policy:
             if not isinstance(restart_policy, dict):
@@ -664,6 +665,13 @@ def host_config_value_error(param, param_value):
     return ValueError(error_msg.format(param, param_value))


+def host_config_incompatible_error(param, param_value, incompatible_param):
+    error_msg = '\"{1}\" {0} is incompatible with {2}'
+    return errors.InvalidArgument(
+        error_msg.format(param, param_value, incompatible_param)
+    )
+
+
 class ContainerConfig(dict):
     def __init__(
         self, version, image, command, hostname=None, user=None, detach=False,
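The new helper builds its message with positional `str.format` indices, so the offending value comes first and the parameter name second. A standalone check of just the formatting (no docker-py import needed):

```python
# Same format string as host_config_incompatible_error: index {1} is
# the value, {0} the parameter name, {2} the conflicting parameter.
error_msg = '"{1}" {0} is incompatible with {2}'
msg = error_msg.format('network_mode', 'host', 'port_bindings')
assert msg == '"host" network_mode is incompatible with port_bindings'
```

So calling `HostConfig(..., network_mode='host', port_bindings=...)` now raises `InvalidArgument` with that message instead of silently producing a host config the daemon would reject.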

@@ -105,8 +105,9 @@ def create_archive(root, files=None, fileobj=None, gzip=False,
         for name, contents in extra_files:
             info = tarfile.TarInfo(name)
-            info.size = len(contents)
-            t.addfile(info, io.BytesIO(contents.encode('utf-8')))
+            contents_encoded = contents.encode('utf-8')
+            info.size = len(contents_encoded)
+            t.addfile(info, io.BytesIO(contents_encoded))

     t.close()
     fileobj.seek(0)
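The fix matters because for non-ASCII text the character count and the UTF-8 byte count differ, and `TarInfo.size` must be the byte count or the archive is corrupt. A self-contained illustration of the corrected pattern:

```python
import io
import tarfile

contents = "héllo"                     # 5 characters...
encoded = contents.encode('utf-8')     # ...but 6 bytes ('é' is 2 bytes)
assert (len(contents), len(encoded)) == (5, 6)

buf = io.BytesIO()
with tarfile.open(mode='w', fileobj=buf) as t:
    info = tarfile.TarInfo('greeting.txt')
    info.size = len(encoded)           # byte length, as in the fix
    t.addfile(info, io.BytesIO(encoded))

buf.seek(0)
with tarfile.open(fileobj=buf) as t:
    # the member round-trips with its exact bytes
    assert t.extractfile('greeting.txt').read() == encoded
```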

@@ -1,2 +1,2 @@
-version = "4.4.0-dev"
+version = "4.4.2"
 version_info = tuple([int(d) for d in version.split("-")[0].split(".")])
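The `version_info` expression drops any pre-release suffix before parsing, so both release and `-dev` strings parse cleanly to an integer tuple:

```python
# Same expression as the version module, wrapped for reuse:
# split("-")[0] strips a "-dev" style suffix before parsing.
def parse(version):
    return tuple(int(d) for d in version.split("-")[0].split("."))

assert parse("4.4.2") == (4, 4, 2)
assert parse("4.4.0-dev") == (4, 4, 0)
```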

@@ -1,6 +1,47 @@
 Change log
 ==========

+4.4.2
+-----
+
+[List of PRs / issues for this release](https://github.com/docker/docker-py/milestone/71?closed=1)
+
+### Bugfixes
+- Fix SSH connection bug where the hostname was incorrectly trimmed and the error was hidden
+- Fix docs example
+
+### Miscellaneous
+- Add Python 3.8 and 3.9 to the setup.py classifier list
+
+4.4.1
+-----
+
+[List of PRs / issues for this release](https://github.com/docker/docker-py/milestone/69?closed=1)
+
+### Bugfixes
+- Avoid setting an unsupported parameter for subprocess.Popen on Windows
+- Replace use of the deprecated "filter" argument on "docker/api/image"
+
+4.4.0
+-----
+
+[List of PRs / issues for this release](https://github.com/docker/docker-py/milestone/67?closed=1)
+
+### Features
+- Add an alternative SSH connection to the paramiko one, based on shelling out to the SSH client, similar to the behaviour of the Docker CLI
+- Default image tag to `latest` on `pull`
+
+### Bugfixes
+- Fix plugin model upgrade
+- Fix examples URL in ulimits
+
+### Miscellaneous
+- Improve exception messages for server and client errors
+- Bump cryptography from 2.3 to 3.2
+
 4.3.1
 -----

@@ -58,7 +58,7 @@ You can stream logs:
 .. code-block:: python

     >>> for line in container.logs(stream=True):
-    ...   print line.strip()
+    ...   print(line.strip())
     Reticulating spline 2...
     Reticulating spline 3...
     ...

@@ -1,8 +1,8 @@
 appdirs==1.4.3
 asn1crypto==0.22.0
 backports.ssl-match-hostname==3.5.0.1
-cffi==1.10.0
-cryptography==2.3
+cffi==1.14.4
+cryptography==3.2
 enum34==1.1.6
 idna==2.5
 ipaddress==1.0.18

@@ -84,6 +84,8 @@ setup(
         'Programming Language :: Python :: 3.5',
         'Programming Language :: Python :: 3.6',
         'Programming Language :: Python :: 3.7',
+        'Programming Language :: Python :: 3.8',
+        'Programming Language :: Python :: 3.9',
         'Topic :: Software Development',
         'Topic :: Utilities',
         'License :: OSI Approved :: Apache Software License',

@@ -10,7 +10,7 @@ RUN apk add --no-cache \
 RUN ssh-keygen -A

 # copy the test SSH config
-RUN echo "IgnoreUserKnownHosts yes" >> /etc/ssh/sshd_config && \
+RUN echo "IgnoreUserKnownHosts yes" > /etc/ssh/sshd_config && \
     echo "PubkeyAuthentication yes" >> /etc/ssh/sshd_config && \
     echo "PermitRootLogin yes" >> /etc/ssh/sshd_config

@@ -26,7 +26,18 @@ class ImageTest(BaseAPIClientTest):
         fake_request.assert_called_with(
             'GET',
             url_prefix + 'images/json',
-            params={'filter': None, 'only_ids': 0, 'all': 1},
+            params={'only_ids': 0, 'all': 1},
+            timeout=DEFAULT_TIMEOUT_SECONDS
+        )
+
+    def test_images_name(self):
+        self.client.images('foo:bar')
+        fake_request.assert_called_with(
+            'GET',
+            url_prefix + 'images/json',
+            params={'only_ids': 0, 'all': 0,
+                    'filters': '{"reference": ["foo:bar"]}'},
             timeout=DEFAULT_TIMEOUT_SECONDS
         )

@@ -36,7 +47,7 @@ class ImageTest(BaseAPIClientTest):
         fake_request.assert_called_with(
             'GET',
             url_prefix + 'images/json',
-            params={'filter': None, 'only_ids': 1, 'all': 1},
+            params={'only_ids': 1, 'all': 1},
             timeout=DEFAULT_TIMEOUT_SECONDS
         )

@@ -46,7 +57,7 @@ class ImageTest(BaseAPIClientTest):
         fake_request.assert_called_with(
             'GET',
             url_prefix + 'images/json',
-            params={'filter': None, 'only_ids': 1, 'all': 0},
+            params={'only_ids': 1, 'all': 0},
             timeout=DEFAULT_TIMEOUT_SECONDS
         )

@@ -56,7 +67,7 @@ class ImageTest(BaseAPIClientTest):
         fake_request.assert_called_with(
             'GET',
             url_prefix + 'images/json',
-            params={'filter': None, 'only_ids': 0, 'all': 0,
+            params={'only_ids': 0, 'all': 0,
                     'filters': '{"dangling": ["true"]}'},
             timeout=DEFAULT_TIMEOUT_SECONDS
         )
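The `'filters'` values these tests assert against are just JSON-serialized dicts (the deprecated flat `filter` param is replaced by the structured `filters` query parameter). A standalone check that plain `json.dumps` yields exactly the strings the tests expect:

```python
import json

# The engine API's filters query parameter is a JSON-encoded map of
# filter name -> list of values; default json.dumps spacing matches
# the strings asserted in the tests above.
assert json.dumps({"reference": ["foo:bar"]}) == '{"reference": ["foo:bar"]}'
assert json.dumps({"dangling": ["true"]}) == '{"dangling": ["true"]}'
```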

@@ -5,7 +5,9 @@ import unittest
 import docker
 import pytest
 from docker.constants import (
-    DEFAULT_DOCKER_API_VERSION, DEFAULT_TIMEOUT_SECONDS)
+    DEFAULT_DOCKER_API_VERSION, DEFAULT_TIMEOUT_SECONDS,
+    DEFAULT_MAX_POOL_SIZE, IS_WINDOWS_PLATFORM
+)
 from docker.utils import kwargs_from_env

 from . import fake_api

@@ -15,8 +17,8 @@
 except ImportError:
     import mock

 TEST_CERT_DIR = os.path.join(os.path.dirname(__file__), 'testdata/certs')
+POOL_SIZE = 20


 class ClientTest(unittest.TestCase):

@@ -76,6 +78,84 @@ class ClientTest(unittest.TestCase):
         assert "'ContainerCollection' object is not callable" in s
         assert "docker.APIClient" in s

+    @pytest.mark.skipif(
+        IS_WINDOWS_PLATFORM, reason='Unix Connection Pool only on Linux'
+    )
+    @mock.patch("docker.transport.unixconn.UnixHTTPConnectionPool")
+    def test_default_pool_size_unix(self, mock_obj):
+        client = docker.DockerClient(
+            version=DEFAULT_DOCKER_API_VERSION
+        )
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        base_url = "{base_url}/v{version}/_ping".format(
+            base_url=client.api.base_url,
+            version=client.api._version
+        )
+
+        mock_obj.assert_called_once_with(base_url,
+                                         "/var/run/docker.sock",
+                                         60,
+                                         maxsize=DEFAULT_MAX_POOL_SIZE)
+
+    @pytest.mark.skipif(
+        not IS_WINDOWS_PLATFORM, reason='Npipe Connection Pool only on Windows'
+    )
+    @mock.patch("docker.transport.npipeconn.NpipeHTTPConnectionPool")
+    def test_default_pool_size_win(self, mock_obj):
+        client = docker.DockerClient(
+            version=DEFAULT_DOCKER_API_VERSION
+        )
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        mock_obj.assert_called_once_with("//./pipe/docker_engine",
+                                         60,
+                                         maxsize=DEFAULT_MAX_POOL_SIZE)
+
+    @pytest.mark.skipif(
+        IS_WINDOWS_PLATFORM, reason='Unix Connection Pool only on Linux'
+    )
+    @mock.patch("docker.transport.unixconn.UnixHTTPConnectionPool")
+    def test_pool_size_unix(self, mock_obj):
+        client = docker.DockerClient(
+            version=DEFAULT_DOCKER_API_VERSION,
+            max_pool_size=POOL_SIZE
+        )
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        base_url = "{base_url}/v{version}/_ping".format(
+            base_url=client.api.base_url,
+            version=client.api._version
+        )
+
+        mock_obj.assert_called_once_with(base_url,
+                                         "/var/run/docker.sock",
+                                         60,
+                                         maxsize=POOL_SIZE)
+
+    @pytest.mark.skipif(
+        not IS_WINDOWS_PLATFORM, reason='Npipe Connection Pool only on Windows'
+    )
+    @mock.patch("docker.transport.npipeconn.NpipeHTTPConnectionPool")
+    def test_pool_size_win(self, mock_obj):
+        client = docker.DockerClient(
+            version=DEFAULT_DOCKER_API_VERSION,
+            max_pool_size=POOL_SIZE
+        )
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        mock_obj.assert_called_once_with("//./pipe/docker_engine",
+                                         60,
+                                         maxsize=POOL_SIZE)
+

 class FromEnvTest(unittest.TestCase):

@@ -112,3 +192,77 @@ class FromEnvTest(unittest.TestCase):
         client = docker.from_env(version=DEFAULT_DOCKER_API_VERSION)
         assert client.api.timeout == DEFAULT_TIMEOUT_SECONDS
+
+    @pytest.mark.skipif(
+        IS_WINDOWS_PLATFORM, reason='Unix Connection Pool only on Linux'
+    )
+    @mock.patch("docker.transport.unixconn.UnixHTTPConnectionPool")
+    def test_default_pool_size_from_env_unix(self, mock_obj):
+        client = docker.from_env(version=DEFAULT_DOCKER_API_VERSION)
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        base_url = "{base_url}/v{version}/_ping".format(
+            base_url=client.api.base_url,
+            version=client.api._version
+        )
+
+        mock_obj.assert_called_once_with(base_url,
+                                         "/var/run/docker.sock",
+                                         60,
+                                         maxsize=DEFAULT_MAX_POOL_SIZE)
+
+    @pytest.mark.skipif(
+        not IS_WINDOWS_PLATFORM, reason='Npipe Connection Pool only on Windows'
+    )
+    @mock.patch("docker.transport.npipeconn.NpipeHTTPConnectionPool")
+    def test_default_pool_size_from_env_win(self, mock_obj):
+        client = docker.from_env(version=DEFAULT_DOCKER_API_VERSION)
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        mock_obj.assert_called_once_with("//./pipe/docker_engine",
+                                         60,
+                                         maxsize=DEFAULT_MAX_POOL_SIZE)
+
+    @pytest.mark.skipif(
+        IS_WINDOWS_PLATFORM, reason='Unix Connection Pool only on Linux'
+    )
+    @mock.patch("docker.transport.unixconn.UnixHTTPConnectionPool")
+    def test_pool_size_from_env_unix(self, mock_obj):
+        client = docker.from_env(
+            version=DEFAULT_DOCKER_API_VERSION,
+            max_pool_size=POOL_SIZE
+        )
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        base_url = "{base_url}/v{version}/_ping".format(
+            base_url=client.api.base_url,
+            version=client.api._version
+        )
+
+        mock_obj.assert_called_once_with(base_url,
+                                         "/var/run/docker.sock",
+                                         60,
+                                         maxsize=POOL_SIZE)
+
+    @pytest.mark.skipif(
+        not IS_WINDOWS_PLATFORM, reason='Npipe Connection Pool only on Windows'
+    )
+    @mock.patch("docker.transport.npipeconn.NpipeHTTPConnectionPool")
+    def test_pool_size_from_env_win(self, mock_obj):
+        client = docker.from_env(
+            version=DEFAULT_DOCKER_API_VERSION,
+            max_pool_size=POOL_SIZE
+        )
+        mock_obj.return_value.urlopen.return_value.status = 200
+        client.ping()
+
+        mock_obj.assert_called_once_with("//./pipe/docker_engine",
+                                         60,
+                                         maxsize=POOL_SIZE)
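All of the pool-size tests above follow the same patch-and-assert pattern: patch the connection-pool class where it is looked up, drive the client, then assert on the constructor arguments. A self-contained sketch of that pattern on a hypothetical `Pool` class (the names here are illustrative, not docker-py's):

```python
import sys
from unittest import mock

# Illustrative stand-ins, not docker-py classes.
class Pool:
    def __init__(self, host, timeout, maxsize=10):
        self.host, self.timeout, self.maxsize = host, timeout, maxsize

def make_pool(host, maxsize):
    # the code under test constructs a Pool internally, just as
    # DockerClient constructs a connection pool on first use
    return Pool(host, 60, maxsize=maxsize)

# Patch the class in the module where it is referenced, run the
# caller, then verify the arguments the constructor received.
with mock.patch.object(sys.modules[__name__], "Pool") as mock_pool:
    make_pool("unix:///var/run/docker.sock", maxsize=20)
    mock_pool.assert_called_once_with(
        "unix:///var/run/docker.sock", 60, maxsize=20
    )
```

Patching where the name is *looked up* (this module) rather than where it is defined is the key detail the real tests rely on when they patch `docker.transport.unixconn.UnixHTTPConnectionPool`.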