I'm fairly consistently seeing an error in
DockerSuite.TestContainerApiRestartNotimeoutParam:
docker_api_containers_test.go:969:
c.Assert(status, check.Equals, http.StatusNoContent)
... obtained int = 500
... expected int = 204
And in the daemon logs I see:
INFO[0003] Container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971 failed to exit within 0 seconds of SIGTERM - using the force
ERRO[0003] Handler for POST /containers/{name:.*}/restart returned error: Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
ERRO[0003] HTTP Error err=Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
statusCode=500
Note the "container destroyed" error message. This is being generatd by
the libcontainer code and bubbled up in container.Kill() as a result of the
call to `container.killPossiblyDeadProcess(9)` on line 439.
See the comment in the code, but what I think is going on is that because we
don't have any timeout on the Stop() call we immediately try to force things
to stop. And by the time we get into the libcontainer code the process has
just finished stopping due to the initial signal, so this secondary sig-9
fails because the container is no longer running (i.e. it's 'destroyed').
Since we can't look for "container destroyed" to just ignore the error, because
some other driver might produce different text, I opted to just ignore the error
and keep going - with the assumption that if it couldn't send a sig-9 to the
process then it MUST be because it's already dead and not something else.
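A minimal standalone sketch of that approach (not the actual daemon code; the
process handling here is illustrative): send the SIGTERM, then treat a failure
of the follow-up SIGKILL as evidence the process already exited rather than as
a fatal error.

```go
package main

import (
	"log"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Stop() with no timeout means SIGTERM followed immediately by SIGKILL.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	// The process may already be gone by the time the SIGKILL arrives;
	// ignore the error and keep going, assuming "kill failed" means
	// "already dead" rather than anything else.
	if err := cmd.Process.Kill(); err != nil {
		log.Printf("ignoring kill error (process presumably exited): %v", err)
	}

	_ = cmd.Wait() // reap the process and confirm it has stopped
}
```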
To reproduce this I just run:
curl -v -X POST http://127.0.0.1:2375/v1.19/containers/8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971/restart
a few times and then it fails with the HTTP 500.
Would like to hear some other ideas on how to handle this since I'm not
thrilled with the proposed solution.
Signed-off-by: Doug Davis <dug@us.ibm.com>
* Don't AllocateNetwork when network is disabled
* Don't createNetwork in execdriver when network is disabled (see the sketch below)
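A compilable toy illustrating the guard (the types and function names here
are stand-ins, not the daemon's real ones): all network setup is skipped when
the container's config disables networking.

```go
package main

import "fmt"

type Config struct{ NetworkDisabled bool }
type Container struct{ Config Config }

// allocateNetwork stands in for the real daemon/execdriver network setup.
func allocateNetwork(c *Container) error {
	fmt.Println("allocating network resources")
	return nil
}

// initializeNetworking is a no-op when networking is disabled, so neither
// the AllocateNetwork nor the execdriver createNetwork path is reached.
func initializeNetworking(c *Container) error {
	if c.Config.NetworkDisabled {
		return nil
	}
	return allocateNetwork(c)
}

func main() {
	_ = initializeNetworking(&Container{Config: Config{NetworkDisabled: true}})
}
```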
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
We should let the user create a container even if the container whose IPC
namespace they want to join is not running; that check should be done at
start time. In this case, the running check is done by getIpcContainer()
when we start the container.
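A self-contained sketch of the idea (the container bookkeeping is invented
for illustration; only the getIpcContainer name comes from the change):
create succeeds regardless, and the running check fires at start time.

```go
package main

import (
	"errors"
	"fmt"
)

type Container struct {
	ID      string
	Running bool
}

var containers = map[string]*Container{"abc": {ID: "abc"}}

// getIpcContainer is consulted at start time: joining the IPC namespace of
// a container that is not running is an error, but merely referencing it
// at create time is not.
func getIpcContainer(id string) (*Container, error) {
	c, ok := containers[id]
	if !ok {
		return nil, fmt.Errorf("no such container: %s", id)
	}
	if !c.Running {
		return nil, errors.New("cannot join IPC of a non-running container")
	}
	return c, nil
}

func main() {
	// Create-time: no running check. Start-time: the check below fires.
	if _, err := getIpcContainer("abc"); err != nil {
		fmt.Println("start failed:", err)
	}
}
```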
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
Signed by all authors:
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Signed-off-by: David Calavera <david.calavera@gmail.com>
Signed-off-by: Jeff Lindsay <progrium@gmail.com>
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Signed-off-by: Luke Marsden <luke@clusterhq.com>
Sometimes container.cleanup() can be called from multiple paths for the
same container during error conditions, from both the monitor and the
regular startup path. So if the container network has already been
released, do not try to release it again.
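One way to make the release idempotent (a sketch; the flag and method names
are made up): guard the work with a mutex-protected flag so only the first
caller actually releases anything.

```go
package main

import "sync"

type Container struct {
	mu              sync.Mutex
	networkReleased bool
}

// releaseNetwork is safe to call from multiple cleanup paths: only the
// first caller does the actual work, later calls are no-ops.
func (c *Container) releaseNetwork() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.networkReleased {
		return
	}
	// ... release endpoints / sandbox resources here ...
	c.networkReleased = true
}

func main() {
	c := &Container{}
	c.releaseNetwork()
	c.releaseNetwork() // second call is a no-op
}
```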
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
The DOCKER_EXPERIMENTAL environment variable drives the activation of
the 'experimental' build tag.
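A sketch of the mechanism using Go's standard build-tag convention (the file
contents are illustrative): tagged code compiles only when the tag is passed,
and a build script would derive that tag from DOCKER_EXPERIMENTAL.

```go
//go:build experimental
// +build experimental

package main

import "fmt"

// This file only builds when the 'experimental' tag is set, e.g.:
//
//	go run -tags experimental exp.go
//
// Without the tag the file is excluded from the build entirely; the build
// scripts would set the tag whenever DOCKER_EXPERIMENTAL is present in
// the environment.
func main() {
	fmt.Println("experimental features enabled")
}
```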
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
As part of this, some generic packages like iptables, etchosts and resolvconf
have also been moved to libnetwork. Even though they can still be
consumed in a generic fashion, they will reside in, and be maintained
from within, the libnetwork project.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Updated Dockerfile to satisfy libnetwork GOPATH requirements.
- Reworked daemon to allocate network resources using libnetwork.
- Reworked remove link code to also update network resources in libnetwork.
- Adjusted the exec driver command population to reflect libnetwork design.
- Adjusted the exec driver create command steps.
- Updated a few test cases to reflect the change in design.
- Removed the dns setup code from docker as resolv.conf is entirely managed
in libnetwork.
- Integrated with lxc exec driver.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Prior to this patch, the responses of
- GET /images/json
- GET /containers/json
- GET /images/(name)/history
displayed the Created time as a raw UNIX timestamp, which doesn't make
sense. These should be more readable, as the CLI command `docker inspect`
shows. Because an older client may be talking to a newer daemon, we need
the version check for now.
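A sketch of the version-gated formatting (the helper and the boolean gate are
placeholders, not the real API-version plumbing): old clients keep the raw
unix timestamp, newer ones get a readable time.

```go
package main

import (
	"fmt"
	"time"
)

// formatCreated keeps the raw unix timestamp for older API versions and
// renders a readable time for newer ones; the bool stands in for a real
// API-version comparison, which is not reproduced here.
func formatCreated(created int64, oldAPI bool) string {
	if oldAPI {
		return fmt.Sprintf("%d", created)
	}
	return time.Unix(created, 0).UTC().Format(time.RFC3339)
}

func main() {
	fmt.Println(formatCreated(1431518400, true))  // 1431518400
	fmt.Println(formatCreated(1431518400, false)) // 2015-05-13T12:00:00Z
}
```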
Signed-off-by: Hu Keping <hukeping@huawei.com>
- noplog driver pkg for '--log-driver=none' (null object pattern)
- centralized factory for log drivers (instead of case/switch)
- logging drivers register themselves with the factory upon import
  (easy plug/unplug of drivers in daemon/logdrivers.go)
- daemon now doesn't start with an invalid log driver
- Name() method of loggers now returns their CLI names (made it useful)
- generalized Read() logic; made it unsupported except for json-file
  (preserves existing behavior)
Spotted some duplicated code around processing of the legacy json-file
format; didn't consolidate it, and instead refactored it in both places.
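A compilable sketch of the registration pattern (the interface and names are
simplified from the description above, not the exact ones in the tree):
drivers register a creator from init() at import time, and unknown names
fail fast.

```go
package main

import "fmt"

type Logger interface {
	Name() string // the driver's CLI name, e.g. "json-file" or "none"
}

type Creator func() (Logger, error)

var factory = map[string]Creator{}

// RegisterLogDriver is what each driver calls from its init(); enabling or
// disabling a driver is then just an import line in daemon/logdrivers.go.
func RegisterLogDriver(name string, c Creator) {
	factory[name] = c
}

// GetLogDriver errors on unknown names, so the daemon can refuse to start
// when given an invalid --log-driver value.
func GetLogDriver(name string) (Creator, error) {
	c, ok := factory[name]
	if !ok {
		return nil, fmt.Errorf("no such log driver: %s", name)
	}
	return c, nil
}

type nopLogger struct{}

func (nopLogger) Name() string { return "none" }

func init() {
	RegisterLogDriver("none", func() (Logger, error) { return nopLogger{}, nil })
}

func main() {
	if _, err := GetLogDriver("bogus"); err != nil {
		fmt.Println(err) // no such log driver: bogus
	}
}
```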
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Previously the cache was only updated once on startup, because the graph
code only checks for filesystems on startup. However, this broke the API
as it was supposed to behave, and with it the unit tests.
Fixes #13142
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Add a handler for SIGUSR1, based on feedback regarding when to dump
goroutine stacks. This will also dump goroutine stack traces on SIGQUIT,
followed by a hard exit from the daemon.
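A runnable sketch of such a handler (Unix-only; the buffer size and exit code
are arbitrary choices here, not the daemon's): SIGUSR1 dumps all goroutine
stacks, SIGQUIT dumps and then hard-exits.

```go
package main

import (
	"os"
	"os/signal"
	"runtime"
	"syscall"
)

func main() {
	c := make(chan os.Signal, 1)
	signal.Notify(c, syscall.SIGUSR1, syscall.SIGQUIT)
	for s := range c {
		// Dump all goroutine stacks to stderr.
		buf := make([]byte, 1<<20)
		n := runtime.Stack(buf, true)
		os.Stderr.Write(buf[:n])
		if s == syscall.SIGQUIT {
			os.Exit(2) // hard exit after the dump
		}
	}
}
```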
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Added a --since argument to the `docker logs` command. It accepts unix
timestamps and shows only logs created after the specified date. The
default value is 0, and passing the default value or not specifying the
value in the request causes the parameter to be ignored (the behavior
prior to this change).
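A sketch of the filtering rule (the Message type is invented for
illustration): a zero since value disables the filter, matching the old
behavior.

```go
package main

import (
	"fmt"
	"time"
)

type Message struct {
	Line      string
	Timestamp time.Time
}

// filterSince keeps only messages created after `since`; the zero value
// disables filtering, which matches the behavior prior to this change.
func filterSince(msgs []Message, since time.Time) []Message {
	if since.IsZero() {
		return msgs
	}
	var out []Message
	for _, m := range msgs {
		if m.Timestamp.After(since) {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	now := time.Now()
	msgs := []Message{
		{"old", now.Add(-2 * time.Hour)},
		{"new", now},
	}
	for _, m := range filterSince(msgs, now.Add(-time.Hour)) {
		fmt.Println(m.Line) // prints only "new"
	}
}
```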
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
If firewalld is not installed (or, I suppose, not running), the firewalld
check was producing an error in the daemon init logs, even though firewalld
is not required for the iptables stuff to function.
The firewalld library code was also logging directly to logrus instead
of returning errors.
Moved logging code higher up in the stack and changed firewalld code to
return errors where appropriate.
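A sketch of the resulting error-handling shape (the probe is stubbed out;
none of these names are the real firewalld package's API): the library
returns an error, and the caller higher up the stack decides whether and
how to log it.

```go
package main

import (
	"errors"
	"log"
)

// firewalldRunning probes for firewalld (stubbed out here) and returns an
// error instead of logging, leaving the logging policy to the caller.
func firewalldRunning() (bool, error) {
	// ... a real implementation would probe D-Bus here ...
	return false, errors.New("firewalld: not running")
}

func main() {
	if ok, err := firewalldRunning(); err != nil {
		// Higher up the stack we can decide this is merely informational,
		// since iptables works fine without firewalld.
		log.Printf("firewalld check: %v (continuing without it)", err)
	} else if ok {
		log.Println("using firewalld passthrough")
	}
}
```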
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Generation is based on CAP_LAST_CAP; I hardcoded
capability.CAP_BLOCK_SUSPEND as the last capability for systems which have
no /proc/sys/kernel/cap_last_cap.
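A runnable sketch of the fallback (the constant 36 for CAP_BLOCK_SUSPEND
matches the kernel's capability numbering, but treat it as an assumption
here):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// lastCap reads the highest capability number known to the kernel, falling
// back to a hardcoded CAP_BLOCK_SUSPEND on systems without
// /proc/sys/kernel/cap_last_cap.
func lastCap() int {
	const capBlockSuspend = 36 // assumed value of capability.CAP_BLOCK_SUSPEND
	data, err := os.ReadFile("/proc/sys/kernel/cap_last_cap")
	if err != nil {
		return capBlockSuspend
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return capBlockSuspend
	}
	return n
}

func main() { fmt.Println(lastCap()) }
```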
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
The docker graph calls driver.Exists() on initialisation for each filesystem
in the graph, which results in a lot of `zfs get all` commands. To reduce
this, retrieve all descendant filesystems at startup and cache them for
later checks.
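A sketch of the caching idea (the type and names are invented): enumerate
the descendant filesystems once at startup, then answer Exists() from an
in-memory set.

```go
package main

import (
	"fmt"
	"sync"
)

// fsCache holds the set of zfs filesystems, enumerated once at startup
// instead of running `zfs get all` for every Exists() call.
type fsCache struct {
	sync.Mutex
	filesystems map[string]bool
}

func newFsCache(names []string) *fsCache {
	c := &fsCache{filesystems: map[string]bool{}}
	for _, n := range names {
		c.filesystems[n] = true
	}
	return c
}

// Exists is now an in-memory lookup rather than a zfs process invocation.
func (c *fsCache) Exists(name string) bool {
	c.Lock()
	defer c.Unlock()
	return c.filesystems[name]
}

func main() {
	c := newFsCache([]string{"zpool/docker/layer1", "zpool/docker/layer2"})
	fmt.Println(c.Exists("zpool/docker/layer1")) // true, without a zfs call
}
```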
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Instead of letting zfs automatically mount datasets, mount them on demand
using mount(2). This speeds up the graph driver in 2 ways:
- fewer zfs processes are needed to start a container
- /proc/mounts stays smaller, so the zfs userspace tools have less to read
  (which can be a significant amount of data as the number of layers grows)
This way it can also be ensured that the correct mountpoint is always used.
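A minimal sketch of the direct mount (Linux-only; the dataset and mountpoint
strings are illustrative): mount(2) with filesystem type "zfs" takes the
dataset name as the source, so no zfs userspace process is involved and the
mountpoint is exactly the one requested.

```go
package main

import (
	"log"
	"syscall"
)

// mountDataset mounts a zfs dataset at an explicit mountpoint via
// mount(2); with filesystem type "zfs" the kernel resolves the dataset by
// name, so no zfs userspace tooling runs and the mountpoint is exactly
// the one we asked for.
func mountDataset(dataset, mountpoint string) error {
	return syscall.Mount(dataset, mountpoint, "zfs", 0, "")
}

func main() {
	// Illustrative names; requires root and an existing zfs dataset.
	if err := mountDataset("zpool/docker/abc123", "/var/lib/docker/zfs/graph/abc123"); err != nil {
		log.Fatal(err)
	}
}
```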
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>