container.Config.NetworkDisabled is set both for the daemon's
DisableNetwork setting and for the --networking=false case. Hence use
this flag instead, to fix #13725.
There is an existing integration test to catch this issue,
but it passes for the wrong reasons.
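A minimal sketch of the check, with names as I understand them (not the exact diff):

    // Covers both the daemon-wide DisableNetwork setting and a
    // per-container --networking=false.
    if container.Config.NetworkDisabled {
        return nil // skip network allocation entirely
    }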
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Move some calls to container.LogEvent later so that there's
less of a chance of them being missed. Also add a few more events
that appear to have been missed.
Added test cases for the new events: commit, copy, resize, attach, rename, top
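As an illustration of the reordering (sketch only; `containerRename` is a hypothetical helper), the event is now emitted after the operation succeeds rather than before:

    if err := daemon.containerRename(container, newName); err != nil {
        return err // a failed rename no longer emits a "rename" event
    }
    container.LogEvent("rename")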
Signed-off-by: Doug Davis <dug@us.ibm.com>
Merge user-specified devices correctly with default devices.
Otherwise the user-specified devices end up without permissions.
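A sketch of the merge, assuming a Device type keyed by its path on the host (names are illustrative):

    merged := make(map[string]Device)
    for _, d := range defaultDevices {
        merged[d.PathOnHost] = d
    }
    // A user-specified device for the same host path replaces the
    // default entry, so the user's permission string is kept.
    for _, d := range userDevices {
        merged[d.PathOnHost] = d
    }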
Signed-off-by: David R. Jenni <david.r.jenni@gmail.com>
When using a scanner, log lines over 64K will crash the Copier with
bufio.ErrTooLong. Subsequently, the ioutils.bufReader will grow without
bound as the logs are no longer being flushed to disk.
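A sketch of the general shape of the fix: read with a bufio.Reader, which has no fixed line-length cap, instead of a bufio.Scanner, whose default token limit is 64K:

    reader := bufio.NewReader(src)
    for {
        line, err := reader.ReadString('\n')
        if len(line) > 0 {
            writeLog(line) // hypothetical sink that flushes to the log driver
        }
        if err != nil {
            break // io.EOF, or a real read error
        }
    }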
Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
daemon.Diff already implements mounting for the naive graphdriver, and
aufs, which does its own diffing, does not need the container to be mounted.
So new filesystem drivers should mount filesystems on their own if that is
needed to implement Diff(). This issue was reported by @kvasdopil while
working on a FreeBSD port, because FreeBSD does not allow mounting an
already mounted filesystem. It also saves some cycles on other operating
systems.
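A sketch of the pattern for new drivers, loosely modeled on the naive diff path (not verbatim daemon code): Diff takes its own mount reference and releases it once the caller has consumed the archive.

    func (d *Driver) Diff(id, parent string) (archive.Archive, error) {
        layerFs, err := d.Get(id, "") // the driver mounts the layer itself
        if err != nil {
            return nil, err
        }
        diff, err := archive.Tar(layerFs, archive.Uncompressed)
        if err != nil {
            d.Put(id)
            return nil, err
        }
        return ioutils.NewReadCloserWrapper(diff, func() error {
            err := diff.Close()
            d.Put(id) // unmount only after the archive is fully read
            return err
        }), nil
    }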
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
I ran into a situation where I was trying:
`docker rmi busybox`
and it kept failing saying:
`could not find image: Prefix can't be empty`
While I have no idea how I got into this situation, it turns out this
error message is from `daemon.canDeleteImage()`. In that func we loop over
all containers, checking to see if they're using the image we're trying to
delete. In my case, though, I had a container with no ImageID. So the code
would die trying to find that image (hence the "Prefix can't be empty" err).
This would stop all processing, despite the fact that the container we're
checking had nothing to do with 'busybox'.
My change logs the bad situation and then skips that container.
There's no reason to fail all `docker rmi ...` calls just because of one
bad container.
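A self-contained sketch of the new behavior (the types and names here are illustrative, not the actual daemon code):

    import "log"

    type Container struct {
        ID      string
        ImageID string
    }

    // imageInUse reports whether any container uses the given image,
    // logging and skipping containers that have no ImageID at all.
    func imageInUse(containers []*Container, imageID string) bool {
        for _, c := range containers {
            if c.ImageID == "" {
                log.Printf("skipping container %s: it has no ImageID", c.ID)
                continue
            }
            if c.ImageID == imageID {
                return true
            }
        }
        return false
    }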
I will continue to try to figure out how I got a container without an
ImageID, but as of now I have no idea; I didn't do anything but normal
docker CLI commands.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Fixes a regression from the volumes refactor where the vfs graphdriver
was setting labels for volumes to `s0` so that they could both be written
to by the container and shared with other containers.
When moving away from vfs this was never re-introduced.
Since this needs to happen regardless of volume driver, this is
implemented outside of the driver.
Fixes issue where `z` and `Z` labels are not set for bind-mounts.
Don't lock while creating volumes
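A self-contained sketch of how the z/Z bind-mount modes map to relabel behavior (illustrative; as described above, the real logic lives outside the drivers):

    import "strings"

    // parseRelabel inspects a mode string such as "rw,Z": "z" asks for
    // a shared relabel (usable by several containers at once), "Z" for
    // a private relabel (this container only).
    func parseRelabel(mode string) (relabel, shared bool) {
        for _, o := range strings.Split(mode, ",") {
            switch o {
            case "z":
                return true, true
            case "Z":
                return true, false
            }
        }
        return false, false
    }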
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
UnmountVolumes used to also unmount `specialMounts`, but it no longer does
after a recent refactor of volumes. This patch corrects this behavior to
include unmounting of `networkMounts`, which replaces `specialMounts` (now
dead code).
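Sketch of the corrected pass (method names follow the commit message, not necessarily the code):

    func (container *Container) UnmountVolumes() error {
        // networkMounts() replaces the old specialMounts(); these must
        // be unmounted along with the regular volume mounts.
        mounts := append(container.volumeMounts(), container.networkMounts()...)
        for _, m := range mounts {
            if err := m.Unmount(); err != nil {
                return err
            }
        }
        return nil
    }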
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
I'm fairly consistently seeing an error in
DockerSuite.TestContainerApiRestartNotimeoutParam:
docker_api_containers_test.go:969:
c.Assert(status, check.Equals, http.StatusNoContent)
... obtained int = 500
... expected int = 204
And in the daemon logs I see:
INFO[0003] Container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971 failed to exit within 0 seconds of SIGTERM - using the force
ERRO[0003] Handler for POST /containers/{name:.*}/restart returned error: Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
ERRO[0003] HTTP Error err=Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
statusCode=500
Note the "container destroyed" error message. This is being generatd by
the libcontainer code and bubbled up in container.Kill() as a result of the
call to `container.killPossiblyDeadProcess(9)` on line 439.
See the comment in the code, but what I think is going on is that, because we
don't have any timeout on the Stop() call, we immediately try to force things
to stop. And by the time we get into the libcontainer code the process has
just finished stopping due to the initial signal, so this secondary sig-9
fails because the container is no longer running (i.e. it's 'destroyed').
Since we can't look for "container destroyed" to just ignore the error, because
some other driver might have different text, I opted to just ignore the error
and keep going - with the assumption that if it couldn't send a sig-9 to the
process then it MUST be because it's already dead and not something else.
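The proposed handling, as a sketch:

    // If the forced SIGKILL fails, assume the process already exited on
    // its own rather than failing the whole restart with an HTTP 500.
    if err := container.killPossiblyDeadProcess(9); err != nil {
        logrus.Debugf("Cannot SIGKILL container %s, assuming it already exited: %v", container.ID, err)
    }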
To reproduce this I just run:
curl -v -X POST http://127.0.0.1:2375/v1.19/containers/8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971/restart
a few times and then it fails with the HTTP 500.
Would like to hear some other ideas on how to handle this, since I'm not
thrilled with the proposed solution.
Signed-off-by: Doug Davis <dug@us.ibm.com>
* Don't AllocateNetwork when network is disabled
* Don't createNetwork in execdriver when network is disabled
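Both cases reduce to the same guard, sketched here (names approximate the daemon and execdriver code):

    if !container.Config.NetworkDisabled {
        if err := container.AllocateNetwork(); err != nil {
            return err
        }
    }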
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
We should let the user create a container even if the container whose IPC
namespace they want to join is not running; that check should be done at
start time.
In this case, the running check is done by getIpcContainer() when
we start the container.
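A sketch of that start-time check (close to, but not necessarily verbatim, getIpcContainer()):

    func (daemon *Daemon) getIpcContainer(id string) (*Container, error) {
        c, err := daemon.Get(id)
        if err != nil {
            return nil, err
        }
        if !c.IsRunning() {
            return nil, fmt.Errorf("cannot join IPC of a non running container: %s", id)
        }
        return c, nil
    }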
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
Signed by all authors:
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Signed-off-by: David Calavera <david.calavera@gmail.com>
Signed-off-by: Jeff Lindsay <progrium@gmail.com>
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Signed-off-by: Luke Marsden <luke@clusterhq.com>
Sometimes container.cleanup() can be called from multiple paths
for the same container during error conditions, from both the monitor and
the regular startup path. So if the container network has already been
released, do not try to release it again.
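A sketch of making the release idempotent (the guard field is illustrative):

    func (container *Container) ReleaseNetwork() {
        if container.networkReleased {
            return // already released by another cleanup path
        }
        container.networkReleased = true
        // actual release of endpoint and sandbox resources goes here
    }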
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
The DOCKER_EXPERIMENTAL environment variable drives the activation of
the 'experimental' build tag.
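For example, a feature gated this way lives in a file carrying the build constraint (sketch):

    // +build experimental

    package daemon

    // Code in this file is compiled only when DOCKER_EXPERIMENTAL
    // activates the 'experimental' build tag.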
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
As part of this, some generic packages like iptables, etchosts and resolvconf
have also been moved to libnetwork. Even though they can still be
consumed in a generic fashion, they will reside in, and be maintained
from within, the libnetwork project.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Updated Dockerfile to satisfy libnetwork GOPATH requirements.
- Reworked daemon to allocate network resources using libnetwork.
- Reworked remove link code to also update network resources in libnetwork.
- Adjusted the exec driver command population to reflect libnetwork design.
- Adjusted the exec driver create command steps.
- Updated a few test cases to reflect the change in design.
- Removed the DNS setup code from docker, as resolv.conf is entirely managed
in libnetwork.
- Integrated with lxc exec driver.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>