container.config.NetworkDisabled is set both for the daemon's
DisableNetwork setting and for the --networking=false case. Hence use
this flag instead, to fix #13725.
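A minimal runnable sketch of the idea, with simplified stand-in types rather than the actual daemon structs:

```go
package main

import "fmt"

// Simplified stand-ins for the daemon's container types.
type Config struct{ NetworkDisabled bool }

type Container struct{ Config *Config }

// Config.NetworkDisabled is the one flag set in both cases, so network
// setup keys off it rather than off the daemon-wide setting alone.
func allocateNetwork(c *Container) error {
	if c.Config.NetworkDisabled {
		return nil // nothing to allocate
	}
	fmt.Println("allocating network resources")
	return nil
}

func main() {
	_ = allocateNetwork(&Container{Config: &Config{NetworkDisabled: true}})
}
```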
There is an existing integration-test to catch this issue,
but it is working for the wrong reasons.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Move some calls to container.LogEvent further down so that there's
less of a chance of them being missed. Also add a few more events
that appear to have been missed.
Added testcases for new events: commit, copy, resize, attach, rename, top
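A sketch of the reordering, using a hypothetical resize handler (names are illustrative):

```go
package main

import "fmt"

type Container struct{ ID string }

func (c *Container) LogEvent(action string) { fmt.Println("event:", action) }
func (c *Container) Resize(h, w int) error  { return nil }

// Emitting the event only after the operation succeeds means a failed
// resize no longer produces a misleading event, and keeping the call at
// the end of the handler makes it harder to skip on early returns.
func resizeHandler(c *Container, h, w int) error {
	if err := c.Resize(h, w); err != nil {
		return err
	}
	c.LogEvent("resize")
	return nil
}

func main() { _ = resizeHandler(&Container{ID: "abc"}, 24, 80) }
```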
Signed-off-by: Doug Davis <dug@us.ibm.com>
Merge user-specified devices correctly with the default devices.
Otherwise the user-specified devices end up without permissions.
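A minimal sketch of the merge, assuming a simplified device description:

```go
package main

import "fmt"

// Simplified stand-in for the exec driver's device description.
type Device struct {
	Path        string
	Permissions string
}

// mergeDevices keeps every default device but lets a user-specified
// device (with its own permissions) override the default at the same
// path, instead of the user entry ending up without permissions.
func mergeDevices(defaults, user []Device) []Device {
	byPath := make(map[string]Device, len(defaults)+len(user))
	for _, d := range defaults {
		byPath[d.Path] = d
	}
	for _, d := range user {
		byPath[d.Path] = d // the user entry wins, permissions intact
	}
	merged := make([]Device, 0, len(byPath))
	for _, d := range byPath {
		merged = append(merged, d)
	}
	return merged
}

func main() {
	fmt.Println(mergeDevices(
		[]Device{{Path: "/dev/null", Permissions: "rwm"}},
		[]Device{{Path: "/dev/fuse", Permissions: "rwm"}},
	))
}
```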
Signed-off-by: David R. Jenni <david.r.jenni@gmail.com>
When using a scanner, log lines over 64K will crash the Copier with
bufio.ErrTooLong. Subsequently, the ioutils.bufReader will grow without
bound as the logs are no longer being flushed to disk.
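A sketch of the fix's shape: read with bufio.Reader, which grows its buffer as needed, instead of bufio.Scanner with its fixed token limit:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// copyLines reads line by line with bufio.Reader instead of
// bufio.Scanner, so a single log line over 64K no longer aborts the
// copy loop with bufio.ErrTooLong.
func copyLines(src io.Reader, handle func([]byte)) error {
	reader := bufio.NewReader(src)
	for {
		line, err := reader.ReadBytes('\n')
		if len(line) > 0 {
			handle(line)
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	big := strings.Repeat("x", 100*1024) + "\n" // >64K, handled fine here
	copyLines(strings.NewReader(big), func(b []byte) { fmt.Println(len(b)) })
}
```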
Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
daemon.Diff already implements mounting for the naive graphdriver, and
aufs, which does its diffing on its own, does not need the container to be
mounted. So new filesystem drivers should mount filesystems themselves if that
is needed to implement Diff(). This issue was reported by @kvasdopil while
working on a FreeBSD port, because FreeBSD does not allow mounting an already
mounted filesystem. It also saves some cycles on other operating systems.
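An illustrative skeleton of the new contract (not the real graphdriver interface):

```go
package main

import "fmt"

type Driver struct{}

func (d *Driver) mount(id string) (string, error) { return "/mnt/" + id, nil }
func (d *Driver) unmount(id string) error         { return nil }

// Diff mounts on demand and unmounts when done, instead of assuming
// the graph code mounted the container beforehand.
func (d *Driver) Diff(id string) (string, error) {
	dir, err := d.mount(id)
	if err != nil {
		return "", err
	}
	defer d.unmount(id)
	// produce the layer diff from the mounted directory...
	return "archive of " + dir, nil
}

func main() {
	out, err := (&Driver{}).Diff("layer1")
	fmt.Println(out, err)
}
```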
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
I ran into a situation where I was trying:
`docker rmi busybox`
and it kept failing saying:
`could not find image: Prefix can't be empty`
While I have no idea how I got into this situation, it turns out this
error message comes from `daemon.canDeleteImage()`. In that func we loop over
all containers checking to see if they're using the image we're trying to
delete. In my case though, I had a container with no ImageID. So the code
would die trying to find that image (hence the "Prefix can't be empty" err).
This would stop all processing despite the fact that the container we're
checking had nothing to do with 'busybox'.
My change logs the bad situation in the logs and then skips that container.
There's no reason to fail all `docker rmi ...` calls just because of one
bad container.
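The shape of the change, sketched with simplified types (imageInUse is a hypothetical name for the check inside canDeleteImage):

```go
package main

import (
	"fmt"
	"log"
)

type Container struct {
	ID      string
	ImageID string
}

// A container with an empty ImageID is logged and skipped instead of
// aborting the whole image lookup with "Prefix can't be empty".
func imageInUse(imageID string, containers []*Container) bool {
	for _, c := range containers {
		if c.ImageID == "" {
			log.Printf("skipping container %s with no image ID", c.ID)
			continue
		}
		if c.ImageID == imageID {
			return true
		}
	}
	return false
}

func main() {
	cs := []*Container{{ID: "bad"}, {ID: "ok", ImageID: "busybox-id"}}
	fmt.Println(imageInUse("busybox-id", cs))
}
```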
Will continue to try to figure out how I got a container without an ImageID,
but as of now I have no idea; I didn't do anything but normal docker cli
commands.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Fixes a regression from the volumes refactor where the vfs graphdriver
was setting the label for volumes to `s0` so that they could both be written
to by the container and shared with other containers.
When moving away from vfs this was never re-introduced.
Since this needs to happen regardless of volume driver, this is
implemented outside of the driver.
Fixes issue where `z` and `Z` labels are not set for bind-mounts.
Don't lock while creating volumes
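A minimal sketch of the label decision living outside the drivers; the real code would hand the path to the SELinux label helpers, the printout below just stands in for that call:

```go
package main

import (
	"fmt"
	"strings"
)

// relabelIfRequested inspects the mount mode for the `z` (shared) and
// `Z` (private) options. Because it runs on the mount itself, it works
// the same for every volume driver and for plain bind-mounts.
func relabelIfRequested(path, mode, mountLabel string) {
	for _, opt := range strings.Split(mode, ",") {
		switch opt {
		case "z":
			fmt.Printf("relabel %s with %s, shared between containers\n", path, mountLabel)
		case "Z":
			fmt.Printf("relabel %s with %s, private to the container\n", path, mountLabel)
		}
	}
}

func main() {
	relabelIfRequested("/host/dir", "rw,Z", "s0:c1,c2")
}
```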
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
UnmountVolumes used to also unmount 'specialMounts' but it no longer does after
a recent refactor of volumes. This patch corrects this behavior to include
unmounting of `networkMounts` which replaces `specialMounts` (now dead code).
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
I'm fairly consistently seeing an error in
DockerSuite.TestContainerApiRestartNotimeoutParam:
docker_api_containers_test.go:969:
c.Assert(status, check.Equals, http.StatusNoContent)
... obtained int = 500
... expected int = 204
And in the daemon logs I see:
INFO[0003] Container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971 failed to exit within 0 seconds of SIGTERM - using the force
ERRO[0003] Handler for POST /containers/{name:.*}/restart returned error: Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
ERRO[0003] HTTP Error err=Cannot restart container 8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971: [2] Container does not exist: container destroyed
statusCode=500
Note the "container destroyed" error message. This is being generated by
the libcontainer code and bubbled up in container.Kill() as a result of the
call to `container.killPossiblyDeadProcess(9)` on line 439.
See the comment in the code, but what I think is going on is that because we
don't have any timeout on the Stop() call, we immediately try to force things to
stop. And by the time we get into the libcontainer code the process has just
finished stopping due to the initial signal, so this secondary sig-9 fails
because the container is no longer running (i.e. it's 'destroyed').
Since we can't just look for "container destroyed" and ignore the error,
because some other driver might use different text, I opted to ignore the
error and keep going, on the assumption that if we couldn't send a sig-9 to
the process then it MUST be because it's already dead and not something else.
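Sketched with toy types, the proposed error handling looks like this:

```go
package main

import (
	"errors"
	"fmt"
)

type Container struct{ running bool }

func (c *Container) killPossiblyDeadProcess(sig int) error {
	if !c.running {
		return errors.New("container destroyed") // wording varies by driver
	}
	c.running = false
	return nil
}

// Kill ignores a failure from the final sig-9: since the error text is
// driver-specific, we assume an undeliverable SIGKILL means the process
// is already gone rather than matching on the message.
func (c *Container) Kill() error {
	if err := c.killPossiblyDeadProcess(9); err != nil {
		fmt.Println("ignoring kill error, process already exited:", err)
	}
	return nil
}

func main() {
	c := &Container{running: false} // exited right after the first signal
	_ = c.Kill()
}
```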
To reproduce this I just run:
curl -v -X POST http://127.0.0.1:2375/v1.19/containers/8cf77c20275586b36c5095613159cf73babf92ba42ed4a2954bd55dca6b08971/restart
a few times and then it fails with the HTTP 500.
Would like to hear some other ideas on how to handle this since I'm not
thrilled with the proposed solution.
Signed-off-by: Doug Davis <dug@us.ibm.com>
* Don't AllocateNetwork when network is disabled
* Don't createNetwork in execdriver when network is disabled
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
We should let the user create a container even if the container they want
to join is not running; that check should be done at start time.
In this case, the running check is done by getIpcContainer() when
we start the container.
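A rough sketch of where the check now lives (simplified lookup, illustrative error strings):

```go
package main

import (
	"errors"
	"fmt"
)

type Container struct {
	ID      string
	Running bool
}

var containers = map[string]*Container{}

// getIpcContainer runs at start time, so a stopped IPC target no
// longer blocks `docker create`; only `docker start` fails.
func getIpcContainer(id string) (*Container, error) {
	c, ok := containers[id]
	if !ok {
		return nil, errors.New("no such container: " + id)
	}
	if !c.Running {
		return nil, errors.New("cannot join IPC of a non-running container: " + id)
	}
	return c, nil
}

func main() {
	containers["a"] = &Container{ID: "a"}
	_, err := getIpcContainer("a")
	fmt.Println(err) // fails at start time, not at create time
}
```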
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
Signed by all authors:
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Signed-off-by: David Calavera <david.calavera@gmail.com>
Signed-off-by: Jeff Lindsay <progrium@gmail.com>
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Signed-off-by: Luke Marsden <luke@clusterhq.com>
Sometimes container.cleanup() can be called from multiple paths for the
same container, during error conditions, from both the monitor and the
regular startup path. So if the container network has already been
released, do not try to release it again.
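The guard amounts to making the release idempotent, roughly:

```go
package main

import "fmt"

type NetworkSettings struct{ EndpointID string }

type Container struct{ NetworkSettings *NetworkSettings }

// ReleaseNetwork can be reached from both the monitor's error path and
// the regular startup path; the nil check turns a second call into a
// no-op instead of a double release.
func (c *Container) ReleaseNetwork() {
	if c.NetworkSettings == nil {
		return // already released
	}
	fmt.Println("releasing endpoint", c.NetworkSettings.EndpointID)
	c.NetworkSettings = nil
}

func main() {
	c := &Container{NetworkSettings: &NetworkSettings{EndpointID: "ep0"}}
	c.ReleaseNetwork()
	c.ReleaseNetwork() // safe to call again
}
```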
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
The DOCKER_EXPERIMENTAL environment variable drives the activation of
the 'experimental' build tag.
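A sketch of the mechanism with a hypothetical tag-gated file; the build scripts would translate DOCKER_EXPERIMENTAL into `-tags experimental`:

```go
//go:build experimental
// +build experimental

// Hypothetical package illustrating tag-gated code.
package features

// Enabled reports that experimental features were compiled in; a
// sibling file without the build tag would return false instead.
func Enabled() bool { return true }
```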
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
As part of this, some generic packages like iptables, etchosts and resolvconf
have also been moved to libnetwork. Even though they can still be
consumed in a generic fashion, they will reside in, and be maintained
from, the libnetwork project.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Updated Dockerfile to satisfy libnetwork GOPATH requirements.
- Reworked daemon to allocate network resources using libnetwork.
- Reworked remove link code to also update network resources in libnetwork.
- Adjusted the exec driver command population to reflect libnetwork design.
- Adjusted the exec driver create command steps.
- Updated a few test cases to reflect the change in design.
- Removed the dns setup code from docker as resolv.conf is entirely managed
in libnetwork.
- Integrated with lxc exec driver.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Prior to this patch, the responses of
- GET /images/json
- GET /containers/json
- GET /images/(name)/history
displayed the Created time as a UNIX timestamp, which is not human-readable.
These should be more readable, as the CLI command `docker inspect` shows.
Because an older client may talk to a newer daemon, we need the version
check for now.
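A hedged sketch of the version-gated formatting (parameter names are illustrative, not the actual API plumbing):

```go
package main

import (
	"fmt"
	"time"
)

// formatCreated keeps the raw UNIX timestamp for old API versions and
// renders an RFC3339 string, like `docker inspect`, for new ones.
func formatCreated(created int64, newAPIVersion bool) interface{} {
	if !newAPIVersion {
		return created // older clients expect the integer
	}
	return time.Unix(created, 0).UTC().Format(time.RFC3339)
}

func main() {
	fmt.Println(formatCreated(1431700000, false))
	fmt.Println(formatCreated(1431700000, true))
}
```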
Signed-off-by: Hu Keping <hukeping@huawei.com>
- noplog driver pkg for '--log-driver=none' (null object pattern)
- centralized factory for log drivers (instead of case/switch)
- logging drivers register themselves with the factory upon import
(easy plug/unplug of drivers in daemon/logdrivers.go)
- daemon now doesn't start with an invalid log driver
- Name() method of loggers is now actually their CLI name (made it useful)
- generalized Read() logic and made it unsupported except for json-file
  (preserves existing behavior)
Spotted some duplicated code around processing of the legacy json-file
format; didn't touch that, so it remains duplicated in both places.
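A minimal sketch of the registration pattern described above (names are illustrative, not the exact daemon API):

```go
package main

import "fmt"

type Logger interface{ Name() string }

type Creator func() (Logger, error)

var factory = map[string]Creator{}

// Each driver package registers a creator under its CLI name in init(),
// replacing the old case/switch dispatch.
func RegisterLogDriver(name string, c Creator) { factory[name] = c }

// GetLogDriver fails fast on an unknown --log-driver value, so the
// daemon refuses to start instead of silently falling through.
func GetLogDriver(name string) (Logger, error) {
	c, ok := factory[name]
	if !ok {
		return nil, fmt.Errorf("logger: no such log driver: %s", name)
	}
	return c()
}

// The nop logger backing '--log-driver=none' (null object pattern).
type nopLogger struct{}

func (nopLogger) Name() string { return "none" }

func init() {
	RegisterLogDriver("none", func() (Logger, error) { return nopLogger{}, nil })
}

func main() {
	l, _ := GetLogDriver("none")
	fmt.Println(l.Name())
	_, err := GetLogDriver("bogus")
	fmt.Println(err)
}
```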
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Previously the cache was only updated once on startup, because the graph
code only checks for filesystems on startup. However this breaks the API as it
is supposed to work, and with it the unit tests.
Fixes #13142
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Add handler for SIGUSR1 based on feedback regarding when to dump
goroutine stacks. This will also dump goroutine stack traces on SIGQUIT,
followed by a hard exit from the daemon.
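A self-contained sketch of the handler; the stack-dump loop grows its buffer until runtime.Stack stops truncating:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"runtime"
	"syscall"
)

// dumpStacks writes every goroutine's stack trace to stderr.
func dumpStacks() {
	buf := make([]byte, 16384)
	for {
		n := runtime.Stack(buf, true)
		if n < len(buf) {
			buf = buf[:n]
			break
		}
		buf = make([]byte, 2*len(buf))
	}
	fmt.Fprintf(os.Stderr, "=== BEGIN goroutine stack dump ===\n%s\n=== END goroutine stack dump ===\n", buf)
}

func main() {
	c := make(chan os.Signal, 1)
	signal.Notify(c, syscall.SIGUSR1, syscall.SIGQUIT)
	go func() {
		for sig := range c {
			dumpStacks()
			if sig == syscall.SIGQUIT {
				os.Exit(128 + int(syscall.SIGQUIT)) // hard exit after the dump
			}
		}
	}()
	select {} // stand-in for the daemon's main loop
}
```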
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Added a --since argument to the `docker logs` command. It accepts a unix
timestamp and shows only logs created after the specified date.
The default value is 0; passing the default value or not specifying
the value in the request causes the parameter to be ignored (the behavior
prior to this change).
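The filtering reduces to a cutoff check; a sketch with a toy message type:

```go
package main

import (
	"fmt"
	"time"
)

type Message struct {
	Line      string
	Timestamp time.Time
}

// since == 0 (the default) means "no filter", preserving the previous
// behavior; otherwise older messages are dropped before being written
// to the client.
func filterSince(msgs []Message, since int64) []Message {
	if since == 0 {
		return msgs
	}
	cutoff := time.Unix(since, 0)
	var out []Message
	for _, m := range msgs {
		if m.Timestamp.After(cutoff) {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	msgs := []Message{
		{"old", time.Unix(100, 0)},
		{"new", time.Unix(200, 0)},
	}
	fmt.Println(filterSince(msgs, 150)) // only "new" survives
}
```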
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
If firewalld is not installed (or I suppose not running), firewalld was
producing an error in the daemon init logs, even though firewalld is not
required for iptables stuff to function.
The firewalld library code was also logging directly to logrus instead
of returning errors.
Moved logging code higher up in the stack and changed firewalld code to
return errors where appropriate.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Generation is based on CAP_LAST_CAP; I hardcoded
capability.CAP_BLOCK_SUSPEND as the last capability for systems which have no
/proc/sys/kernel/cap_last_cap.
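A sketch of the lookup with the fallback (CAP_BLOCK_SUSPEND is capability number 36):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Fallback for kernels without /proc/sys/kernel/cap_last_cap.
const capBlockSuspend = 36

// lastCap returns the highest capability number the kernel supports.
func lastCap() int {
	data, err := os.ReadFile("/proc/sys/kernel/cap_last_cap")
	if err != nil {
		return capBlockSuspend
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return capBlockSuspend
	}
	return n
}

func main() {
	fmt.Println("last supported capability:", lastCap())
}
```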
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
The docker graph calls driver.Exists() on initialisation for each filesystem in
the graph. This results in a lot of `zfs get all` commands. To reduce
this, retrieve all descendant filesystems at startup and cache them for later checks.
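The cache reduces Exists() to a map lookup; a sketch with illustrative names:

```go
package main

import (
	"fmt"
	"sync"
)

// Driver caches the names of all descendant filesystems, loaded once at
// startup, so Exists() no longer shells out per layer.
type Driver struct {
	sync.Mutex
	filesystemsCache map[string]bool
}

func NewDriver(existing []string) *Driver {
	cache := make(map[string]bool, len(existing))
	for _, fs := range existing {
		cache[fs] = true
	}
	return &Driver{filesystemsCache: cache}
}

func (d *Driver) Exists(id string) bool {
	d.Lock()
	defer d.Unlock()
	return d.filesystemsCache[id]
}

// Create (and, symmetrically, Remove) must keep the cache in step with
// the real datasets so the API keeps working after startup.
func (d *Driver) Create(id string) {
	d.Lock()
	d.filesystemsCache[id] = true
	d.Unlock()
}

func main() {
	d := NewDriver([]string{"pool/docker/layer1"})
	fmt.Println(d.Exists("pool/docker/layer1"), d.Exists("pool/docker/layer2"))
	d.Create("pool/docker/layer2")
	fmt.Println(d.Exists("pool/docker/layer2"))
}
```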
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Instead of letting zfs automatically mount datasets, mount them on demand using mount(2).
This speeds up the graph driver in 2 ways:
- fewer zfs processes are needed to start a container
- /proc/mounts gets smaller, so the zfs userspace tools have less to read (which can
  be a significant amount of data as the number of layers grows)
This way it is also ensured that the correct mountpoint is always used.
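A minimal Linux-only sketch of mounting a dataset at an explicit mountpoint via mount(2); the dataset and path below are hypothetical and the call needs root:

```go
//go:build linux

package main

import (
	"fmt"
	"syscall"
)

// mountDataset mounts a zfs dataset with mount(2) instead of relying on
// the automatic `zfs mount`, so no extra zfs process is spawned and the
// driver controls exactly which mountpoint is used.
func mountDataset(dataset, mountpoint string) error {
	if err := syscall.Mount(dataset, mountpoint, "zfs", 0, ""); err != nil {
		return fmt.Errorf("mounting %s on %s: %w", dataset, mountpoint, err)
	}
	return nil
}

func main() {
	if err := mountDataset("pool/docker/layer1", "/var/lib/docker/zfs/graph/layer1"); err != nil {
		fmt.Println(err)
	}
}
```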
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Add tests for mounting into /proc and /sys
Mounting volumes into these two destinations should be prohibited.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
The lxc code here is doing the exact same thing as calling
execdriver.Terminate, so let's just use that.
Also removes some dead comments, originally introduced in
50144aeb42, which are no longer relevant since we
have restart policies.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
`lxc-stop` does not support sending arbitrary signals.
By default, `lxc-stop -n <id>` would send `SIGPWR`.
The lxc driver was always sending `lxc-stop -n <id> -k`, which always
sends `SIGKILL`. In this case `lxc-start` returns an exit code of `0`,
regardless of what the container actually exited with.
Because of this we must send signals directly to the process when we
can.
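Sending the signal directly is just a kill(2) on the container's init process; a hedged sketch:

```go
package main

import (
	"fmt"
	"syscall"
)

// kill delivers an arbitrary signal straight to the process instead of
// going through `lxc-stop`, which can only send SIGPWR or, with -k,
// SIGKILL.
func kill(pid int, sig syscall.Signal) error {
	if err := syscall.Kill(pid, sig); err != nil {
		return fmt.Errorf("sending %v to pid %d: %w", sig, pid, err)
	}
	return nil
}

func main() {
	// Demonstrate on ourselves with a harmless SIGWINCH.
	fmt.Println(kill(syscall.Getpid(), syscall.SIGWINCH))
}
```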
Also need to set quiet mode on `lxc-start`, otherwise it reports an error
on `stderr` when the container exits cleanly (i.e., we didn't SIGKILL it);
this error is picked up in the container logs... and isn't really an
error.
Also cleaned up some potential races in the waitblocked test.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The `--userland-proxy` daemon flag makes it possible to rely on hairpin
NAT and additional iptables rules instead of the userland proxy for port
publishing and inter-container communication.
Usage of the userland proxy remains the default as hairpin NAT is
unsupported by older kernels.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>