Fixes an issue where a `Dead` container has no names, so the API returns
`null` instead of an empty array.
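For illustration, the underlying encoding/json behaviour (the struct below is a stand-in, not the daemon's actual API type): a nil Go slice marshals to `null`, while an explicitly initialized empty slice marshals to `[]`, which is why the names field has to be initialized even when a container has no names.
```
package main

import (
	"encoding/json"
	"fmt"
)

// container stands in for the API type; the real daemon type differs.
type container struct {
	Names []string `json:"Names"`
}

func main() {
	// A nil slice (the zero value) marshals to JSON null.
	withNil, _ := json.Marshal(container{})
	fmt.Println(string(withNil)) // {"Names":null}

	// Initializing the slice explicitly yields an empty JSON array instead.
	withEmpty, _ := json.Marshal(container{Names: []string{}})
	fmt.Println(string(withEmpty)) // {"Names":[]}
}
```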
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Exec start was sending HTTP 500 for every error.
Fixed an error where pausing a container and then calling exec start
caused the daemon to freeze.
Updated the API docs, which incorrectly showed that a successful exec start
returned HTTP 201; in reality it returns HTTP 200.
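As a rough illustration of the direction (the error names below are hypothetical, not the daemon's actual errors), the handler can map known failures to specific status codes instead of returning 500 for everything:
```
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// Hypothetical sentinel errors standing in for conditions the daemon can detect.
var (
	errNoSuchExec      = errors.New("no such exec instance")
	errContainerPaused = errors.New("container is paused")
)

// statusFromError picks an HTTP status code for a known error instead of
// falling back to 500 for everything.
func statusFromError(err error) int {
	switch {
	case err == nil:
		return http.StatusOK // a successful exec start is 200, not 201
	case errors.Is(err, errNoSuchExec):
		return http.StatusNotFound
	case errors.Is(err, errContainerPaused):
		return http.StatusConflict
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	fmt.Println(statusFromError(nil))                // 200
	fmt.Println(statusFromError(errNoSuchExec))      // 404
	fmt.Println(statusFromError(errContainerPaused)) // 409
}
```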
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Right now we seem to have 3 locks:
- devinfo.lock: a per-device lock.
- metaData.devicesLock: supposedly protects the map of devices.
- Global DeviceSet lock: protects the map of devices as well as serializing calls to libdevmapper.
The semantics of the per-device lock and the global DeviceSet lock seem very
clear. Even the ordering between these two locks has been defined properly.
What is not clear is the need for, and ordering of, metaData.devicesLock. It
looks like this lock is not necessary, and the global DeviceSet lock should be
used to protect the map of devices, as the map is part of DeviceSet.
This patchset gets rid of metaData.devicesLock and instead uses the DeviceSet
lock to protect the map of devices.
Also, at a couple of places during initialization we take devices.Lock(). That
is not strictly necessary, as there is supposed to be only one thread of
execution during initialization. Still, it makes the code clearer.
I think this makes the code clearer, easier to understand, and easier to
change further.
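A simplified sketch of the resulting scheme (field names are illustrative, not the exact driver code): the DeviceSet mutex now guards the device map, and each devInfo keeps its own per-device lock, so the separate devicesLock disappears.
```
package main

import (
	"fmt"
	"sync"
)

// devInfo carries its own per-device lock (devinfo.lock above).
type devInfo struct {
	lock sync.Mutex
	Hash string
}

// DeviceSet holds the global lock that now also protects the Devices map,
// replacing the separate metaData.devicesLock.
type DeviceSet struct {
	sync.Mutex
	Devices map[string]*devInfo
}

// lookupDevice takes the DeviceSet lock for every access to the map.
func (ds *DeviceSet) lookupDevice(hash string) *devInfo {
	ds.Lock()
	defer ds.Unlock()
	return ds.Devices[hash]
}

func main() {
	ds := &DeviceSet{Devices: map[string]*devInfo{"abc": {Hash: "abc"}}}
	fmt.Println(ds.lookupDevice("abc").Hash)
}
```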
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
maxDeviceID is the upper limit on the device ID the thin pool can support.
Right now we have this check only during startup. It is a good idea to move
this check into loadMetadata so that any time a device file is loaded, if it
is corrupted and its device ID is more than maxDeviceID, it is detected
right then and there.
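A hedged sketch of what the check might look like inside the metadata loading path (the struct and constant below are simplified stand-ins for the driver's own):
```
package main

import (
	"encoding/json"
	"fmt"
)

// maxDeviceID mirrors the thin-pool limit on device IDs (24 bits).
const maxDeviceID = (1 << 24) - 1

type devInfo struct {
	DeviceID int `json:"device_id"`
}

// loadMetadata rejects a metadata file whose device ID exceeds the pool limit,
// catching corruption at load time rather than only at startup.
func loadMetadata(data []byte) (*devInfo, error) {
	info := &devInfo{}
	if err := json.Unmarshal(data, info); err != nil {
		return nil, err
	}
	if info.DeviceID > maxDeviceID {
		return nil, fmt.Errorf("corrupted metadata: device ID %d exceeds maximum %d", info.DeviceID, maxDeviceID)
	}
	return info, nil
}

func main() {
	_, err := loadMetadata([]byte(`{"device_id": 999999999}`))
	fmt.Println(err)
}
```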
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Use deactivateDevice() instead of calling removeDevice() directly. This makes
sure that deferred removal is used for device deletion if the user has
configured it. It also makes the code a little easier to read, as there is a
single function to remove a device, and that is deactivateDevice().
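Roughly, the intent is a single entry point that decides between immediate and deferred removal (a simplified sketch, not the driver's exact code):
```
package main

import "fmt"

type DeviceSet struct {
	deferredRemove bool // set when the user enables deferred device removal
}

// removeDevice removes the device-mapper device immediately.
func (ds *DeviceSet) removeDevice(name string) error {
	fmt.Println("removing device now:", name)
	return nil
}

// scheduleDeferredRemoval asks for removal once the device is no longer in use
// (stand-in for the deferred-removal libdevmapper call).
func (ds *DeviceSet) scheduleDeferredRemoval(name string) error {
	fmt.Println("scheduling deferred removal:", name)
	return nil
}

// deactivateDevice is the single place a device gets removed; it honours the
// deferred-removal setting so callers never call removeDevice directly.
func (ds *DeviceSet) deactivateDevice(name string) error {
	if ds.deferredRemove {
		return ds.scheduleDeferredRemoval(name)
	}
	return ds.removeDevice(name)
}

func main() {
	ds := &DeviceSet{deferredRemove: true}
	ds.deactivateDevice("docker-8:1-12345-abc")
}
```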
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
If a device is still mounted at the time of DeleteDevice(), that means
higher layers have not called Put() properly on the device and are trying
to delete it. This is a bug in the code where Get() and Put() have not been
properly paired up. Fail device deletion if it is still mounted.
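A minimal sketch of the guard (mount tracking is reduced to a counter here; the real driver keeps more state per device):
```
package main

import "fmt"

type devInfo struct {
	Hash       string
	mountCount int // incremented by Get(), decremented by Put()
}

// deleteDevice refuses to delete a device that is still mounted, since that
// means a caller has not paired Get() with Put().
func deleteDevice(info *devInfo) error {
	if info.mountCount > 0 {
		return fmt.Errorf("device %s is still mounted; Get() and Put() are not paired up", info.Hash)
	}
	fmt.Println("deleting device", info.Hash)
	return nil
}

func main() {
	fmt.Println(deleteDevice(&devInfo{Hash: "abc", mountCount: 1}))
}
```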
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Exists() and HasDevice() just check whether the device file exists or not.
They say nothing about whether the device is mounted or not. Fix the comments.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The device map (device.Devices) contains valid devices, and we skip all
files that are not device files. The transaction metadata file is not a
device file. Skip this file when device files are being read and loaded
into the map.
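Roughly, the loading loop just skips that one filename while walking the metadata directory (a simplified sketch; the file name and helper below are illustrative, and the real loader does more per file):
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// transactionMetaFile is the one file in the metadata directory that does not
// describe a device.
const transactionMetaFile = "transaction-metadata"

// loadDeviceFiles walks the metadata directory and returns only device files,
// skipping the transaction metadata file.
func loadDeviceFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var devices []string
	for _, e := range entries {
		if e.IsDir() || e.Name() == transactionMetaFile {
			continue // not a device file
		}
		devices = append(devices, filepath.Join(dir, e.Name()))
	}
	return devices, nil
}

func main() {
	files, err := loadDeviceFiles("/var/lib/docker/devicemapper/metadata")
	fmt.Println(files, err)
}
```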
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Implement basic interfaces to write custom routers that can be plugged
into the server. Remove the server's coupling with the daemon.
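A hedged sketch of the kind of interface this enables (names are illustrative; the actual interfaces in api/server may differ): anything that can report its routes can be registered with the server, without the server knowing about the daemon.
```
package main

import (
	"fmt"
	"net/http"
)

// Route describes a single API endpoint a router exposes.
type Route struct {
	Method  string
	Path    string
	Handler http.HandlerFunc
}

// Router is the pluggable unit the server accepts.
type Router interface {
	Routes() []Route
}

// register mounts every route of every router onto a mux.
func register(mux *http.ServeMux, routers ...Router) {
	for _, r := range routers {
		for _, route := range r.Routes() {
			mux.HandleFunc(route.Path, route.Handler)
		}
	}
}

// pingRouter is a trivial example router.
type pingRouter struct{}

func (pingRouter) Routes() []Route {
	return []Route{{Method: "GET", Path: "/_ping", Handler: func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "OK")
	}}}
}

func main() {
	mux := http.NewServeMux()
	register(mux, pingRouter{})
	_ = mux // in a real server: http.ListenAndServe(":2375", mux)
}
```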
Signed-off-by: David Calavera <david.calavera@gmail.com>
Although having a request ID available throughout the codebase is very
valuable, the impact of requiring a Context as an argument to every
function in the codepath of an API request is too significant and was
not properly understood at the time of the review.
Furthermore, mixing API-layer code with non-API-layer code makes the
latter usable only by API-layer code (one that has a notion of Context).
This reverts commit de41640435, reversing
changes made to 7daeecd42d.
Signed-off-by: Tibor Vass <tibor@docker.com>
Conflicts:
api/server/container.go
builder/internals.go
daemon/container_unix.go
daemon/create.go
This reverts commit ff92f45be4, reversing
changes made to 80e31df3b6.
Reverting to make the next revert easier.
Signed-off-by: Tibor Vass <tibor@docker.com>
This fixes the case where a directory is removed in
aufs and then the same layer is imported into a
different graphdriver.
Currently, when you do `rm -rf /foo && mkdir /foo`
in a layer in aufs, the files under `foo` would
only be hidden on aufs.
The problems with this fix:
1) When a new diff is recreated from a non-aufs driver
the `opq` files would not be there. This should not
mean layer differences for the user, but still
different content in the tar (one would have one
`opq` file, the others would have `.wh.*` for every
file inside that folder). This difference also only
happens if the tar-split file isn't stored for the
layer.
2) New files whose names sort before `.wh..wh..opq`
do not get picked up by non-aufs graphdrivers.
Fixing this would require a bigger refactoring that
is planned for the future.
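For reference, `.wh..wh..opq` is the aufs marker meaning "this directory is opaque, hide everything from lower layers". A hedged sketch of spotting it while reading a layer tar (illustrative only, not the graphdriver code):
```
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// opaqueWhiteout is the aufs marker that hides all lower-layer contents of a directory.
const opaqueWhiteout = ".wh..wh..opq"

// opaqueDirs lists directories that a layer tar marks as opaque.
func opaqueDirs(r io.Reader) ([]string, error) {
	tr := tar.NewReader(r)
	var dirs []string
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return dirs, nil
		}
		if err != nil {
			return nil, err
		}
		if filepath.Base(hdr.Name) == opaqueWhiteout {
			dirs = append(dirs, filepath.Dir(hdr.Name))
		}
	}
}

func main() {
	f, err := os.Open("layer.tar") // path is just an example
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	dirs, err := opaqueDirs(f)
	fmt.Println(dirs, err)
}
```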
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Fixes an issue where `VOLUME some_name:/foo` would be parsed as a named
volume, allowing access from the builder to any volume on the host.
This makes sure that named volumes must always be passed in as a bind.
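Roughly, the distinction being enforced looks like this (a simplified sketch; the hypothetical isBind helper below is not the real volume-spec parser):
```
package main

import (
	"fmt"
	"strings"
)

// isBind reports whether a volume spec refers to a host path bind
// ("/host/path:/container/path") rather than a named volume ("name:/path").
// Only binds should be allowed through from the builder.
func isBind(spec string) bool {
	parts := strings.SplitN(spec, ":", 2)
	return len(parts) == 2 && strings.HasPrefix(parts[0], "/")
}

func main() {
	fmt.Println(isBind("/data:/foo"))     // true: host path bind
	fmt.Println(isBind("some_name:/foo")) // false: named volume, reject in builder
}
```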
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Before this patch, libcontainer badly errored out with `invalid
argument` or `numerical result out of range` when trying to write
an invalid value to cpuset.cpus or cpuset.mems.
This patch adds validation for the --cpuset-cpus and --cpuset-mems flags, along
with validation against the system's available cpus/mems, before starting a container.
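A hedged sketch of the kind of validation meant here (parsing is simplified; the real check reads the available sets from the cpuset cgroup):
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCpuset expands a cpuset list such as "0-2,7" into a set of ints.
func parseCpuset(val string) (map[int]bool, error) {
	set := map[int]bool{}
	if val == "" {
		return set, nil
	}
	for _, piece := range strings.Split(val, ",") {
		if lo, hi, ok := strings.Cut(piece, "-"); ok {
			start, err1 := strconv.Atoi(lo)
			end, err2 := strconv.Atoi(hi)
			if err1 != nil || err2 != nil || start > end {
				return nil, fmt.Errorf("invalid cpuset range %q", piece)
			}
			for i := start; i <= end; i++ {
				set[i] = true
			}
		} else {
			n, err := strconv.Atoi(piece)
			if err != nil {
				return nil, fmt.Errorf("invalid cpuset value %q", piece)
			}
			set[n] = true
		}
	}
	return set, nil
}

// validateCpuset checks that the requested cpus/mems are a subset of what the
// system actually has, instead of letting the cgroup write fail later.
func validateCpuset(requested, available string) error {
	req, err := parseCpuset(requested)
	if err != nil {
		return err
	}
	avail, err := parseCpuset(available)
	if err != nil {
		return err
	}
	for n := range req {
		if !avail[n] {
			return fmt.Errorf("requested cpu/mem %d is not available (have %s)", n, available)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateCpuset("0-1", "0-3")) // <nil>
	fmt.Println(validateCpuset("0,9", "0-3")) // error: 9 not available
}
```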
Signed-off-by: Antonio Murdaca <runcom@linux.com>
Use `pkg/discovery` to provide nodes discovery between daemon instances.
The functionality is driven by two different command-line flags: the
experimental `--cluster-store` (previously `--kv-store`) and
`--cluster-advertise`. It can be used in two ways by interested
components:
1. Externally, by calling the `/info` API and examining the cluster store
field. The `pkg/discovery` package can then be used to hit the same
endpoint and watch for appearing or disappearing nodes (see the sketch
after this list). That is, for example, the method that Swarm will use.
2. Internally, by using the `Daemon.discoveryWatcher` instance. That is,
for example, the method that libnetwork will use.
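As a rough illustration of the watch pattern described in (1), here is a sketch; the types below are hypothetical stand-ins, not the actual pkg/discovery API:
```
package main

import (
	"fmt"
	"time"
)

// Entry is a hypothetical stand-in for a discovered node.
type Entry struct {
	Host string
	Port string
}

// Watcher is a hypothetical stand-in for a discovery backend: it emits the
// full set of known nodes whenever membership changes.
type Watcher interface {
	Watch(stop <-chan struct{}) (<-chan []Entry, <-chan error)
}

// watchNodes reacts to appearing/disappearing nodes the way an external
// component (such as Swarm) might, after reading the cluster store from /info.
func watchNodes(w Watcher, stop <-chan struct{}) {
	entries, errs := w.Watch(stop)
	for {
		select {
		case nodes := <-entries:
			fmt.Println("cluster members:", nodes)
		case err := <-errs:
			fmt.Println("discovery error:", err)
		case <-stop:
			return
		}
	}
}

// fakeWatcher emits one static membership list, just to make the sketch runnable.
type fakeWatcher struct{}

func (fakeWatcher) Watch(stop <-chan struct{}) (<-chan []Entry, <-chan error) {
	out := make(chan []Entry, 1)
	out <- []Entry{{Host: "192.168.0.10", Port: "2376"}}
	return out, make(chan error)
}

func main() {
	stop := make(chan struct{})
	go watchNodes(fakeWatcher{}, stop)
	time.Sleep(100 * time.Millisecond)
	close(stop)
}
```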
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
* Thanks to the Default gateway service in libnetwork, we don't have to add
containers explicitly to the secondary public network. This is handled
automatically regardless of the primary network driver.
* Fixed the URL convention for the kv-store to be aligned with the upcoming
changes to the discovery URL.
* Also, in order to bring consistency between external and internal network
drivers, we moved to passing the driver configs via controller Init.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Avoid creating a global context object that is kept around while the daemon is running.
Not only will this object never be garbage collected, it will never be used for anything other than creating other contexts in each request. I think it's a bad practice to have something like this sprawling around the code.
This change removes that global object and initializes a context in the cases where we don't already have one, like shutting down the server.
This also removes a bunch of context arguments from functions that did nothing with them.
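A minimal sketch of the pattern with the standard library (the daemon's actual wiring differs): build a context on the spot where none exists, instead of deriving everything from a long-lived global.
```
package main

import (
	"context"
	"fmt"
	"time"
)

// shutdown no longer receives a context derived from a global; it creates one
// on the spot for the work it has to do.
func shutdown() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	select {
	case <-time.After(50 * time.Millisecond): // stand-in for draining work
		fmt.Println("shutdown finished")
	case <-ctx.Done():
		fmt.Println("shutdown timed out:", ctx.Err())
	}
}

func main() {
	shutdown()
}
```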
Signed-off-by: David Calavera <david.calavera@gmail.com>
This PR adds a "request ID" to each event generated; the 'docker events'
stream now looks like this:
```
2015-09-10T15:02:50.000000000-07:00 [reqid: c01e3534ddca] de7c5d4ca927253cf4e978ee9c4545161e406e9b5a14617efb52c658b249174a: (from ubuntu) create
```
Note the `[reqid: c01e3534ddca]` part; that's new.
Each HTTP request will generate its own unique ID. So, if you do a
`docker build` you'll see a series of events all with the same reqid.
This allows log processing tools to determine which events are all related
to the same HTTP request.
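A hedged sketch of how a request ID can ride along on the context from the HTTP layer down to the event logger (the key and helper names here are illustrative, not the PR's actual ones):
```
package main

import (
	"context"
	"fmt"
)

type ctxKey string

const reqIDKey ctxKey = "request-id"

// withRequestID is called once per HTTP request by the API layer.
func withRequestID(ctx context.Context, id string) context.Context {
	return context.WithValue(ctx, reqIDKey, id)
}

// logEvent pulls the request ID back out wherever an event is emitted.
func logEvent(ctx context.Context, action string) {
	id, _ := ctx.Value(reqIDKey).(string)
	fmt.Printf("[reqid: %s] %s\n", id, action)
}

func main() {
	ctx := withRequestID(context.Background(), "c01e3534ddca")
	logEvent(ctx, "create")
}
```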
I didn't propagate the context to all possible funcs in the daemon,
I decided to just do the ones that needed it in order to get the reqID
into the events. I'd like to have people review this direction first, and
if we're ok with it then I'll make sure we're consistent about when
we pass around the context - IOW, make sure that all funcs at the same level
have a context passed in even if they don't call the log funcs - this will
ensure we're consistent w/o passing it around for all calls unnecessarily.
ping @icecrime @calavera @crosbymichael
Signed-off-by: Doug Davis <dug@us.ibm.com>