This change allows us to run SELinux in a container with a
BTRFS backend. We continue to work on fixing the kernel/BTRFS issues,
but this change allows SELinux security separation on BTRFS.
It basically relabels the content on container creation;
in the BTRFS case only the -init directory is relabeled. Everything looks like it
works. I don't believe tar/archive stores the SELinux labels, so we are good
as far as docker commit is concerned.
Tested startup speed with BTRFS on top of a loopback directory. BTRFS
not on loopback should get even better startup performance. The
more inodes inside the container image, the longer the relabel will take.
This patch gives people who care more about security the option of
running BTRFS with SELinux. Those who don't want to take the slowdown
can disable SELinux either in individual containers or for all containers
by continuing to disable SELinux in the daemon.
Without relabel:
> time docker run --security-opt label:disable fedora echo test
test
real 0m0.918s
user 0m0.009s
sys 0m0.026s
With relabel:
test
real 0m1.942s
user 0m0.007s
sys 0m0.030s
Signed-off-by: Dan Walsh <dwalsh@redhat.com>
If the platform supports the xfs filesystem, then use xfs as the default filesystem
for container rootfs instead of ext4. The reason is that ext4 pre-allocates
a lot of metadata (around 1.8GB on a 100G thin volume), and that can take long
enough on AWS storage that systemd times out and docker fails to start.
If one disables pre-allocation of ext4 metadata, then it will be allocated
when containers are mounted and we will have multiple copies of metadata
per container. For a 100G thin device, that was around 1.5GB of metadata
per container.
ext4 has an optimization to skip zeroing if discards are issued and the
underlying device guarantees that zero will be returned when discarded
blocks are read back. devicemapper thin devices don't offer that guarantee,
so the ext4 optimization does not kick in. In fact, given that discards are
optional and can be dropped on the floor if need be, it might not be
possible to guarantee that all the blocks got discarded and will return
zero when read back.
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
If the user wants to use a particular filesystem, it can be specified using the
dm.fs=<filesystem> option. It is possible that docker already has a base image
with a filesystem on it. If the user later tries to change the filesystem using
the dm.fs= option and restarts docker, that's not possible. Warn the user about it.
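For illustration, using the same invocation style as elsewhere in these notes,
selecting xfs for the container rootfs would look like:
docker -d --storage-opt dm.fs=xfs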
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Right now if blkid fails we just log a debug message and don't return
the actual error to the caller. The caller gets an error message saying that thin pool
base device UUID verification failed, which might give the impression that the thin
pool changed. But that's not the case. The thin pool is in such a state that we
could not even query the thin device UUID. Return the error message appropriately
to make the situation clearer.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The LXC driver was deprecated in Docker 1.8.
Following the deprecation rules, we can remove a deprecated feature
after two major releases. LXC won't be supported anymore starting with Docker 1.10.
Signed-off-by: David Calavera <david.calavera@gmail.com>
This reverts commit d5cd032a86.
The commit caused issues on systems with case-insensitive filesystems.
Revert it for now.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
- Move autogen/dockerversion to version
- Update autogen and "builds" to use this package and a build flag
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
cleanupDeleted() takes devices.Lock() but does not drop it if there are
no deleted devices. Hence docker deadlocks if one is using the deferred
device deletion feature (--storage-opt dm.use_deferred_deletion=true).
Fix it: drop the lock before returning.
Also add a unit test case to make sure that, if somebody changes the
function in the future, this can be easily detected.
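A minimal sketch of the bug and the fix (the types and names here are simplified
stand-ins, not the actual devmapper code):

    package main

    import "sync"

    type deviceSet struct {
        sync.Mutex
        deletedDevices []string
    }

    func (devices *deviceSet) cleanupDeletedDevices() error {
        devices.Lock()

        // Nothing to do. Before the fix, this early return kept the
        // lock held, so every later devices.Lock() caller deadlocked.
        if len(devices.deletedDevices) == 0 {
            devices.Unlock()
            return nil
        }

        toDelete := append([]string(nil), devices.deletedDevices...)
        // Drop the lock before the slow per-device deletion work.
        devices.Unlock()

        for range toDelete {
            // ... retry deletion of each device (omitted) ...
        }
        return nil
    }

    func main() {
        d := &deviceSet{}
        _ = d.cleanupDeletedDevices()
        _ = d.cleanupDeletedDevices() // would hang here without the Unlock above
    }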
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Generate a hash chain involving the image configuration, layer digests,
and parent image hashes. Use the digests to compute IDs for each image
in a manifest, instead of using the remotely specified IDs.
To avoid breaking users' caches, check for images already in the graph
under old IDs, and avoid repulling an image if the version on disk under
the legacy ID ends up with the same digest that was computed from the
manifest for that image.
When a calculated ID already exists in the graph but can't be verified,
continue trying SHA256(digest) until a suitable ID is found.
"save" and "load" are not changed to use a similar scheme. "load" will
preserve the IDs present in the tar file.
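As a rough illustration only (not the actual implementation, which hashes the
image configuration JSON and registry digests), the chaining idea looks
roughly like this:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // Each ID depends on the image configuration, the layer digest and the
    // parent ID, so changing any ancestor changes every descendant ID.
    func computeID(imageConfig, layerDigest, parentID string) string {
        sum := sha256.Sum256([]byte(imageConfig + "\n" + layerDigest + "\n" + parentID))
        return fmt.Sprintf("%x", sum[:])
    }

    func main() {
        base := computeID(`{"os":"linux"}`, "sha256:aaaa", "")
        child := computeID(`{"cmd":["echo"]}`, "sha256:bbbb", base)
        fmt.Println(base)
        fmt.Println(child)
    }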
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Adds support for the daemon to handle user namespace maps as a
per-daemon setting.
Support for handling uid/gid mapping is added to the builder,
archive/unarchive packages and functions, all graphdrivers (except
Windows), and the test suite is updated to handle user namespace daemon
rootgraph changes.
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
When `-s` is not specified, there is no need to ask if there is a plugin
with the specified name.
This speeds up unit tests dramatically since they don't need to wait the
timeout period for each call to `graphdriver.New`.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
There is no need to call `os.Stat` on the driver filesystem path of a
container as `os.RemoveAll` already handles (properly) the case where
the path no longer exists.
Given the results of the stat() were not even being used, there is no
value in erroring out because of the stat call failure, and worse, it
prevents daemon cleanup of containers in "Dead" state unless you re-create
directories that were already removed via a manual cleanup after a
failure. This brings removal in overlay in line with aufs/devicemapper
drivers which don't error out if the filesystem path no longer exists.
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Right now we check for the existence of the device but don't make sure it is
a thin pool device. We assume it is a thin pool device and call poolStatus()
on the device, which returns an EOF error. And that error does not tell us
anything.
So before we reach the stage of calling poolStatus(), make sure we are working
with a thin pool device, otherwise error out.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Start a goroutine which runs every 30 seconds and, if there are deferred-deleted
devices, tries to clean them up.
Also move the call to cleanupDeletedDevices() into the goroutine and
move the locking completely inside the function. Now the function does not
assume that the device lock is held at the time of entry.
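A sketch of the shape of such a worker, with hypothetical simplified names:

    package main

    import (
        "fmt"
        "time"
    )

    // startDeletionWorker wakes up on a fixed interval and retries cleanup
    // of devices marked for deferred deletion. Locking is entirely inside
    // the cleanup callback, as described above.
    func startDeletionWorker(cleanup func(), stop <-chan struct{}) {
        go func() {
            ticker := time.NewTicker(30 * time.Second)
            defer ticker.Stop()
            for {
                select {
                case <-ticker.C:
                    cleanup()
                case <-stop:
                    return
                }
            }
        }()
    }

    func main() {
        stop := make(chan struct{})
        startDeletionWorker(func() { fmt.Println("cleaning up deferred-deleted devices") }, stop)
        time.Sleep(50 * time.Millisecond) // the daemon would run indefinitely
        close(stop)
    }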
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Finally, here is the patch to implement the deferred deletion functionality.
Deferred-deleted devices are marked as "Deleted" in the device meta file.
First we try to delete the device, and only if deletion fails and the user has
enabled deferred deletion is the device marked for deferred deletion.
When docker starts up again, we go through the list of deleted devices and
try to delete them again.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Provide a command line option dm.use_deferred_deletion to enable the deferred
device deletion feature. By default the feature is turned off.
There is probably not much value in turning on deferred deletion
without deferred removal also being turned on. So for now, this feature can
be enabled only if deferred removal is on.
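For illustration, and assuming the deferred-removal knob is spelled
dm.use_deferred_removal (the deletion knob appears earlier in these notes as
dm.use_deferred_deletion), enabling both could look like:
docker -d --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true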
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently during startup we walk through all the device files, read
their device IDs, and mark in a bitmap that each device ID is used.
Since we are going through all the device files anyway, we can just as well load
all that data into the device hash map. This will save us a little time when
a container is actually launched later.
This will also help with later patches, where the deferred-device cleanup
wants to go through all the devices, see which have been marked for
deletion, and delete them.
So re-organize the code a bit.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Simplify setupBaseImage() even further by moving some more code into a separate
function. Pure code reorganization; no functionality change.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Move the thin pool related checks into a separate function. Pure code reorganization;
makes the code easier to read.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This moves the base device creation code into a separate function. Pure
code reorganization; makes the code a little easier to read.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This patch does three things. Following are the descriptions.
===
Create a separate function for delete transactions so that the parent function
is a little smaller.
Also, close the transaction if an error happens.
===
When docker is being shut down, save the deviceset metadata first before
trying to remove the devices. Generally the caller gives only 10 seconds
for shutdown to complete and kills it after that. So if some device
is busy, we will wait 20 seconds for its removal and never be able to save the
metadata. So first save the metadata and then deal with device removal.
===
Move the discard-issuing operation into a separate function. This makes the code
a little easier to read.
Also, don't issue discards if the device is still open. That means the device is
probably still being used and issuing discards is not a good idea.
This is especially true in the case of deferred deletion. We want to issue
discards when the device is not open; at that time the device can be deleted too.
Otherwise we will issue discards and the deletion will fail. Later
we will try deletion again, issue discards again, and the deletion will
fail again because the device is open and busy.
So this ensures that discards are issued once, when the device is not open
and can actually be deleted.
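A small sketch of that ordering, with hypothetical names rather than the real
devmapper code:

    package main

    import "fmt"

    type devInfo struct {
        name      string
        openCount int
    }

    // discardAndDelete skips both steps while the device is still open, so
    // discards are issued exactly once, right before a deletion that can
    // actually succeed.
    func discardAndDelete(info *devInfo) error {
        if info.openCount > 0 {
            return fmt.Errorf("device %s still open, postponing discard and deletion", info.name)
        }
        fmt.Println("issuing discards on", info.name)
        fmt.Println("deleting", info.name)
        return nil
    }

    func main() {
        fmt.Println(discardAndDelete(&devInfo{name: "thin-1", openCount: 1}))
        fmt.Println(discardAndDelete(&devInfo{name: "thin-2"}))
    }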
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Right now we seem to have 3 locks.
- devinfo.lock
This is a per device lock
- metaData.devicesLock
This is supposedly protecting the map of devices.
- Global DeviceSet lock
This protects the map of devices as well as serializing calls to libdevmapper.
The semantics of the per-device lock and the global DeviceSet lock seem to be very clear,
and even the ordering between these two locks has been defined properly.
What is not clear is the need for and ordering of metaData.devicesLock. It looks like
this lock is not necessary, and the global DeviceSet lock should be used to
protect the map of devices, as it is part of the DeviceSet.
This patchset gets rid of metaData.devicesLock and instead uses the DeviceSet
lock to protect the map of devices.
Also, at a couple of places during initialization, take devices.Lock(). That
is not strictly necessary, as there is supposed to be only one thread of execution
during initialization, but it still makes the code clearer.
I think this makes the code clearer, easier to understand, and easier to
change further.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
maxDeviceID is the upper limit on the device ID a thin pool can support. Right now
we have this check only during startup. It is a good idea to move this
check into loadMetadata so that any time a device file is loaded, if it
is corrupted and the device ID is more than maxDeviceID, it will be detected
right then and there.
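A simplified sketch of the check being moved (assuming the 0xffffff limit that
matches dm-thin's 24-bit device IDs; the struct and function are stand-ins):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    const maxDeviceID = 0xffffff // dm-thin device IDs are 24-bit

    type devInfo struct {
        DeviceID int `json:"device_id"`
    }

    // loadMetadata rejects corrupted metadata as soon as the file is read,
    // instead of validating device IDs only once at daemon startup.
    func loadMetadata(raw []byte) (*devInfo, error) {
        info := &devInfo{}
        if err := json.Unmarshal(raw, info); err != nil {
            return nil, err
        }
        if info.DeviceID > maxDeviceID {
            return nil, fmt.Errorf("corrupted metadata: device ID %d exceeds maximum %d", info.DeviceID, maxDeviceID)
        }
        return info, nil
    }

    func main() {
        _, err := loadMetadata([]byte(`{"device_id": 999999999}`))
        fmt.Println(err)
    }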
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Use deactivateDevice() instead of calling removeDevice() directly. This makes
sure that, for device deletion, deferred removal is used if the user has configured
it. This also makes the code a little easier to read, as there is a single function
to remove a device, and that is deactivateDevice().
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
If a device is still mounted at the time of DeleteDevice(), that means
higher layers have not called Put() properly on the device and are trying
to delete it. This is a bug in the code where Get() and Put() have not been
properly paired up. Fail device deletion if it is still mounted.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Exists() and HasDevice() just check whether the device file exists. They do
not say anything about whether the device is mounted. Fix the comments.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The device map (device.Devices) contains valid devices, and we skip all
files which are not device files. The transaction metadata file is not a
device file. Skip it when device files are being read and loaded
into the map.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This fixes the case where directory is removed in
aufs and then the same layer is imported to a
different graphdriver.
Currently, when you do `rm -rf /foo && mkdir /foo`
in a layer in aufs, the files under `foo` would
only be hidden on aufs.
The problems with this fix:
1) When a new diff is recreated from a non-aufs driver,
the `opq` files would not be there. This should not
mean layer differences for the user, but it does mean
different content in the tar (one would have one
`opq` file, the others would have `.wh.*` for every
file inside that folder). This difference also only
happens if the tar-split file isn't stored for the
layer.
2) New files whose filenames sort before `.wh..wh..opq`
do not get picked up by non-aufs
graphdrivers. Fixing this would require a bigger
refactoring that is planned for the future.
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
The mount syscall does not handle string flags like "noatime",
we must use bitmasks like MS_NOATIME instead.
pkg/mount.Mount already handles this work.
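A sketch of the difference; the device and target paths are placeholders and
the call needs root, so it is shown only for the flag handling:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Wrong: "noatime" in the data argument is not a mount flag and is
        // not interpreted as one by mount(2).
        //   syscall.Mount("/dev/example", "/mnt/example", "ext4", 0, "noatime")

        // Right: translate the option into its bitmask, as pkg/mount does.
        err := syscall.Mount("/dev/example", "/mnt/example", "ext4", syscall.MS_NOATIME, "")
        fmt.Println(err)
    }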
Signed-off-by: Carl Henrik Lunde <chlunde@ping.uio.no>
Allows people to create out-of-process graphdrivers that can be used
with Docker.
Extensions must be started before Docker, otherwise Docker will fail to
start.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This adds a data struct in the aufs driver for including more
information about active mounts along with their reference count.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
Windows: add support for images stored in alternate location.
Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
Commit e27c904 added a wrong and misleading comment
to GetMetadata(). Fix it using the wording from
commit 407a626 which introduced GetMetadata().
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
These functions are not part of the graphdriver.Driver
interface and should therefore be private.
Also, remove comments added by commit e27c904 as they are
* pretty obvious
* no longer required by golint
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
Ploop graph driver provides its own ext4 filesystem to every
container. It so happens that an ext4 root comes with a lost+found
directory, causing failures in the DriverTestCreateEmpty() and
DriverTestCreateBase() tests on ploop.
While I am not yet ready to submit ploop graph driver for review,
this change looks simple enough to push.
Note that filtering is done without any additional allocations,
as described in https://github.com/golang/go/wiki/SliceTricks.
[v2: added a comment about lost+found]
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
TL;DR: checking for IsExist(err) after a failed MkdirAll() is both
redundant and wrong -- so, two reasons to remove it.
Quoting MkdirAll documentation:
> MkdirAll creates a directory named path, along with any necessary
> parents, and returns nil, or else returns an error. If path
> is already a directory, MkdirAll does nothing and returns nil.
This means two things:
1. If a directory to be created already exists, no error is returned.
2. If the error returned is IsExist (EEXIST), it means there exists
a non-directory with the same name as the one MkdirAll needs to use for
a directory. Example: we want to MkdirAll("a/b"), but file "a"
(or "a/b") already exists, so MkdirAll fails.
The above is the theory, based on the quoted documentation and my UNIX
knowledge.
3. In practice, though, the current MkdirAll implementation [1] returns
ENOTDIR in most of the cases described in item 2, with the exception of when
there is a race between MkdirAll and someone else creating the
last component of the MkdirAll argument as a file. In that very case
MkdirAll() will indeed return EEXIST.
Because of item 1, the IsExist check after MkdirAll is not needed.
Because of items 2 and 3, ignoring an IsExist error is just plain wrong,
as the directory we require is not created. It's cleaner to report
the error now.
Note this error is all over the tree, I guess due to copy-paste,
or trying to follow the same usage pattern as for Mkdir(),
or some not quite correct examples on the Internet.
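To make the difference concrete, a minimal before/after sketch:

    package main

    import (
        "fmt"
        "os"
    )

    // Anti-pattern: swallowing IsExist hides the case where a non-directory
    // is in the way, so later writes into dir fail mysteriously.
    func prepareDirBad(dir string) error {
        if err := os.MkdirAll(dir, 0700); err != nil && !os.IsExist(err) {
            return err
        }
        return nil
    }

    // Correct: MkdirAll already returns nil when the directory exists, so
    // any error is a real failure and should be reported as-is.
    func prepareDirGood(dir string) error {
        return os.MkdirAll(dir, 0700)
    }

    func main() {
        fmt.Println(prepareDirBad("/tmp/example/a/b"))
        fmt.Println(prepareDirGood("/tmp/example/a/b"))
    }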
[v2: a separate aufs commit is merged into this one]
[1] https://github.com/golang/go/blob/f9ed2f75/src/os/path.go
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
The `ApplyDiff` function takes a tar archive stream that is
automagically decompressed later. This was causing a double
decompression, and when the layer was empty, that caused an early EOF.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
The ZFS driver should raise proper errors when the ZFS utility is
missing or when there's no zfs partition active on the system. Raising the
proper errors makes it possible to silently ignore the ZFS storage
driver when no default storage driver is specified.
Prior to this commit it was no longer possible to start the
docker daemon in this way:
docker -d --storage-opt dm.loopdatasize=2GB
The above command resulted in an exit error because the ZFS driver
tried to use the storage options.
Signed-off-by: Flavio Castelli <fcastelli@suse.com>
The current default basesize is 10G. Change it to 100G. The reason is that for
some people 10G is turning out to be too small, and we don't have the capability
to grow it dynamically.
This is just overcommitting; no real space is allocated until the container
actually writes data. And this is no different from fs-based graphdrivers,
where the virtual size of a container root is unlimited.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Replaced github.com/docker/libcontainer with
github.com/opencontainers/runc/libcontainer.
Also, I moved AppArmor profile generation to docker.
The main idea of this update is to fix mounting cgroups inside containers.
After updating docker on CI we can even remove dind.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
The ability to save and verify base device UUID (#13896) introduced a
situation where the initialization would panic when removing the device
returns EBUSY.
Functions `verifyBaseDeviceUUID` and `saveBaseDeviceUUID` now take the
lock on the `DeviceSet`, which solves the problem as `removeDevice`
assumes it owns the lock.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Often it happens that docker is not able to shut down / remove the thin
pool it created because some device has leaked into some mount namespace.
That means the device is in use, and that means the pool can't be removed.
Docker will leave the pool as it is and exit. Later when the user starts
docker, it finds the pool is already there and uses it. But docker
does not know it is the same pool which is using the loop devices. Now
docker thinks the loop devices are not being used. That means it does not
display the data correctly in "docker info", giving the user wrong information.
This patch tries to detect whether the loop devices created by docker are
being used for the pool and fills in the right details in "docker info".
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
DeviceMapper must be explicitly selected because the Docker binary might not be linked to the right devmapper library.
With this change, Docker fails fast if the driver detection finds the devicemapper directory but the driver is not the default option.
The option `override_udev_sync_check` doesn't make sense anymore, since the user must be explicit to select devicemapper, so it's being removed.
Docker refuses to use devicemapper only if it has been built statically, unless the option was explicitly set.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Export metadata for container and image in docker-inspect when overlay
graphdriver is in use. Right now it is done only for devicemapper graph
driver.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
It is easy for one to use docker for a while, shut it down and restart
docker with a different set of storage options for the device mapper driver,
which will effectively change the thin pool. That means the
metadata stored in /var/lib/docker/devicemapper/metadata/ is not valid
for the new pool, and the user will run into various kinds of issues, like
containers not found in the pool, etc.
Users think that their images or containers are lost, but it might just
be a configuration issue: the wrong metadata being used with the wrong pool.
To detect such situations, save the UUID of the base image and once docker
starts later, query the UUID of the base image and compare it with the
stored one. If they don't match, fail the initialization with an
error saying that the UUID failed to match.
That way the user will be forced to clean up the /var/lib/docker/ directory
and start docker again.
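A sketch of the startup check being described; the names are hypothetical and
the real code queries the UUID via blkid:

    package main

    import (
        "errors"
        "fmt"
    )

    // verifyBaseDeviceUUID compares the UUID stored when the pool was first
    // set up against the UUID queried from the pool docker is starting with.
    func verifyBaseDeviceUUID(storedUUID string, queryUUID func() (string, error)) error {
        current, err := queryUUID()
        if err != nil {
            return fmt.Errorf("could not query base device UUID: %v", err)
        }
        if current != storedUUID {
            return errors.New("base device UUID verification failed: thin pool does not match existing metadata")
        }
        return nil
    }

    func main() {
        query := func() (string, error) { return "1234-abcd", nil }
        fmt.Println(verifyBaseDeviceUUID("1234-abcd", query)) // <nil>
        fmt.Println(verifyBaseDeviceUUID("ffff-0000", query)) // mismatch
    }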
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Export image/container metadata stored in graph driver. Right now 3 fields
DeviceId, DeviceSize and DeviceName are being exported from devicemapper.
Other graph drivers can export fields as they see fit.
This data can be used to mount the thin device outside of docker and tools
can look into image/container and do some kind of inspection.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Previously the cache was only updated once on startup, because the graph
code only checks for filesystems on startup. However, this broke the API as it
was supposed to work, and with it the unit tests.
Fixes #13142
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
The docker graph calls driver.Exists() on initialisation for each filesystem in
the graph. This results in a lot of `zfs get all` commands. To reduce
this, retrieve all descendant filesystems at startup and cache them for later checks.
Instead of letting zfs automatically mount datasets, mount them on demand using mount(2).
This speeds up this graph driver in two ways:
- fewer zfs processes are needed to start a container
- /proc/mounts gets smaller, so the zfs userspace tools have less to read (which can
be a significant amount of data as the number of layers grows)
This way it can also be ensured that the correct mountpoint is always used.
Right now devicemapper mounts thin devices using online discards by default
and passes the mount option "discard". People generally discourage the use of
online discards as they can be a drain on performance. Instead it is
recommended to use fstrim once in a while to reclaim the space.
In the case of containers, we recommend keeping data volumes separate. So
there might not be a lot of rm/unlink operations going on, and there might
not be a lot of space being freed by containers. So it might not matter
much if we don't reclaim that free space in the pool.
Users can still pass the mount option explicitly using dm.mountopt=discard to
enable discards if they would like to.
So this is more about configuring containers by default for better performance
instead of better space efficiency in the pool, and users can change the behavior
if they don't like the default.
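Users who prefer space reclamation over performance can turn online discards
back on, for example:
docker -d --storage-opt dm.mountopt=discard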
Reported-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
If a device is being reactivated before it could go away and deferred
deactivation has been scheduled on it, cancel the deferred deactivation.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This will help with debugging, as one can just run "docker info" and figure
out whether deferred removal is enabled or not.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Provide a new command line knob, dm.deferred_device_removal, which will enable
deferred device deactivation if the driver and library support it.
This patch also checks for library support and the driver version.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Before this, a storage driver would be chosen by default based on the
priority list, and only a warning would be printed if there was state from other
drivers.
This meant a reordering of the priority list would "break" users in an
upgrade of docker, such that their images in the prior driver's state
were now invisible.
With this change, prior state is scanned, and if present that driver is
preferred.
As such, we can reorder the priority list, and after an upgrade,
existing installs with prior drivers can have a contiguous experience,
while fresh installs may default to a driver in the new priority list.
Ref: https://github.com/docker/docker/pull/11962#issuecomment-88274858
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Signed-off-by: Megan Kostick <mkostick@us.ibm.com>
Alphabetize the FSMagic list to make it more human-readable.
Signed-off-by: Megan Kostick <mkostick@us.ibm.com>
This provides an override for forcing the daemon to still attempt
running the devicemapper driver even when udev sync is not supported.
Intended to be a very clear impairment for those choosing to use it. If
udev sync is false, there will still be an error in the daemon logs,
even when the override is in place. The docs have an explicit WARNING.
A link to the docs is included for users who encounter this daemon error
during an upgrade.
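Assuming the dm.override_udev_sync_check spelling of this storage option (as
referenced elsewhere in these notes), using the override would look something like:
docker -d --storage-opt dm.override_udev_sync_check=true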
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Right now we retry device removal at an interval of 10ms and keep on trying
until either the device is removed or 10 seconds are over. That means if a device
is busy, we will try 1000 times in those 10 seconds.
That sounds like too high a retry frequency for device removal, and the logs
fill up easily. I think it is a good idea to slow down a bit and retry at
an interval of 100ms instead of 10ms.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
During device removal, we first wait for the device to close() in a tight
loop for 10 seconds. I am not sure why we need it. First of all, we come
here once the umount() is successful, so the device should be free. If for some
reason the device is temporarily busy, then the removeDevice() logic retries device
removal in a loop for 10 seconds, and that should cover it. I can't see why one
more 10-second loop is required before attempting device removal.
One loop should be able to cover all the temporary device-busy conditions, and
if the condition is not temporary then a 10-second loop is not going to help anyway.
So instead of two loops of 10 seconds each, I am converting it to a single
loop of 20 seconds. Maybe a 10-second loop is good enough, but for now I am
keeping it at 20 seconds to avoid any regressions.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently in the device removal path (device deactivation), we wait
10 seconds in waitRemove() for the device to actually go away.
In the current code this is not required. If the dm removal task has completed
and one has done the wait on the udev cookie, then the device is gone and there
is no need for another loop to wait for device removal.
This patch removes waitRemove(), which waits for 10 seconds after
device removal. This seems unnecessary.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The devmapper graph driver retries device removal 1000 times in case of failure,
and this fills up the console with 1000 messages (when the daemon is running in
debug mode). So remove these debug messages.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
There are issues with libdm logging. Right now, if the docker daemon is run
in debug mode, logging by libdm is too verbose. And if a device can't
be removed, thousands of messages fill the console and one cannot see
what's going on.
This patch removes the devicemapper.LogInitVerbose() call, as that call only
works if docker has not registered its own log handler with libdm.
Docker registers one with libdm, and libdm hands over
all the messages to docker (including debug ones). So it is up to the
devmapper backend to figure out which ones should go to the console and
which ones should not.
So by default, log only fatal messages from libdm. One can easily modify
the code to change this for debugging purposes.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
It's about time to let folks not hit 'vfs' when 'overlay' is supported
on their kernel, especially now that v3.18.y is a long-term kernel.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Automatically detect support for aufs `dirperm1` option and apply it.
`dirperm1` tells aufs to check the permission bits of the directory on the
topmost branch and ignore the permission bits on all lower branches.
It can be used to fix aufs' permission bug (i.e., upper layer having
broader mask than the lower layer).
More information about the bug can be found at https://github.com/docker/docker/issues/783
`dirperm1` man page is at: http://aufs.sourceforge.net/aufs3/man.html
Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
We removed it, because upstream removed it. But now it will be coming
back, so work with it either way.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
They say we should only use the BTRFS_LIB_VERSION.
They will no longer support this, since it had to be managed manually.
Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)
In several cases graphdrivers were just returning the low-level syscall
error, and that was making it all the way up to the daemon logs, where in
many cases it was difficult to tell it was even coming from the graphdriver
at all.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
daemon/volumes.go
This SetFileCon call made no sense; it was changing the label of any
directory mounted into the container to the container's SELinux label. If it came from me,
then I apologize, since it is a huge bug.
The volumes mount code should optionally do this, but it should not always
happen, and should never happen on a --privileged container.
The change to
daemon/graphdriver/vfs/driver.go is a simplification, since this is not
a relabel; it is only setting the shared label for docker volumes.
Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)