From 4870fb36d4ab1ef221cdcd95644cb19f6da0217a Mon Sep 17 00:00:00 2001
From: Shishir Mahajan
Date: Tue, 4 Aug 2015 14:33:00 -0400
Subject: [PATCH] Warning message for lvm devmapper running on top of loopback
 devices

Signed-off-by: Shishir Mahajan
---
 daemon/graphdriver/devmapper/README.md    | 31 +++++++++++++++--------
 daemon/graphdriver/devmapper/deviceset.go |  6 +++++
 docs/reference/commandline/daemon.md      |  6 +++++
 3 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/daemon/graphdriver/devmapper/README.md b/daemon/graphdriver/devmapper/README.md
index fa130ac5e2..2f7fe2af53 100644
--- a/daemon/graphdriver/devmapper/README.md
+++ b/daemon/graphdriver/devmapper/README.md
@@ -3,22 +3,33 @@
 ### Theory of operation
 
 The device mapper graphdriver uses the device mapper thin provisioning
-module (dm-thinp) to implement CoW snapshots. For each devicemapper
-graph location (typically `/var/lib/docker/devicemapper`, $graph below)
-a thin pool is created based on two block devices, one for data and
-one for metadata. By default these block devices are created
-automatically by using loopback mounts of automatically created sparse
+module (dm-thinp) to implement CoW snapshots. The preferred model is
+to have a thin pool reserved outside of Docker and passed to the
+daemon via the `--storage-opt dm.thinpooldev` option.
+
+As a fallback if no thin pool is provided, loopback files will be
+created. Loopback is very slow, but can be used without any
+pre-configuration of storage. It is strongly recommended that you do
+not use loopback in production. Ensure your Docker daemon has a
+`--storage-opt dm.thinpooldev` argument provided.
+
+In loopback, a thin pool is created at `/var/lib/docker/devicemapper`
+(devicemapper graph location) based on two block devices, one for
+data and one for metadata. By default these block devices are created
+automatically by using loopback mounts of automatically created sparse
 files.
 
-The default loopback files used are `$graph/devicemapper/data` and
-`$graph/devicemapper/metadata`. Additional metadata required to map
-from docker entities to the corresponding devicemapper volumes is
-stored in the `$graph/devicemapper/json` file (encoded as Json).
+The default loopback files used are
+`/var/lib/docker/devicemapper/devicemapper/data` and
+`/var/lib/docker/devicemapper/devicemapper/metadata`. Additional metadata
+required to map from docker entities to the corresponding devicemapper
+volumes is stored in the `/var/lib/docker/devicemapper/devicemapper/json`
+file (encoded as JSON).
 
 In order to support multiple devicemapper graphs on a system, the thin
 pool will be named something like: `docker-0:33-19478248-pool`, where
 the `0:33` part is the minor/major device nr and `19478248` is the
-inode number of the $graph directory.
+inode number of the `/var/lib/docker/devicemapper` directory.
 
 On the thin pool, docker automatically creates a base thin device,
 called something like `docker-0:33-19478248-base` of a fixed
diff --git a/daemon/graphdriver/devmapper/deviceset.go b/daemon/graphdriver/devmapper/deviceset.go
index e1905d1dc6..70710082e4 100644
--- a/daemon/graphdriver/devmapper/deviceset.go
+++ b/daemon/graphdriver/devmapper/deviceset.go
@@ -1397,6 +1397,12 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
 		}
 	}
 
+	if devices.thinPoolDevice == "" {
+		if devices.metadataLoopFile != "" || devices.dataLoopFile != "" {
+			logrus.Warnf("Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or refer to the dm.thinpooldev section in `man docker`.")
+		}
+	}
+
 	// Right now this loads only NextDeviceID. If there is more metadata
 	// down the line, we might have to move it earlier.
 	if err := devices.loadDeviceSetMetaData(); err != nil {
diff --git a/docs/reference/commandline/daemon.md b/docs/reference/commandline/daemon.md
index 6ad15ede4c..632857c94c 100644
--- a/docs/reference/commandline/daemon.md
+++ b/docs/reference/commandline/daemon.md
@@ -192,6 +192,12 @@ options for `zfs` start with `zfs`.
     resize support, dynamically changing thin-pool features, automatic
     thinp metadata checking when lvm activates the thin-pool, etc.
 
+    As a fallback if no thin pool is provided, loopback files will be
+    created. Loopback is very slow, but can be used without any
+    pre-configuration of storage. It is strongly recommended that you do
+    not use loopback in production. Ensure your Docker daemon has a
+    `--storage-opt dm.thinpooldev` argument provided.
+
     Example use:
 
         docker daemon --storage-opt dm.thinpooldev=/dev/mapper/thin-pool
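The README hunk above explains the per-graph pool naming scheme: a name like `docker-0:33-19478248-pool` combines the device number and inode of the graph directory. A minimal Linux-only sketch of how such a name can be derived is below; the `poolNameFor` helper name and the dev_t bit layout used here are assumptions for illustration, not code from the actual devmapper package.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// poolNameFor derives a thin-pool name of the form
// docker-<major>:<minor>-<inode>-pool from a graph directory,
// following the naming scheme described in the README. Illustrative
// sketch only, not the deviceset.go implementation.
func poolNameFor(graphDir string) (string, error) {
	fi, err := os.Stat(graphDir)
	if err != nil {
		return "", err
	}
	st, ok := fi.Sys().(*syscall.Stat_t)
	if !ok {
		return "", fmt.Errorf("no raw stat info for %s", graphDir)
	}
	// Assumed Linux dev_t layout: the major and minor numbers are
	// split across the 64-bit device field.
	major := (st.Dev >> 8) & 0xfff
	minor := (st.Dev & 0xff) | ((st.Dev >> 12) & 0xfff00)
	return fmt.Sprintf("docker-%d:%d-%d-pool", major, minor, st.Ino), nil
}

func main() {
	// The real graph directory is /var/lib/docker/devicemapper; fall
	// back to the current directory so the sketch runs anywhere.
	name, err := poolNameFor("/var/lib/docker/devicemapper")
	if err != nil {
		name, _ = poolNameFor(".")
	}
	fmt.Println(name)
}
```

Because the name encodes the graph directory's identity, two daemons pointed at different graph locations on the same host get distinct pool names, which is exactly the multi-graph property the README calls out.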