Copyedits

This commit is contained in:
Misty Stanley-Jones 2017-06-28 08:21:10 -07:00
parent de19792eb3
commit 5dc4bd40ea
2 changed files with 274 additions and 75 deletions


@@ -8,116 +8,252 @@ title: Docker for AWS persistent data volumes
## What is Cloudstor?
Cloudstor is a modern volume plugin built by Docker. It comes pre-installed and
pre-configured in Docker swarms deployed through Docker for AWS. Docker swarm
mode tasks and regular Docker containers can use a volume created with
Cloudstor to mount a persistent data volume. In Docker for AWS, Cloudstor has
two `backing` options:
- `relocatable` data volumes are backed by EBS.
- `shared` data volumes are backed by EFS.
When you use the Docker CLI to create a swarm service along with the persistent
volumes used by the service tasks, you can create three different types of
persistent volumes:
- Unique `relocatable` Cloudstor volumes mounted by each task in a swarm service.
- Global `shared` Cloudstor volumes mounted by all tasks in a swarm service.
- Unique `shared` Cloudstor volumes mounted by each task in a swarm service.
Examples of each type of volume are described below.
## Relocatable Cloudstor volumes
Workloads running in a Docker service that require access to low latency/high
IOPS persistent storage, such as a database engine, can use a `relocatable`
Cloudstor volume backed by EBS. When you create the volume, you can specify the
type of EBS volume appropriate for the workload (such as `gp2`, `io1`, `st1`,
`sc1`). Each `relocatable` Cloudstor volume is backed by a single EBS volume.
If a swarm task using a `relocatable` Cloudstor volume gets rescheduled to
another node within the same availability zone as the original node where the
task was running, Cloudstor detaches the backing EBS volume from the original
node and attaches it to the new target node automatically.
If the swarm task gets rescheduled to a node in a different availability zone,
Cloudstor transfers the contents of the backing EBS volume to the destination
availability zone using a snapshot, and cleans up the EBS volume in the
original availability zone. To minimize the time necessary to create the
snapshot to transfer data across availability zones, Cloudstor periodically
takes snapshots of EBS volumes to ensure there is never a large number of
writes that need to be transferred as part of the final snapshot when
transferring the EBS volume across availability zones.
Typically the snapshot-based transfer process across availability zones takes
between 2 and 5 minutes unless the workload is write-heavy. For extremely
write-heavy workloads generating several GBs of fresh/new data every few
minutes, the transfer may take longer than 5 minutes. The time required to
snapshot and transfer increases sharply beyond 10 minutes if more than 20 GB of
writes have been generated since the last snapshot interval. A swarm task is
not started until the volume it mounts becomes available.
Sharing/mounting the same Cloudstor volume backed by EBS among multiple tasks
is not a supported scenario and will lead to data loss. If you need a Cloudstor
volume to share data between tasks, choose the appropriate EFS backed `shared`
volume option. Using a `relocatable` Cloudstor volume backed by EBS is
supported on all AWS regions that support EBS. The default `backing` option is
`relocatable` if EFS support is not selected during setup/installation or if
EFS is not supported in a region.
## Shared Cloudstor volumes
When multiple swarm service tasks need to share data in a persistent storage
volume, you can use a `shared` Cloudstor volume backed by EFS. Such a volume and
its contents can be mounted by multiple swarm service tasks without the risk of
data loss, since EFS makes the data available to all swarm nodes over NFS.
When swarm tasks using a `shared` Cloudstor volume get rescheduled from one node
to another within the same or across different availability zones, the
persistent data backed by EFS volumes is always available. `shared` Cloudstor
volumes work only in AWS regions where EFS is supported. If EFS support is
selected during setup/installation, the default `backing` option for Cloudstor
volumes is set to `shared`, so that EFS is used by default.
`shared` Cloudstor volumes backed by EFS (or even EFS MaxIO) may not be ideal
for workloads that require very low latency and high IOPS. For performance
details of EFS backed `shared` Cloudstor volumes, see [the AWS performance
guidelines](http://docs.aws.amazon.com/efs/latest/ug/performance.html).
## Use Cloudstor
After initializing or joining a swarm on Docker for AWS, connect to any swarm
manager using SSH. Verify that the Cloudstor plugin is already installed and
configured for the stack or resource group:
```bash
$ docker plugin ls
ID                  NAME                DESCRIPTION                       ENABLED
f416c95c0dcc        cloudstor:aws       cloud storage plugin for Docker   true
```
The following examples show how to create swarm services that require data
persistence using the `--mount` flag and specifying Cloudstor as the volume
driver.
### Share the same volume among tasks using EFS
In those regions where EFS is supported and EFS support is enabled during
deployment of the CloudFormation template, you can use `shared` Cloudstor
volumes to share access to persistent data across all tasks in a swarm service
running in multiple nodes, as in the following example:
```bash
$ docker service create \
--replicas 5 \
--name ping1 \
--mount type=volume,volume-driver=cloudstor:aws,source=sharedvol1,destination=/shareddata \
alpine ping docker.com
```
All replicas/tasks of the service `ping1` share the same persistent volume
`sharedvol1` mounted at `/shareddata` path within the container. Docker takes
care of interacting with the Cloudstor plugin to ensure that EFS is mounted on
all nodes in the swarm where service tasks are scheduled. Your application needs
to be designed to ensure that tasks do not write concurrently on the same file
at the same time, to protect against data corruption.
You can verify that the same volume is shared among all the tasks by logging
into one of the task containers, writing a file under `/shareddata/`, and
logging into another task container to verify that the file is available there
as well.
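For example, a minimal sketch of this check; the container names are
illustrative, and you can list the real ones with `docker ps` on each node:
```bash
{% raw %}
# On one node, find a task container for the ping1 service:
$ docker ps --filter name=ping1 --format '{{.Names}}'

# Write a file into the shared volume from that container:
$ docker exec <ping1-container-on-this-node> sh -c 'echo hello > /shareddata/test.txt'

# On a different node, read the file back from another task container:
$ docker exec <ping1-container-on-another-node> cat /shareddata/test.txt
hello
{% endraw %}
```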
The only option available for EFS is `perfmode`. You can set `perfmode` to
`maxio` for high IO throughput:
```bash
{% raw %}
$ docker service create \
--replicas 5 \
--name ping3 \
--mount type=volume,volume-driver=docker4x/cloudstor:aws-v{{ edition_version }},source={{.Service.Name}}-{{.Task.Slot}}-vol5,destination=/mydata,volume-opt=perfmode=maxio \
alpine ping docker.com
{% endraw %}
```
You can also create `shared` Cloudstor volumes using the
`docker volume create` CLI:
```bash
{% raw %}
docker volume create -d "cloudstor:aws" --opt backing=shared mysharedvol1
$ docker volume create -d "cloudstor:aws" --opt backing=shared mysharedvol1
{% endraw %}
```
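A volume created this way can also be mounted by a regular (non-service)
container. A minimal sketch, using the `mysharedvol1` volume created above:
```bash
# Mount the existing shared volume into a one-off container and list its contents:
$ docker run --rm -v mysharedvol1:/shareddata alpine ls /shareddata
```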
### Use a unique volume per task using EBS
If EBS is available and enabled, you can use a templatized notation with the
`docker service create` CLI to create and mount a unique `relocatable` Cloudstor
volume backed by a specified type of EBS for each task in a swarm service. New
EBS volumes typically take a few minutes to be created. Besides `backing=local`,
which selects `relocatable` EBS-backed volumes, the following volume options are
available:
| Option | Description |
|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `size` | Required parameter that indicates the size of the EBS volumes to create in GB. |
| `ebstype` | Optional parameter that indicates the type of the EBS volumes to create (`gp2`, `io1`, `st1`, `sc1`). The default `ebstype` is Standard/Magnetic. For further details about EBS volume types, see the [EBS volume type documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). |
| `iops`    | Required if the specified `ebstype` is `io1`, that is, provisioned IOPS. Must be within the range supported by EBS. |
Example usage:
```bash
{% raw %}
$ docker service create \
--replicas 5 \
--name ping3 \
--mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata,volume-opt=backing=local,volume-opt=size=25,volume-opt=ebstype=gp2 \
alpine ping docker.com
{% endraw %}
```
This example creates and mounts a distinct Cloudstor volume backed by 25 GB EBS
volumes of type `gp2` for each task of the `ping3` service. Each task mounts its
own volume at `/mydata/` and all files under that mountpoint are unique to the
task mounting the volume.
It is highly recommended that you use the `.Task.Slot` template to ensure that
task `N` always gets access to volume `N`, no matter which node it is scheduled
on. The total number of EBS volumes in the swarm should be kept below
`12 * (minimum number of nodes expected to be present at any time)` to ensure
that EC2 can properly attach EBS volumes to a node when another node fails. Use
EBS volumes only for workloads where low latency and high IOPS are absolutely
necessary.
You can also create EBS backed volumes using the `docker volume create` CLI:
```bash
{% raw %}
docker volume create -d "cloudstor:aws" --opt ebstype=io1 --opt size=25 --opt iops=1000 --opt backing=local mylocalvol1
$ docker volume create \
-d "cloudstor:aws" \
--opt ebstype=io1 \
--opt size=25 \
--opt iops=1000 \
--opt backing=local \
mylocalvol1
{% endraw %}
```
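To confirm the options a Cloudstor volume was created with, you can inspect it.
The output below is an abridged, illustrative sketch:
```bash
$ docker volume inspect mylocalvol1
[
    {
        "Driver": "cloudstor:aws",
        "Name": "mylocalvol1",
        "Options": {
            "backing": "local",
            "ebstype": "io1",
            "iops": "1000",
            "size": "25"
        }
    }
]
```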
Sharing the same `relocatable` Cloudstor volume across multiple tasks of a
service or across multiple independent containers is not supported when
`backing=local` is specified. Attempting to do so will result in IO errors.
### Use a unique volume per task using EFS
If EFS is available and enabled, you can use templatized notation to create and
mount a unique EFS-backed volume into each task of a service. This is useful if
you already have too many EBS volumes or want to reduce the amount of time it
takes to transfer volume data across availability zones.
```bash
{% raw %}
$ docker service create \
--replicas 5 \
--name ping2 \
--mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
alpine ping docker.com
{% endraw %}
```
Here, each task mounts its own volume at `/mydata/` and the files under that
mountpoint are unique to that task.
When a task with only `shared` EFS volumes mounted is rescheduled on a different
node, Docker interacts with the Cloudstor plugin to create and mount the volume
corresponding to the task on the node where the task is rescheduled. Since data
on EFS is available to all swarm nodes and can be quickly mounted and accessed,
the rescheduling process for tasks using EFS-backed volumes typically takes a
few seconds, as compared to several minutes when using EBS.
It is highly recommended that you use the `.Task.Slot` template to ensure that
task `N` always gets access to volume `N`, no matter which node it is scheduled
on.
### List or remove volumes created by Cloudstor
You can use `docker volume ls` on any node to enumerate all volumes created by
Cloudstor across the swarm.
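For example (the output is illustrative; Cloudstor volumes are listed under the
`cloudstor:aws` driver):
```bash
$ docker volume ls
DRIVER              VOLUME NAME
cloudstor:aws       mysharedvol1
cloudstor:aws       ping3-1-vol
```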
You can use `docker volume rm [volume name]` to remove a Cloudstor volume from
any node. Before you remove a volume from one node, make sure it is not in
active use on another node; tasks or containers on other nodes lose access to
their data when the volume is removed.
Before deleting a Docker4AWS stack through CloudFormation, you should remove all
`relocatable` Cloudstor volumes using `docker volume rm` from within the stack.
EBS volumes corresponding to `relocatable` Cloudstor volumes are not
automatically deleted as part of the CloudFormation stack deletion. To list any
`relocatable` Cloudstor volumes and delete them after removing the Docker4AWS
stack where the volumes were created, go to the AWS portal or CLI and set a
filter with the tag key set to `StackID` and the tag value set to the MD5 hash of
the CloudFormation Stack ID (typical format:
`arn:aws:cloudformation:us-west-2:ID:stack/swarmname/GUID`).
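A minimal AWS CLI sketch of this cleanup; the filter value and volume ID are
placeholders that you must replace with the MD5 hash of your stack ID and the
IDs returned by the first command:
```bash
# List leftover EBS volumes tagged with the stack's StackID:
$ aws ec2 describe-volumes \
    --filters "Name=tag:StackID,Values=<md5-of-stack-id>" \
    --query "Volumes[].VolumeId"

# Delete each leftover volume by its ID:
$ aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```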


@@ -8,69 +8,132 @@ title: Docker for Azure persistent data volumes
## What is Cloudstor?
Cloudstor is a modern volume plugin managed by Docker. It comes pre-installed
and pre-configured in Docker swarms deployed on Docker for Azure. Docker swarm mode
tasks and regular Docker containers can use a volume created with Cloudstor to
mount a persistent data volume. The volume stays attached to the swarm tasks no
matter which swarm node they get scheduled on or migrated to. Cloudstor relies
on shared storage infrastructure provided by Azure (specifically File Storage
shares exposed over SMB) to allow swarm tasks to create/mount their persistent
volumes on any node in the swarm.
> **Note**: Direct attached storage, which is used to satisfy very low latency /
> high IOPS requirements, is not yet supported.
You can share the same volume among tasks running the same service, or you can
use a unique volume for each task.
## Use Cloudstor
After initializing or joining a swarm on Docker for Azure, connect to any swarm
manager using SSH. Verify that the Cloudstor plugin is already installed and
configured for the stack or resource group:
```bash
$ docker plugin ls
ID                  NAME                DESCRIPTION                       ENABLED
f416c95c0dcc        cloudstor:azure     cloud storage plugin for Docker   true
```
The following examples show how to create swarm services that require data
persistence using the `--mount` flag and specifying Cloudstor as the volume
driver.
### Share the same volume among tasks
If you specify a static value for the `source` option to the `--mount` flag, a
single volume is shared among the tasks participating in the service.
```bash
$ docker service create \
--replicas 5 \
--name ping1 \
--mount type=volume,volume-driver=cloudstor:azure,source=sharedvol1,destination=/shareddata \
alpine ping docker.com
```
In this example, all replicas/tasks of the service `ping1` share the same
persistent volume `sharedvol1` mounted at `/shareddata` path within the
container. Docker takes care of interacting with the Cloudstor plugin to ensure
that the common backing store is mounted on all nodes in the swarm where service
tasks are scheduled. Your application needs to be designed to ensure that tasks
do not write concurrently on the same file at the same time, to protect against
data corruption.
You can verify that the same volume is shared among all the tasks by logging
into one of the task containers, writing a file under `/shareddata/`, and
logging into another task container to verify that the file is available there
as well.
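For instance, a minimal sketch of this check; the container names are
illustrative, and you can find the real ones with `docker ps` on each node:
```bash
# In one task container, write a file to the shared volume:
$ docker exec <ping1-container-a> sh -c 'echo hello > /shareddata/test.txt'

# In another task container, confirm the file is visible:
$ docker exec <ping1-container-b> cat /shareddata/test.txt
hello
```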
### Use a unique volume per task
You can use a templatized notation with the `docker service create` CLI to
create and mount a unique Cloudstor volume for each task in a swarm service.
```bash
{% raw %}
$ docker service create \
--replicas 5 \
--name ping2 \
--mount type=volume,volume-driver=cloudstor:azure,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
alpine ping docker.com
{% endraw %}
```
A unique volume is created and mounted for each task participating in the
`ping2` service. Each task mounts its own volume at `/mydata/` and all files
under that mountpoint are unique to the task mounting the volume.
If a task is rescheduled on a different node after the volume is created and
mounted, Docker interacts with the Cloudstor plugin to create and mount the
volume corresponding to the task on the new node.
It is highly recommended that you use the `.Task.Slot` template to ensure that
task `N` always gets access to volume `N`, no matter which node it is scheduled
on.
### Volume options
Cloudstor creates a new File Share in Azure File Storage for each volume and
uses SMB to mount these File Shares. SMB has limited compatibility with generic
Unix file ownership and permissions-related operations. Certain workloads, such
as Jenkins and Gitlab, define specific users and groups which perform different
file operations, and these types of workloads require the Cloudstor volume to be
mounted with the corresponding UID/GID. Cloudstor allows for this scenario and
provides greater control over default file permissions by exposing the following
volume options that map to SMB parameters used for mounting the backing file
share.
| Option | Description | Default |
|:-----------|:--------------------------------------------------------------------------------------------------------|:------------------------|
| `uid` | User ID that will own all files on the volume. | `0` = `root` |
| `gid` | Group ID that will own all files on the volume. | `0` = `root` |
| `filemode` | Permissions for all files on the volume. | `0777` |
| `dirmode` | Permissions for all directories on the volume. | `0777` |
| `share` | Name to associate with file share so that the share can be easily located in the Azure Storage Account. | MD5 hash of volume name |
This example sets `uid` to `1000` and `share` to `sharedvol` rather than an MD5 hash:
```bash
$ docker service create \
--replicas 5 \
--name ping1 \
--mount type=volume,volume-driver=cloudstor:azure,source=sharedvol1,destination=/shareddata,volume-opt=uid=1000,volume-opt=share=sharedvol \
alpine ping docker.com
```
### List or remove volumes created by Cloudstor
You can use `docker volume ls` on any node to enumerate all volumes created by
Cloudstor across the swarm.
You can use `docker volume rm [volume name]` to remove a Cloudstor volume from
any node. Before you remove a volume from one node, make sure it is not in
active use on another node; tasks or containers on other nodes lose access to
their data when the volume is removed.