Reorg service management topic (#3462)

Move some things that were previously in the CLI ref
Some rewrites and reorganization
Add placement pref image
Misty Stanley-Jones 2017-06-02 14:46:24 -07:00 committed by GitHub
parent 3a7abe1394
commit e11009b304
2 changed files with 393 additions and 188 deletions

---
keywords: guide, swarm mode, swarm, service
title: Deploy services to a swarm
---
When you are running Docker Engine in swarm mode, you run
`docker service create` to deploy your application in the swarm. The swarm
manager accepts the service description as the desired state for your
application. The built-in swarm orchestrator and scheduler deploy your
application to nodes in your swarm to achieve and maintain the desired state.
Swarm services use a *declarative* model, which means that you define the
desired state of the service, and rely upon Docker to maintain this state. The
state includes information such as (but not limited to):
- the image name and tag the service containers should run
- how many containers participate in the service
- whether any ports are exposed to clients outside the swarm
- whether the service should start automatically when Docker starts
- the specific behavior that happens when the service is restarted (such as
whether a rolling restart is used)
- characteristics of the nodes where the service can run (such as resource
constraints and placement preferences)
This guide assumes you are working with the Docker Engine running in swarm
mode. You must run all `docker service` commands from a manager node.
For an overview of swarm mode, see [Swarm mode key concepts](key-concepts.md).
For an overview of how services work, see
[How services work](how-swarm-mode-works/services.md).
## Create a service
To create a single-replica service with no extra configuration, you only need
to supply the image name. This command starts an Nginx service with a
randomly-generated name and no published ports. This is a naive example, since
you won't be able to interact with the Nginx service.
```bash
$ docker service create nginx
```
The service is scheduled on an available node. To confirm that the service
was created and started successfully, use the `docker service ls` command:
```bash
$ docker service ls
ID            NAME              MODE        REPLICAS  IMAGE                                                                                                 PORTS
a3iixnklxuem  quizzical_lamarr  replicated  1/1       docker.io/library/nginx@sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268
```
Created services do not always run right away. A service can be in a pending
state if its image is unavailable, if no node meets the requirements you
configure for the service, or for other reasons. See
[Pending services](how-swarm-mode-works/services.md#pending-services) for more
information.
To provide a name for your service, use the `--name` flag:
```bash
$ docker service create --name my_web nginx
```
Just like with standalone containers, you can specify a command that the
service's containers should run, by adding it after the image name. This example
starts a service called `helloworld` which uses an `alpine` image and runs the
command `ping docker.com`:
```bash
$ docker service create --name helloworld alpine ping docker.com
```
You can also specify an image tag for the service to use. This example modifies
the previous one to use the `alpine:3.6` tag:
```bash
$ docker service create --name helloworld alpine:3.6 ping docker.com
```
For more details about image tag resolution, see
[Specify the image version the service should use](#specify-the-image-version-the-service-should-use).
## Update a service
You can change almost everything about an existing service using the
`docker service update` command. When you update a service, Docker stops its
containers and restarts them with the new configuration.
Since Nginx is a web service, it will work much better if you publish port 80
to clients outside the swarm. You can specify this when you create the service,
using the `-p` or `--publish` flag. When updating an existing service, the flag
is `--publish-add`. There is also a `--publish-rm` flag to remove a port that
was previously published.
Assuming that the `my_web` service from the previous section still exists, use
the following command to update it to publish port 80.
```bash
$ docker service update --publish-add 80 my_web
```
To verify that it worked, use `docker service ls`:
```bash
$ docker service ls
ID            NAME    MODE        REPLICAS  IMAGE                                                                                                 PORTS
4nhxl7oxw5vz  my_web  replicated  1/1       docker.io/library/nginx@sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268      *:0->80/tcp
```
For more information on how publishing ports works, see
[publish ports](#publish-ports).
You can update almost every configuration detail about an existing service,
including the image name and tag it runs. See
[Update a service's image after creation](#update-a-services-image-after-creation).
## Remove a service
To remove a service, use the `docker service remove` command. You can remove a
service by its ID or name, as shown in the output of the `docker service ls`
command. The following command removes the `my_web` service.
```bash
$ docker service remove my_web
```
## Service configuration details
The following sections provide details about service configuration. This topic
does not cover every flag or scenario. In almost every instance where you can
define a configuration at service creation, you can also update an existing
service's configuration in a similar way.
See the command-line references for
[`docker service create`](/engine/reference/commandline/service_create/) and
[`docker service update`](/engine/reference/commandline/service_update/), or run
one of those commands with the `--help` flag.
### Configure the runtime environment
You can use the following options to configure the runtime environment of the
container:

* environment variables using the `--env` flag
* the working directory inside the container using the `--workdir` flag
* the username or UID using the `--user` flag
The following service's containers will have an environment variable `$MYVAR`
set to `myvalue`, will run from the `/tmp/` directory, and will run as the
`my_user` user.
```bash
$ docker service create --name helloworld \
  --env MYVAR=myvalue \
  --workdir /tmp \
  --user my_user \
  alpine ping docker.com
```
### Update the command an existing service runs
To update the command an existing service runs, you can use the `--args` flag.
The following example updates an existing service called `helloworld` so that
it runs the command `ping docker.com` instead of whatever command it was running
before:
```bash
$ docker service update --args "ping docker.com" helloworld
```
### Specify the image version a service should use
When you create a service without specifying any details about the version of
the image to use, the service uses the version tagged with the `latest` tag.
If the swarm manager cannot resolve the image to a digest, all is not lost:
- If this fails, the task fails to deploy and the manager tries again to deploy
the task, possibly on a different worker node.
### Configure service networking options
Swarm mode lets you network services in a couple of ways:
* publish ports externally to the swarm using ingress networking or directly on
each swarm node
* connect services and tasks within the swarm using overlay networks
### Publish ports
When you create a swarm service, you can publish that service's ports to hosts
Keep reading for more information and use cases for each of these methods.
#### Publish a service's ports using the routing mesh
To publish a service's ports externally to the swarm, use the
`--publish <TARGET-PORT>:<SERVICE-PORT>` flag. The swarm makes the service
accessible at the target port **on every swarm node**. If an external host
connects to that port on any swarm node, the routing mesh routes it to a task.
The external host does not need to know the IP addresses or internally-used
ports of the service tasks to interact with the service. When a user or process
connects to a service, any worker node running a service task may respond. For
more details about swarm service networking, see
[Manage swarm service networks](/engine/swarm/networking/).
##### Example: Run a three-task Nginx service on 10-node swarm
##### Example: Run a `nginx` web server service on every swarm node
[nginx](https://hub.docker.com/_/nginx/) is an open source reverse proxy, load
balancer, HTTP cache, and a web server. If you run nginx as a service using the
routing mesh, connecting to the nginx port on any swarm node will show you the
web page for (effectively) **a random swarm node** running the service.
The following example runs nginx as a service on each node in your swarm and
exposes nginx port locally on each swarm node.
```bash
$ docker service create \
  --mode global \
  --publish mode=host,target=80,published=8080 \
  --name=nginx \
  nginx:latest
```
You can reach the nginx server on port 8080 of every swarm node. If you add a
node to the swarm, an nginx task is started on it. You cannot start another
service or container on any swarm node which binds to port 8080.
> **Note**: This is a naive example. Creating an application-layer
> routing framework for a multi-tiered service is complex and out of scope for
> this topic.
### Connect the service to an overlay network
You can use overlay networks to connect one or more services within the swarm.
First, create an overlay network on a manager node using the
`docker network create` command with the `--driver overlay` flag:
```bash
$ docker network create --driver overlay my-network
etjpu59cykrptrgw0z0hk5snf
```
After you create an overlay network in swarm mode, all manager nodes have access
to the network.
You can create a new service and pass the `--network` flag to attach the service
to the overlay network:
```bash
$ docker service create \
  --network my-network \
  --name my-web \
  nginx
716thylsndqma81j6kkkb5aus
```
The swarm extends `my-network` to each node running the service.
You can also connect an existing service to an overlay network using the
`--network-add` flag.
```bash
$ docker service update --network-add my-network my-web
```
To disconnect a running service from a network, use the `--network-rm` flag.
```bash
$ docker service update --network-rm my-network my-web
```
For more information on overlay networking and service discovery, refer to
[Attach services to an overlay network](networking.md) and
[Docker swarm mode overlay network security model](/engine/userguide/networking/overlay-security-model.md).
### Grant a service access to secrets
To create a service with access to Docker-managed secrets, use the `--secret`
flag. For more information, see
[Manage sensitive strings (secrets) for Docker services](secrets.md).
### Control service scale and placement
{% include edge_only.md section="options" %}
Swarm mode has two types of services: replicated and global. For replicated
services, you specify the number of replica tasks for the swarm manager to
schedule onto available nodes. For global services, the scheduler places one
task on each available node.
You control the type of service using the `--mode` flag. If you don't specify a
mode, the service defaults to `replicated`. For replicated services, you specify
the number of replica tasks you want to start using the `--replicas` flag. For
example, to start a replicated nginx service with 3 replica tasks:
```bash
$ docker service create \
--name my_web \
--replicas 3 \
nginx
```
To start a global service on each available node, pass `--mode global` to
`docker service create`. Every time a new node becomes available, the scheduler
places a task for the global service on the new node. For example, to start a
service that runs alpine on every node in the swarm:
```bash
$ docker service create \
--name myservice \
--mode global \
alpine top
```
Service constraints let you set criteria for a node to meet before the scheduler
deploys a service to the node. You can apply constraints to the
service based upon node attributes and metadata or engine metadata. For more
information on constraints, refer to the `docker service create`
[CLI reference](/engine/reference/commandline/service_create.md).
Use placement preferences to divide tasks evenly over different categories of
nodes. An example of where this may be useful is balancing tasks between
multiple datacenters or availability zones. In this case, you can use a
placement preference to spread out tasks to multiple datacenters and make the
service more resilient in the face of a localized outage. You can use
additional placement preferences to further divide tasks over groups of nodes.
For example, you can balance them over multiple racks within each datacenter.
For more information on placement preferences, refer to the
`docker service create`
[CLI reference](/engine/reference/commandline/service_create.md).
### Reserve memory or CPUs for a service
To reserve a given amount of memory or number of CPUs for a service, use the
`--reserve-memory` or `--reserve-cpu` flags. If no available nodes can satisfy
the requirement (for instance, if you request 4 CPUs and no node in the swarm
has 4 CPUs), the service remains in a pending state until a node is available to
run its tasks.
### Specify service placement preferences (--placement-pref)
{% include edge_only.md section="feature" %}
You can set up the service to divide tasks evenly over different categories of
nodes. One example of where this can be useful is to balance tasks over a set
of datacenters or availability zones. The example below illustrates this:
```bash
$ docker service create \
--replicas 9 \
--name redis_2 \
--placement-pref 'spread=node.labels.datacenter' \
redis:3.0.6
```
This uses `--placement-pref` with a `spread` strategy (currently the only
supported strategy) to spread tasks evenly over the values of the `datacenter`
node label. In this example, we assume that every node has a `datacenter` node
label attached to it. If there are three different values of this label among
nodes in the swarm, one third of the tasks will be placed on the nodes
associated with each value. This is true even if there are more nodes with one
value than another. For example, consider the following set of nodes:
- Three nodes with `node.labels.datacenter=east`
- Two nodes with `node.labels.datacenter=south`
- One node with `node.labels.datacenter=west`
Since we are spreading over the values of the `datacenter` label and the
service has 9 replicas, 3 replicas will end up in each datacenter. There are
three nodes associated with the value `east`, so each one will get one of the
three replicas reserved for this value. There are two nodes with the value
`south`, and the three replicas for this value will be divided between them,
with one receiving two replicas and another receiving just one. Finally, `west`
has a single node that will get all three replicas reserved for `west`.
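The arithmetic above can be sketched in plain shell (no Docker involved; the
numbers mirror the example):

```bash
# Spread placement divides replicas evenly across label *values*, not
# across nodes. With 9 replicas and 3 values of the datacenter label:
replicas=9
label_values=3   # east, south, west
per_value=$((replicas / label_values))
echo "replicas per datacenter: $per_value"
```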
If the nodes in one category (for example, those with
`node.labels.datacenter=south`) can't handle their fair share of tasks due to
constraints or resource limitations, the extra tasks will be assigned to other
nodes instead, if possible.
Both engine labels and node labels are supported by placement preferences. The
example above uses a node label, because the label is referenced with
`node.labels.datacenter`. To spread over the values of an engine label, use
`--placement-pref spread=engine.labels.<labelname>`.
It is possible to add multiple placement preferences to a service. This
establishes a hierarchy of preferences, so that tasks are first divided over
one category, and then further divided over additional categories. One example
of where this may be useful is dividing tasks fairly between datacenters, and
then splitting the tasks within each datacenter over a choice of racks. To add
multiple placement preferences, specify the `--placement-pref` flag multiple
times. The order is significant, and the placement preferences will be applied
in the order given when making scheduling decisions.
The following example sets up a service with multiple placement preferences.
Tasks are spread first over the various datacenters, and then over racks
(as indicated by the respective labels):
```bash
$ docker service create \
--replicas 9 \
--name redis_2 \
--placement-pref 'spread=node.labels.datacenter' \
--placement-pref 'spread=node.labels.rack' \
redis:3.0.6
```
This diagram illustrates how placement preferences work:
![placement preferences example](images/placement_prefs.png)
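The two-level spread in this example can be sketched with plain shell
arithmetic (no Docker involved; the assumption of two racks per datacenter is
made up for illustration):

```bash
replicas=9
datacenters=3
racks_per_dc=2   # assumed layout: two racks in each datacenter
# First level: tasks are divided evenly across datacenter label values.
per_dc=$((replicas / datacenters))
echo "tasks per datacenter: $per_dc"
# Second level: 3 tasks cannot split evenly over 2 racks, so within each
# datacenter one rack receives 2 tasks and the other receives 1.
per_rack_min=$((per_dc / racks_per_dc))
per_rack_max=$((per_dc - per_rack_min))
echo "tasks per rack: $per_rack_min to $per_rack_max"
```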
When updating a service with `docker service update`, `--placement-pref-add`
appends a new placement preference after all existing placement preferences.
`--placement-pref-rm` removes an existing placement preference that matches the
argument.
### Configure a service's update behavior
When you create a service, you can specify a rolling update behavior for how the
swarm should apply changes to the service when you run `docker service update`.
```bash
$ docker service create \
  --update-parallelism 2 \
  --update-failure-action continue \
  alpine
```
The `--update-max-failure-ratio` flag controls what fraction of tasks can fail
seconds, which means that a task failing in the first 30 seconds after it's
started counts towards the service update failure threshold, and a failure
after that is not counted.
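The monitoring-window rule can be sketched in plain shell (the timing values
are made up; no Docker involved):

```bash
monitor=30      # the default --update-monitor window, in seconds
failure_at=12   # hypothetical: the task failed 12 seconds after starting
if [ "$failure_at" -le "$monitor" ]; then
  result="counts toward the failure threshold"
else
  result="is not counted"
fi
echo "a failure at ${failure_at}s $result"
```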
### Roll back to the previous version of a service
In case the updated version of a service doesn't function as expected, it's
possible to manually roll back to the previous version of the service using
```bash
$ docker service update \
  --rollback \
  --update-delay 0s \
  my_web
```
In Docker 17.04 and higher, you can configure a service to roll back
so it will still use the old method against an older daemon.
Finally, in Docker 17.04 and higher, `--rollback` cannot be used in conjunction
with other flags to `docker service update`.
### Automatically roll back if an update fails
You can configure a service in such a way that if an update to the service
causes redeployment to fail, the service can automatically roll back to the
previous configuration. This helps protect service availability. You can set
one or more of the following flags at service creation or update. If you do not
set a value, the default is used.
| Flag | Default | Description |
|:-------------------------------|:--------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--rollback-delay` | `0s` | Amount of time to wait after rolling back a task before rolling back the next one. A value of `0` means to roll back the second task immediately after the first rolled-back task deploys. |
| `--rollback-failure-action` | `pause` | When a task fails to roll back, whether to `pause` or `continue` trying to roll back other tasks. |
| `--rollback-max-failure-ratio` | `0`     | The failure rate to tolerate during a rollback, specified as a floating-point number between 0 and 1. For instance, given 5 tasks, a failure ratio of `.2` would tolerate one task failing to roll back. A value of `0` means no failures are tolerated, while a value of `1` means any number of failures are tolerated.   |
| `--rollback-monitor` | `5s` | Duration after each task rollback to monitor for failure. If a task stops before this time period has elapsed, the rollback is considered to have failed. |
| `--rollback-parallelism` | `1` | The maximum number of tasks to roll back in parallel. By default, one task is rolled back at a time. A value of `0` causes all tasks to be rolled back in parallel. |
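The `--rollback-max-failure-ratio` example from the table works out as follows
(plain shell arithmetic; no Docker involved):

```bash
tasks=5
ratio_hundredths=20   # --rollback-max-failure-ratio=.2, as a percentage
# Integer arithmetic: 5 * 20 / 100 = 1 tolerated rollback failure.
tolerated=$((tasks * ratio_hundredths / 100))
echo "tolerated rollback failures: $tolerated"
```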
The following example configures a `redis` service to roll back automatically
if a `docker service update` fails to deploy. Two tasks can be rolled back in
parallel, and each rolled-back task is monitored for 20 seconds.

```bash
$ docker service create --name=my_redis \
                        --replicas=5 \
                        --rollback-parallelism=2 \
                        --rollback-monitor=20s \
                        --rollback-max-failure-ratio=.2 \
                        redis:latest
```
### Give a service access to volumes or bind mounts
For best performance and portability, you should avoid writing important data
directly into a container's writable layer, instead using data volumes or bind
mounts. This principle also applies to services.
You can create two types of mounts for services in a swarm, `volume` mounts or
`bind` mounts. Regardless of which type of mount you use, configure it using the
`--mount` flag when you create a service, or the `--mount-add` or `--mount-rm`
flag when updating an existing service. The default is a data volume if you
don't specify a type.
#### Data volumes
Data volumes are storage that remains alive after the container for a task has
been removed. The preferred method to mount volumes is to use an existing
volume:
```bash
$ docker service create \
  --mount src=<VOLUME-NAME>,dst=<CONTAINER-PATH> \
  --name myservice \
  <IMAGE>
```
For more information on how to create a volume, see the `volume create`
[CLI reference](/engine/reference/commandline/volume_create.md).
The following method creates the volume at deployment time when the scheduler
dispatches a task, just before starting the container:
```bash
$ docker service create \
  --mount type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH> \
  --name myservice \
  <IMAGE>
```
#### Bind mounts
Bind mounts are file system paths from the host where the scheduler deploys
the container for the task. Docker mounts the path into the container. The
file system path must exist before the swarm initializes the container for the
task.
The following examples show bind mount syntax:
- To mount a read-write bind:
```bash
$ docker service create \
--mount type=bind,src=<HOST-PATH>,dst=<CONTAINER-PATH> \
--name myservice \
<IMAGE>
```
- To mount a read-only bind:
```bash
$ docker service create \
--mount type=bind,src=<HOST-PATH>,dst=<CONTAINER-PATH>,readonly \
--name myservice \
<IMAGE>
```
> **Important**: Bind mounts can be useful but they can also cause problems. In
> most cases, it is recommended that you architect your application such that
> mounting paths from the host is unnecessary. The main risks include the
> following:
>
> - If you bind mount a host path into your service's containers, the path
> must exist on every swarm node. The Docker swarm mode scheduler can schedule
> containers on any machine that meets resource availability requirements
> and satisfies all constraints and placement preferences you specify.
>
> - The Docker swarm mode scheduler may reschedule your running service
> containers at any time if they become unhealthy or unreachable.
>
> - Host bind mounts are completely non-portable. When you use bind mounts,
> there is no guarantee that your application will run the same way in
> development as it does in production.
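Because a bind-mounted path must exist on every node where a task might be
scheduled, a pre-flight check on each node can catch a missing path before
deployment. This is a hedged sketch: `/tmp` stands in for your real host path.

```bash
HOST_PATH=/tmp   # substitute the path your service bind mounts
if [ -d "$HOST_PATH" ]; then
  echo "ok: $HOST_PATH exists on this node"
else
  echo "missing: $HOST_PATH; tasks scheduled here would fail to start"
fi
```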
### Create services using templates
You can use templates for some flags of `service create`, using the syntax
provided by the Go [text/template](http://golang.org/pkg/text/template/)
package.
The following flags are supported:
- `--hostname`
- `--mount`
- `--env`
Valid placeholders for the Go template are:
| Placeholder | Description |
|:------------------|:---------------|
| `.Service.ID` | Service ID |
| `.Service.Name` | Service name |
| `.Service.Labels` | Service labels |
| `.Node.ID` | Node ID |
| `.Task.Name` | Task name |
| `.Task.Slot` | Task slot |
#### Template example
This example sets the hostname of the created containers based on the
service's name and the ID of the node where the container is running:
```bash
$ docker service create --name hosttempl \
                        --hostname="{{.Node.ID}}-{{.Service.Name}}" \
                        busybox top
```
To see the result of using the template, use the `docker service ps` and
`docker inspect` commands.
```bash
$ docker service ps va8ew30grofhjoychbr6iot8c
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
wo41w8hg8qan hosttempl.1 busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 2e7a8a9c4da2 Running Running about a minute ago
```
```bash
$ docker inspect --format="{{.Config.Hostname}}" hosttempl.1.wo41w8hg8qanxwjwsg4kxpprj
```
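To see what the template from the `hosttempl` example expands to, the
substitution can be sketched with plain shell variables (the node ID and
service name are made-up values):

```bash
node_id="2e7a8a9c4da2"    # stands in for {{.Node.ID}}
service_name="hosttempl"  # stands in for {{.Service.Name}}
hostname="${node_id}-${service_name}"
echo "$hostname"
```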
## Learn more