Sync vnext-engine branch to docker/docker SHA 2f12d2808464dcfdf45e0920fd508ce0ff12bd29

This branch will contain forward-looking Engine-specific docs
and be the equivalent of docker/docker master for docs
Misty Stanley-Jones 2016-10-10 09:51:23 -07:00
parent 0658ba7292
commit e3a3145cd9
111 changed files with 1136 additions and 645 deletions


@ -150,4 +150,4 @@ case `192.168.1.52:6379`.
apk add socat && \
rm -r /var/cache/
CMD env | grep _TCP= | (sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat -t 100000000 TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/' && echo wait) | sh
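To see what this `CMD` pipeline generates, you can run the `sed` expression by hand on a sample linked-container environment entry (the variable name and address below are made-up illustration values):

```shell
# One sample env entry of the form NAME_PORT_<port>_TCP=tcp://<ip>:<port>,
# piped through the same sed expression used in the CMD above.
echo 'REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379' | \
  sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat -t 100000000 TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'
# Emits: socat -t 100000000 TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379 &
```

Each emitted line is a `socat` command that listens on the linked port and forwards to the linked container; the final `echo wait` keeps the shell alive while those forwarders run in the background.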


@ -158,4 +158,4 @@ VirtualBox.
<img src="/articles/b2d_volume_images/verify.png">
You're done!


@ -67,4 +67,4 @@ docker_container 'my_nginx' do
env 'FOO=bar'
subscribes :redeploy, 'docker_image[nginx]'
end
```


@ -166,4 +166,4 @@ container:
```powershell
$containerProps = @{Name="web"; Image="node:latest"; Port="80:80"; `
Env="PORT=80"; Link="db:db"; Command="grunt"}
```


@ -13,6 +13,7 @@ list of elements they support in their templates:
- [Docker Log Tag formatting](logging/log_tags.md)
- [Docker Network Inspect formatting](../reference/commandline/network_inspect.md)
- [Docker PS formatting](../reference/commandline/ps.md#formatting)
- [Docker Stats formatting](../reference/commandline/stats.md#formatting)
- [Docker Volume Inspect formatting](../reference/commandline/volume_inspect.md)
- [Docker Version formatting](../reference/commandline/version.md#examples)
@ -26,46 +27,36 @@ This is the complete list of the available functions with examples:
Join concatenates a list of strings to create a single string.
It puts a separator between each element in the list.
{% raw %}
$ docker ps --format '{{join .Names " or "}}'
{% endraw %}
### Json
Json encodes an element as a JSON string.
{% raw %}
$ docker inspect --format '{{json .Mounts}}' container
{% endraw %}
### Lower
Lower turns a string into its lower case representation.
{% raw %}
$ docker inspect --format "{{lower .Name}}" container
{% endraw %}
### Split
Split slices a string into a list of strings separated by a separator.
{% raw %}
$ docker inspect --format '{{split (join .Names "/") "/"}}' container
{% endraw %}
### Title
Title capitalizes a string.
{% raw %}
$ docker inspect --format "{{title .Name}}" container
{% endraw %}
### Upper
Upper turns a string into its upper case representation.
{% raw %}
$ docker inspect --format "{{upper .Name}}" container
{% endraw %}


@ -82,4 +82,4 @@ and `logs:PutLogEvents` actions, as shown in the following example.
"Resource": "*"
}
]
}
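Assembled into a complete IAM policy document, the statement above might look like the following sketch (the `logs:CreateLogStream` action shown here is the usual companion permission for this driver; scope `Resource` more tightly than `"*"` where your setup allows):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```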


@ -1,8 +1,6 @@
---
description: Describes how to use the etwlogs logging driver.
keywords: ETW, docker, logging, driver
title: ETW logging driver
---
The ETW logging driver forwards container logs as ETW events.
ETW stands for Event Tracing for Windows, and is the common framework
@ -58,4 +56,4 @@ context information. Note that the time stamp is also available within the ETW e
**Note** This ETW provider emits only a message string, and not a specially
structured ETW event. Therefore, it is not required to register a manifest file
with the system to read and interpret its ETW events.


@ -28,10 +28,8 @@ The `docker logs` command is not available for this logging driver.
Some options are supported by specifying `--log-opt` as many times as needed:
{% raw %}
- `fluentd-address`: specify the socket address to connect to, in `host:port` form, for example `localhost:24224`
- `tag`: specify a tag for fluentd messages; the tag can interpret some markup, for example `{{.ID}}`, `{{.FullID}}` or `{{.Name}}`, as in `docker.{{.ID}}`
{% endraw %}
Configure the default logging driver by passing the
@ -109,4 +107,4 @@ aggregate store.
3. Start one or more containers with the `fluentd` logging driver:
$ docker run --log-driver=fluentd your/application


@ -34,6 +34,10 @@ takes precedence over information discovered from the metadata server so a
Docker daemon running in a Google Cloud Project can be overridden to log to a
different Google Cloud Project using `--gcp-project`.
Docker fetches the values for the zone, instance name, and instance ID from the
Google Cloud metadata server. These values can be provided via options if the
metadata server is not available, but they do not override values obtained from the metadata server.
## gcplogs options
You can use the `--log-opt NAME=VALUE` flag to specify these additional Google
@ -45,6 +49,9 @@ Cloud Logging driver options:
| `gcp-log-cmd` | optional | Whether to log the command that the container was started with. Defaults to false. |
| `labels` | optional | Comma-separated list of label keys to include in the message, if these labels are specified for the container. |
| `env` | optional | Comma-separated list of environment variable keys to include in the message, if these variables are specified for the container. |
| `gcp-meta-zone` | optional | Zone name for the instance. |
| `gcp-meta-name` | optional | Instance name. |
| `gcp-meta-id` | optional | Instance ID. |
If there is a collision between `label` and `env` keys, the value of the `env`
takes precedence. Both options add additional fields to the attributes of a
@ -54,6 +61,8 @@ Below is an example of the logging options required to log to the default
logging destination which is discovered by querying the GCE metadata server.
docker run --log-driver=gcplogs \
--log-opt labels=location \
--log-opt env=TEST \
--log-opt gcp-log-cmd=true \
--env "TEST=false" \
--label location=west \
@ -62,3 +71,12 @@ logging destination which is discovered by querying the GCE metadata server.
This configuration also directs the driver to include in the payload the label
`location`, the environment variable `TEST`, and the command used to start the
container.
An example of the logging options for running outside of GCE (the daemon must be
configured with `GOOGLE_APPLICATION_CREDENTIALS`):
docker run --log-driver=gcplogs \
--log-opt gcp-project=test-project \
--log-opt gcp-meta-zone=west1 \
--log-opt gcp-meta-name=`hostname` \
your/application
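To make `gcplogs` the default for every container rather than passing `--log-driver` on each `docker run`, the same options can be set daemon-wide, for example in `/etc/docker/daemon.json` (a sketch; `test-project` and `west1` are placeholder values):

```json
{
  "log-driver": "gcplogs",
  "log-opts": {
    "gcp-project": "test-project",
    "gcp-meta-zone": "west1"
  }
}
```

Restart the daemon after editing the file for the change to take effect.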


@ -97,18 +97,5 @@ import systemd.journal
reader = systemd.journal.Reader()
reader.add_match('CONTAINER_NAME=web')
for msg in reader:
print '{CONTAINER_ID_FULL}: {MESSAGE}'.format(**msg)
```
## `journald` configuration
Docker hosts with many containers may produce large amounts of logging data.
By default, `journald` limits the number of messages stored per service per
time-unit.
If your application needs large-scale logging, configure `RateLimitIntervalSec`
and `RateLimitBurst` in the `journald` configuration file. By default,
`systemd` drops messages in excess of 1000 messages per service per 30 seconds.
For more information about configuring `journald`, see the
[`journald` documentation](https://www.freedesktop.org/software/systemd/man/journald.conf.html).
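A sketch of such a configuration in `/etc/systemd/journald.conf` (the values are illustrative, not recommendations; restart `systemd-journald` afterwards):

```
[Journal]
RateLimitIntervalSec=30s
RateLimitBurst=10000
```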


@ -16,7 +16,6 @@ docker run --log-driver=fluentd --log-opt fluentd-address=myhost.local:24224 --l
Docker supports some special template markup you can use when specifying a tag's value:
{% raw %}
| Markup | Description |
|--------------------|------------------------------------------------------|
| `{{.ID}}` | The first 12 characters of the container id. |
@ -28,18 +27,15 @@ Docker supports some special template markup you can use when specifying a tag's
| `{{.DaemonName}}` | The name of the docker program (`docker`). |
For example, specifying a `--log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}"` value yields `syslog` log lines like:
{% endraw %}
```
Aug 7 18:33:19 HOSTNAME docker/hello-world/foobar/5790672ab6a0[9103]: Hello from Docker.
```
{% raw %}
At startup time, the system sets the `container_name` field and `{{.Name}}` in
the tags. If you use `docker rename` to rename a container, the new name is not
reflected in the log messages. Instead, these messages continue to use the
original container name.
{% endraw %}
For advanced usage, the generated tags use [Go
templates](http://golang.org/pkg/text/template/) and the container's [logging
@ -47,18 +43,18 @@ context](https://github.com/docker/docker/blob/master/daemon/logger/context.go).
As an example of what is possible with the syslog logger:
{% raw %}
```
$ docker run -it --rm \
--log-driver syslog \
--log-opt tag="{{ (.ExtraAttributes nil).SOME_ENV_VAR }}" \
--log-opt env=SOME_ENV_VAR \
-e SOME_ENV_VAR=logtester.1234 \
flyinprogrammer/logtester
```
{% endraw %}
Results in logs like this:
```
Apr 1 15:22:17 ip-10-27-39-73 docker/logtester.1234[45499]: + exec app
Apr 1 15:22:17 ip-10-27-39-73 docker/logtester.1234[45499]: 2016-04-01 15:22:17.075416751 +0000 UTC stderr msg: 1
```


@ -2,7 +2,6 @@
description: Configure logging driver.
keywords: docker, logging, driver, Fluentd
redirect_from:
- /engine/reference/logging/overview/
title: Configure logging drivers
---
@ -251,7 +250,10 @@ $ docker run -dit \
## `fluentd`
### Options
The `gelf-compression-level` option can be used to change the level of
compression when `gzip` or `zlib` is selected as the `gelf-compression-type`.
Accepted values range from -1 to 9 (BestCompression). Higher levels
typically run slower but compress more. The default value is 1 (BestSpeed).
The `fluentd` logging driver supports the following options:
@ -280,11 +282,6 @@ $ docker run -dit \
{% endraw %}
```
> **Note**: If the container cannot connect to the Fluentd daemon on the
> specified address and `fluentd-async-connect` is set to `false`, the container
> stops immediately.
For detailed information on working with the `fluentd` logging driver, see
[the fluentd logging driver](fluentd.md).


@ -27,21 +27,23 @@ You can set the logging driver for a specific container by using the
You can use the `--log-opt NAME=VALUE` flag to specify these additional Splunk
logging driver options:
{% raw %}
| Option                      | Required | Description |
|-----------------------------|----------|-------------|
| `splunk-token`              | required | Splunk HTTP Event Collector token. |
| `splunk-url`                | required | Path to your Splunk Enterprise or Splunk Cloud instance (including port and scheme used by HTTP Event Collector) `https://your_splunk_instance:8088`. |
| `splunk-source`             | optional | Event source. |
| `splunk-sourcetype`         | optional | Event source type. |
| `splunk-index`              | optional | Event index. |
| `splunk-capath`             | optional | Path to the root certificate. |
| `splunk-caname`             | optional | Name to use for validating the server certificate; by default the hostname of the `splunk-url` is used. |
| `splunk-insecureskipverify` | optional | Ignore server certificate validation. |
| `splunk-format`             | optional | Message format. Can be `inline`, `json` or `raw`. Defaults to `inline`. |
| `splunk-verify-connection`  | optional | Verify at startup that Docker can connect to the Splunk server. Defaults to true. |
| `splunk-gzip`               | optional | Enable or disable gzip compression when sending events to the Splunk Enterprise or Splunk Cloud instance. Defaults to false. |
| `splunk-gzip-level`         | optional | Set the compression level for gzip. Valid values are -1 (default), 0 (no compression), 1 (best speed) ... 9 (best compression). Defaults to [DefaultCompression](https://golang.org/pkg/compress/gzip/#DefaultCompression). |
| `tag`                       | optional | Specify a tag for the message; the tag can interpret some markup. The default value is `{{.ID}}` (the first 12 characters of the container ID). Refer to the [log tag option documentation](log_tags.md) for customizing the log tag format. |
| `labels`                    | optional | Comma-separated list of label keys to include in the message, if these labels are specified for the container. |
| `env`                       | optional | Comma-separated list of environment variable keys to include in the message, if these variables are specified for the container. |
{% endraw %}
If there is a collision between `label` and `env` keys, the value of the `env` key takes precedence.
Both options add additional fields to the attributes of a logging message.
@ -52,16 +54,93 @@ Docker daemon is running. The path to the root certificate and Common Name is
specified using an HTTPS scheme. This is used for verification.
The `SplunkServerDefaultCert` certificate is automatically generated by Splunk.
```bash
{% raw %}
$ docker run --log-driver=splunk \
--log-opt splunk-token=176FCEBF-4CF5-4EDF-91BC-703796522D20 \
--log-opt splunk-url=https://splunkhost:8088 \
--log-opt splunk-capath=/path/to/cert/cacert.pem \
--log-opt splunk-caname=SplunkServerDefaultCert \
--log-opt tag="{{.Name}}/{{.FullID}}" \
--log-opt labels=location \
--log-opt env=TEST \
--env "TEST=false" \
--label location=west \
your/application
{% endraw %}
```
### Message formats
By default, the logging driver sends messages in the `inline` format, where each
message is embedded as a string, for example:
```
{
"attrs": {
"env1": "val1",
"label1": "label1"
},
"tag": "MyImage/MyContainer",
"source": "stdout",
"line": "my message"
}
{
"attrs": {
"env1": "val1",
"label1": "label1"
},
"tag": "MyImage/MyContainer",
"source": "stdout",
"line": "{\"foo\": \"bar\"}"
}
```
If your messages are JSON objects, you may want to embed them in the message
sent to Splunk. If you specify `--log-opt splunk-format=json`, the driver
tries to parse every line as a JSON object and sends it as an embedded object.
If a line cannot be parsed, the message is sent as `inline`. For example:
```
{
"attrs": {
"env1": "val1",
"label1": "label1"
},
"tag": "MyImage/MyContainer",
"source": "stdout",
"line": "my message"
}
{
"attrs": {
"env1": "val1",
"label1": "label1"
},
"tag": "MyImage/MyContainer",
"source": "stdout",
"line": {
"foo": "bar"
}
}
```
The third format is the `raw` message. You can specify it by using
`--log-opt splunk-format=raw`. Attributes (environment variables and labels) and
the tag are prefixed to the message. For example:
```
MyImage/MyContainer env1=val1 label1=label1 my message
MyImage/MyContainer env1=val1 label1=label1 {"foo": "bar"}
```
## Advanced options
The Splunk logging driver allows you to configure a few advanced options by specifying the following environment variables for the Docker daemon.
| Environment variable name | Default value | Description |
|--------------------------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
| `SPLUNK_LOGGING_DRIVER_POST_MESSAGES_FREQUENCY`  | `5s`          | How often the driver posts messages when there is nothing to batch. You can think of this as the maximum time to wait for more messages to batch.   |
| `SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE` | `1000`        | How many messages the driver waits for before sending them in one batch.                                                                           |
| `SPLUNK_LOGGING_DRIVER_BUFFER_MAX`               | `10 * 1000`   | The maximum number of messages the driver buffers for retries when it cannot connect to the remote server.                                         |
| `SPLUNK_LOGGING_DRIVER_CHANNEL_SIZE`             | `4 * 1000`    | How many pending messages can be in the channel used to send messages to the background logger worker, which batches them.                         |
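On a systemd host, one way to set these variables for the daemon is a drop-in unit file such as `/etc/systemd/system/docker.service.d/splunk-env.conf` (a sketch; the batch size shown is an arbitrary example):

```
[Service]
Environment="SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE=2000"
```

Run `systemctl daemon-reload` and restart Docker for the change to take effect.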


@ -92,4 +92,4 @@ Run also contains a number of optional parameters:
> *Note:*
> The `ports`, `env`, `dns` and `volumes` attributes can be set with either a single
> string or as above with an array of values.


@ -10,4 +10,4 @@ The original content was deprecated. [An archived
version](/v1.6/articles/registry_mirror) is available in
the 1.7 documentation. For information about configuring mirrors with the latest
Docker Registry version, please file a support request with [the Distribution
project](https://github.com/docker/distribution/issues).


@ -59,8 +59,8 @@ known to the system, the hierarchy they belong to, and how many groups they cont
You can also look at `/proc/<pid>/cgroup` to see which control groups a process
belongs to. The control group will be shown as a path relative to the root of
the hierarchy mountpoint; e.g., `/` means "this process has not been assigned into
a particular group", while `/lxc/pumpkin` means that the process is likely to be
a member of a container named `pumpkin`.
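For example, you can inspect the control groups of the current shell itself; which hierarchies appear depends on your kernel and distribution:

```shell
# Each line has the form <hierarchy-id>:<subsystems>:<path>; the path is
# relative to the root of that hierarchy's mountpoint.
cat /proc/self/cgroup
```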
## Finding the cgroup for a given container
@ -273,7 +273,7 @@ program (present in the host system) within any network namespace
visible to the current process. This means that your host will be able
to enter the network namespace of your containers, but your containers
won't be able to access the host, nor their sibling containers.
Containers will be able to "see" and affect their sub-containers,
though.
The exact format of the command is:
@ -307,7 +307,7 @@ container, we need to:
- Create a symlink from `/var/run/netns/<somename>` to `/proc/<thepid>/ns/net`
- Execute `ip netns exec <somename> ....`
Please review [Enumerating Cgroups](#enumerating-cgroups) to learn how to find
the cgroup of a process running in the container of which you want to
measure network usage. From there, you can examine the pseudo-file named
`tasks`, which contains the PIDs that are in the
@ -382,4 +382,4 @@ and remove the container control group. To remove a control group, just
`rmdir` its directory. It's counter-intuitive to
`rmdir` a directory as it still contains files; but
remember that this is a pseudo-filesystem, so usual rules don't apply.
After the cleanup is done, the collection process can exit safely.


@ -200,4 +200,4 @@ When installing the binary without a package, you may want
to integrate Docker with systemd. For this, simply install the two unit files
(service and socket) from [the github
repository](https://github.com/docker/docker/tree/master/contrib/init/systemd)
to `/etc/systemd/system`.


@ -147,4 +147,4 @@ You launched a new container interactively using the `docker run` command.
That container has run Supervisor and launched the SSH and Apache daemons with
it. We've specified the `-p` flag to expose ports 22 and 80. From here we can
now identify the exposed ports and connect to one or both of the SSH and Apache
daemons.


@ -120,4 +120,4 @@ container, and then removing the image.
$ docker stop test_apt_cacher_ng
$ docker rm test_apt_cacher_ng
$ docker rmi eg_apt_cacher_ng


@ -274,4 +274,4 @@ is a console that allows you to manage a Couchbase instance. It can be seen at:
Make sure to replace the IP address with the IP address of your Docker Machine
or `localhost` if Docker is running locally.
![Couchbase Web Console](couchbase/web-console.png)


@ -40,4 +40,4 @@ This time, we're requesting shared access to `$COUCH1`'s volumes.
$ echo "Navigate to $URL in your browser. You should see the same data as in the first database"'!'
Congratulations, you are now running two Couchdb containers, completely
isolated from each other *except* for their data.


@ -1,7 +1,3 @@
# Dockerizing MongoDB: Dockerfile for building MongoDB images
# Based on ubuntu:16.04, installs MongoDB following the instructions from:
# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/


@ -98,4 +98,4 @@ $ redis 172.17.0.33:6379> exit
```
We could easily use this or other environment variables in our web
application to make a connection to our `redis`
container.


@ -99,4 +99,4 @@ Riak is a distributed database. Many production deployments consist of
[at least five nodes](
http://basho.com/why-your-riak-cluster-should-have-at-least-five-nodes/).
See the [docker-riak](https://github.com/hectcastro/docker-riak) project for
details on how to deploy a Riak cluster using Docker and Pipework.


@ -80,4 +80,4 @@ container, and then removing the image.
$ docker stop test_sshd
$ docker rm test_sshd
$ docker rmi eg_sshd
```


@ -21,23 +21,23 @@ Depending on your interest, the Docker documentation contains a wealth of inform
</tr>
<tr>
<td class="tg-031e">More about Docker for Mac, features, examples, FAQs, relationship to Docker Machine and Docker Toolbox, and how this fits in the Docker ecosystem</td>
<td class="tg-031e">[Getting Started with Docker for Mac](https://docs.docker.com/docker-for-mac/)</td>
</tr>
<tr>
<td class="tg-031e">More about Docker for Windows, features, examples, FAQs, relationship to Docker Machine and Docker Toolbox, and how this fits in the Docker ecosystem</td>
<td class="tg-031e">[Getting Started with Docker for Windows](https://docs.docker.com/docker-for-windows/)</td>
</tr>
<tr>
<td class="tg-031e">More about Docker Toolbox</td>
<td class="tg-031e">[Docker Toolbox Overview](/toolbox/overview.md)</td>
</tr>
<tr>
<td class="tg-031e">More about Docker for Linux distributions</td>
<td class="tg-031e">[Install Docker Engine on Linux](/engine/installation/linux/index.md)</td>
</tr>
<tr>
<td class="tg-031e">More advanced tutorials on running containers, building your own images, networking containers, managing data for containers, and storing images on Docker Hub</td>
<td class="tg-031e">[Learn by example](/engine/tutorials/index.md)</td>
</tr>
<tr>
<td class="tg-031e">Information about the Docker product line</td>
@ -46,11 +46,12 @@ Depending on your interest, the Docker documentation contains a wealth of inform
<tr>
<td class="tg-031e">How to set up an automated build on Docker Hub</td>
<td class="tg-031e"><a href="https://docs.docker.com/docker-hub/">Docker Hub documentation</a></td>
</tr>
<tr>
<td class="tg-031e">How to run a multi-container application with Compose</td>
<td class="tg-031e">[Docker Compose documentation](/compose/overview.md)</td>
</tr>
<tr>
<td class="tg-031e">A tutorial on Docker Swarm, which provides clustering capabilities to scale applications across multiple Docker nodes </td>


@ -35,4 +35,4 @@ target="_blank">repositories instead for your installation</a>.
> command fails for the Docker repo during installation. To work around this,
> add the key directly using the following:
>
> $ curl -fsSL https://get.docker.com/gpg | sudo apt-key add -


@ -8,9 +8,9 @@ redirect_from:
title: Install Docker and run hello-world
---
- [Step 1: Get Docker](#step-1-get-docker)
- [Step 2: Install Docker](#step-2-install-docker)
- [Step 3: Verify your installation](#step-3-verify-your-installation)
## Step 1: Get Docker


@ -14,7 +14,7 @@ image you'll use in the rest of this getting started.
## Step 1: Locate the whalesay image
1. Open your browser and <a href="https://hub.docker.com/?utm_source=getting_started_guide&utm_medium=embedded_MacOSX&utm_campaign=find_whalesay" target=_blank> browse to the Docker Hub</a>.
![Browse Docker Hub](tutimg/browse_and_search.png)
@ -66,15 +66,15 @@ Make sure Docker is running. On Docker for Mac and Docker for Windows, this is i
-----
\
\
\
## .
## ## ## ==
## ## ## ## ===
/""""""""""""""""___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\______/
The first time you run a software image, the `docker` command looks for it
on your local system. If the image isn't there, then `docker` gets it from
@ -108,15 +108,15 @@ Make sure Docker is running. On Docker for Mac and Docker for Windows, this is i
---------
\
\
\
## .
## ## ## ==
## ## ## ## ===
/""""""""""""""""___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\______/
## Where to go next
@ -127,4 +127,4 @@ it on your computer. Now, you are ready to create your own Docker image.
Go on to the next part [to build your own image](step_four.md).
&nbsp;


@ -112,4 +112,4 @@ The complete list of deprecated features can be found on the
Docker is licensed under the Apache License, Version 2.0. See
[LICENSE](https://github.com/docker/docker/blob/master/LICENSE) for the full
license text.


@ -125,9 +125,7 @@ is `https://get.docker.com/builds/Linux/x86_64/docker-1.11.0.tgz`.
> **Note** These instructions are for Docker Engine 1.11 and up. Engine 1.10 and
> under consists of a single binary, and instructions for those versions are
> different. To install version 1.10 or below, follow the instructions in the
> [1.10 documentation](/v1.10/engine/installation/binaries/){: target="_blank" class="_" }.
#### Verify downloaded files
To verify the integrity of downloaded files, you can get an MD5 or SHA256
checksum by adding `.md5` or `.sha256` to the end of the URL. For instance,
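the local verification workflow can be sketched with a stand-in file in place of a real download (`docker.tgz` here is not a real archive):

```shell
# Create a stand-in for the downloaded archive, record its checksum the way the
# published .sha256 file would, then verify it.
cd "$(mktemp -d)"
printf 'stand-in archive contents' > docker.tgz
sha256sum docker.tgz > docker.tgz.sha256
sha256sum -c docker.tgz.sha256   # reports "docker.tgz: OK" when the checksum matches
```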
@ -160,6 +158,9 @@ For example, to install the binaries in `/usr/bin`:
$ mv docker/* /usr/bin/
```
> **Note**: Depending on your current setup, you can specify custom paths
> for some of the binaries provided.
> **Note**: If you already have Engine installed on your host, make sure you
> stop Engine before installing (`killall docker`), and install the binaries
> in the same location. You can find the location of the current installation


@ -20,19 +20,19 @@ If you have not done so already, go to <a href="https://digitalocean.com" target
To generate your access token:
1. Go to the Digital Ocean administrator console and click **API** in the header.
![Click API in Digital Ocean console](../images/ocean_click_api.png)
2. Click **Generate New Token** to get to the token generator.
![Generate token](../images/ocean_gen_token.png)
3. Give the token a clever name (e.g. "machine"), make sure the **Write (Optional)** checkbox is checked, and click **Generate Token**.
![Name and generate token](../images/ocean_token_create.png)
4. Grab (copy to clipboard) the generated big long hex string and store it somewhere safe.
![Copy and save personal access token](../images/ocean_save_token.png)
@ -50,55 +50,49 @@ To generate your access token:
2. At a command terminal, use `docker-machine ls` to get a list of Docker Machines and their status.
```
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://xxx.xxx.xx.xxx:xxxx
```
6. Run some Docker commands to make sure that Docker Engine is also up-and-running.
We'll run `docker run hello-world` again, but you could try `docker ps`, `docker run docker/whalesay cowsay boo`, or another command to verify that Docker is running.
```
$ docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.
...
```
### Step 4. Use Machine to Create the Droplet
1. Run `docker-machine create` with the `digitalocean` driver and pass your key to the `--digitalocean-access-token` flag, along with a name for the new cloud server.
For this example, we'll call our new Droplet "docker-sandbox".
```
$ docker-machine create --driver digitalocean --digitalocean-access-token xxxxx docker-sandbox
Running pre-create checks...
Creating machine...
(docker-sandbox) OUT | Creating SSH key...
(docker-sandbox) OUT | Creating Digital Ocean droplet...
(docker-sandbox) OUT | Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Detecting the provisioner...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
To see how to connect Docker to this machine, run: docker-machine env docker-sandbox
```
When the Droplet is created, Docker generates a unique SSH key and stores it on your local system in `~/.docker/machines`. Initially, this is used to provision the host. Later, it's used under the hood to access the Droplet directly with the `docker-machine ssh` command. Docker Engine is installed on the cloud server and the daemon is configured to accept remote connections over TCP using TLS for authentication.
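Once the Droplet is up, the stored key makes remote access a one-liner. A quick sketch of the `docker-machine ssh` command mentioned above, using the machine name from this example:

```bash
# Open an SSH session on the Droplet over the key Machine generated:
docker-machine ssh docker-sandbox

# Or run a single remote command and return, for example to check the
# Engine version on the server:
docker-machine ssh docker-sandbox "docker version --format '{{.Server.Version}}'"
```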
2. Go to the Digital Ocean console to view the new Droplet.
![Droplet in Digital Ocean created with Machine](../images/ocean_droplet.png)
3. At the command terminal, run `docker-machine ls`.
```
$ docker-machine ls
@ -111,19 +105,17 @@ To generate your access token:
4. Run `docker-machine env docker-sandbox` to get the environment commands for the new remote host, then run `eval` as directed to re-configure the shell to connect to `docker-sandbox`.
```
$ docker-machine env docker-sandbox
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://45.55.222.72:2376"
export DOCKER_CERT_PATH="/Users/victoriabialas/.docker/machine/machines/docker-sandbox"
export DOCKER_MACHINE_NAME="docker-sandbox"
# Run this command to configure your shell:
# eval "$(docker-machine env docker-sandbox)"
$ eval "$(docker-machine env docker-sandbox)"
```
5. Re-run `docker-machine ls` to verify that our new server is the active machine, as indicated by the asterisk (*) in the ACTIVE column.
```
$ docker-machine ls
@ -132,45 +124,41 @@ To generate your access token:
docker-sandbox * digitalocean Running tcp://45.55.222.72:2376
```
6. Run some `docker-machine` commands to inspect the remote host. For example, `docker-machine ip <machine>` gets the host IP address and `docker-machine inspect <machine>` lists all the details.
```
$ docker-machine ip docker-sandbox
104.131.43.236
$ docker-machine inspect docker-sandbox
{
"ConfigVersion": 3,
"Driver": {
"IPAddress": "104.131.43.236",
"MachineName": "docker-sandbox",
"SSHUser": "root",
"SSHPort": 22,
"SSHKeyPath": "/Users/samanthastevens/.docker/machine/machines/docker-sandbox/id_rsa",
"StorePath": "/Users/samanthastevens/.docker/machine",
"SwarmMaster": false,
"SwarmHost": "tcp://0.0.0.0:3376",
"SwarmDiscovery": "",
...
```
7. Verify Docker Engine is installed correctly by running `docker` commands.
Start with something basic like `docker run hello-world`, or for a more interesting test, run a Dockerized webserver on your new remote machine.
In this example, the `-p` option is used to expose port 80 from the `nginx` container and make it accessible on port `8000` of the `docker-sandbox` host.
```
$ docker run -d -p 8000:80 --name webserver kitematic/hello-world-nginx
Unable to find image 'kitematic/hello-world-nginx:latest' locally
latest: Pulling from kitematic/hello-world-nginx
a285d7f063ea: Pull complete
2d7baf27389b: Pull complete
...
Digest: sha256:ec0ca6dcb034916784c988b4f2432716e2e92b995ac606e080c7a54b52b87066
Status: Downloaded newer image for kitematic/hello-world-nginx:latest
942dfb4a0eaae75bf26c9785ade4ff47ceb2ec2a152be82b9d7960e8b5777e65
```
In a web browser, go to `http://<host_ip>:8000` to bring up the webserver home page. You got the `<host_ip>` from the output of the `docker-machine ip <machine>` command you ran in a previous step. Use the port you exposed in the `docker run` command.
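The same check can be scripted instead of using a browser. A minimal sketch, assuming the `docker-sandbox` machine and the `8000:80` mapping from the previous step:

```bash
# Fetch the home page from the command line; -f makes curl fail on HTTP errors:
HOST_IP="$(docker-machine ip docker-sandbox)"
curl -fsS "http://${HOST_IP}:8000" >/dev/null && echo "webserver is up"
```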
@ -206,6 +194,6 @@ If you create a host with Docker Machine, but remove it through the cloud provid
* [Use Docker Machine to provision hosts on cloud providers](/machine/get-started-cloud/)
* [Install Docker Engine](../../installation/index.md)
* [Docker User Guide](../../userguide/intro.md)

View File

@ -13,4 +13,4 @@ title: Install Engine on cloud hosts
* [Understand cloud install options and choose one](overview.md)
* [Example: Use Machine to provision cloud hosts](cloud-ex-machine-ocean.md)
* [Example: Manual install on a cloud provider](cloud-ex-aws.md)

View File

@ -45,4 +45,4 @@ To do this, you use the `docker-machine create` command with the driver for the
* For supported platforms, see [Install Docker Engine](../index.md).
* To get started with Docker post-install, see [Docker User Guide](../../userguide/intro.md).

View File

@ -41,4 +41,4 @@ Instructions for installing prior releases of Docker can be found in the followi
## Where to go after installing
* [About Docker Engine](../index.md)
* [Support](https://www.docker.com/support/)
* [Training](https://training.docker.com/)

View File

@ -66,7 +66,7 @@ The `docker` package creates a new group named `docker`. Users, other than
`root` user, must be part of this group to interact with the
Docker daemon. You can add users with this command syntax:
sudo /usr/sbin/usermod -a -G docker <username>
$ sudo /usr/sbin/usermod -a -G docker <username>
Once you add a user, make sure they log out and log back in to pick up these new permissions.
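As a sketch, a new group membership can also be picked up in the current session with `newgrp`, avoiding a full logout:

```bash
# A new group is invisible to existing sessions. Either log out and back in,
# or start a subshell with the docker group already applied:
newgrp docker

# Inside that subshell, confirm the group is active:
id -nG | grep -w docker
```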

View File

@ -98,4 +98,4 @@ and volumes run the following command:
$ rm -rf /var/lib/docker
You must delete the user created configuration files manually.

View File

@ -35,23 +35,23 @@ packages.
## Install Docker Engine
There are two ways to install Docker Engine. You can [install using the `yum`
package manager](centos.md#install-with-yum). Or you can use `curl` with the [`get.docker.com`
site](centos.md#install-with-the-script). This second method runs an installation script
package manager](#install-with-yum). Or you can use `curl` with the [`get.docker.com`
site](#install-with-the-script). This second method runs an installation script
which also installs via the `yum` package manager.
### Install with yum
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing packages are up-to-date.
```bash
$ sudo yum update
```
3. Add the `yum` repo.
```bash
```none
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
@ -62,19 +62,19 @@ which also installs via the `yum` package manager.
EOF
```
4. Install the Docker package.
```bash
$ sudo yum install docker-engine
```
5. Enable the service.
```bash
$ sudo systemctl enable docker.service
```
6. Start the Docker daemon.
```bash
$ sudo systemctl start docker
@ -118,13 +118,13 @@ learn how to [customize your Systemd Docker daemon options](../../admin/systemd.
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing packages are up-to-date.
```bash
$ sudo yum update
```
3. Run the Docker installation script.
```bash
$ curl -fsSL https://get.docker.com/ | sh
@ -132,19 +132,19 @@ learn how to [customize your Systemd Docker daemon options](../../admin/systemd.
This script adds the `docker.repo` repository and installs Docker.
4. Enable the service.
```bash
$ sudo systemctl enable docker.service
```
5. Start the Docker daemon.
```bash
$ sudo systemctl start docker
```
6. Verify `docker` is installed correctly by running a test image in a container.
```bash
$ sudo docker run --rm hello-world
@ -172,23 +172,23 @@ To create the `docker` group and add your user:
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Create the `docker` group.
```bash
$ sudo groupadd docker
```
3. Add your user to `docker` group.
```bash
$ sudo usermod -aG docker your_username
```
4. Log out and log back in.
This ensures your user is running with the correct permissions.
5. Verify that your user is in the docker group by running `docker` without `sudo`.
```bash
$ docker run --rm hello-world
@ -206,7 +206,7 @@ $ sudo systemctl enable docker
You can uninstall the Docker software with `yum`.
1. List the installed Docker packages.
```bash
$ yum list installed | grep docker
@ -215,7 +215,7 @@ You can uninstall the Docker software with `yum`.
docker-engine-selinux.noarch 1.12.3-1.el7.centos @dockerrepo
```
2. Remove the package.
```bash
$ sudo yum -y remove docker-engine.x86_64
@ -225,11 +225,10 @@ You can uninstall the Docker software with `yum`.
This command does not remove images, containers, volumes, or user-created
configuration files on your host.
3. To delete all images, containers, and volumes, run the following command:
```bash
$ rm -rf /var/lib/docker
```
4. Locate and delete any user-created configuration files.

View File

@ -84,4 +84,4 @@ If you have any issues please file a bug with the
For support contact the [CRUX Mailing List](http://crux.nu/Main/MailingLists)
or join CRUX's [IRC Channels](http://crux.nu/Main/IrcChannels). on the
[FreeNode](http://freenode.net/) IRC Network.

View File

@ -33,21 +33,21 @@ packages.
## Install Docker Engine
There are two ways to install Docker Engine. You can [install using the `dnf`
package manager](fedora.md#install-with-dnf). Or you can use `curl` [with the `get.docker.com`
site](fedora.md#install-with-the-script). This second method runs an installation script
package manager](#install-with-dnf). Or you can use `curl` [with the `get.docker.com`
site](#install-with-the-script). This second method runs an installation script
which also installs via the `dnf` package manager.
### Install with DNF
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing packages are up-to-date.
```bash
$ sudo dnf update
```
3. Add the `yum` repo.
```bash
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
@ -60,19 +60,19 @@ which also installs via the `dnf` package manager.
EOF
```
4. Install the Docker package.
```bash
$ sudo dnf install docker-engine
```
5. Enable the service.
```bash
$ sudo systemctl enable docker.service
```
6. Start the Docker daemon.
```bash
$ sudo systemctl start docker
@ -118,13 +118,13 @@ You use the same installation procedure for all versions of Fedora.
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing packages are up-to-date.
```bash
$ sudo dnf update
```
3. Run the Docker installation script.
```bash
$ curl -fsSL https://get.docker.com/ | sh
@ -132,19 +132,19 @@ You use the same installation procedure for all versions of Fedora.
This script adds the `docker.repo` repository and installs Docker.
4. Enable the service.
```bash
$ sudo systemctl enable docker.service
```
5. Start the Docker daemon.
```bash
$ sudo systemctl start docker
```
6. Verify `docker` is installed correctly by running a test image in a container.
```bash
$ sudo docker run hello-world
@ -172,13 +172,13 @@ To create the `docker` group and add your user:
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Create the `docker` group.
```bash
$ sudo groupadd docker
```
3. Add your user to `docker` group.
```bash
$ sudo usermod -aG docker your_username
@ -188,7 +188,7 @@ To create the `docker` group and add your user:
This ensures your user is running with the correct permissions.
5. Verify that your user is in the docker group by running `docker` without `sudo`.
```bash
$ docker run hello-world
@ -225,7 +225,7 @@ This configuration allows IP forwarding from the container as expected.
You can uninstall the Docker software with `dnf`.
1. List the installed Docker packages.
```bash
$ dnf list installed | grep docker
@ -233,7 +233,7 @@ You can uninstall the Docker software with `dnf`.
docker-engine.x86_64 1.7.1-0.1.fc21 @/docker-engine-1.7.1-0.1.fc21.el7.x86_64
```
2. Remove the package.
```bash
$ sudo dnf -y remove docker-engine.x86_64
@ -242,10 +242,10 @@ You can uninstall the Docker software with `dnf`.
This command does not remove images, containers, volumes, or user-created
configuration files on your host.
3. To delete all images, containers, and volumes, run the following command:
```bash
$ rm -rf /var/lib/docker
```
4. Locate and delete any user-created configuration files.

View File

@ -114,4 +114,4 @@ and volumes run the following command:
$ rm -rf /var/lib/docker
You must delete the user created configuration files manually.

View File

@ -81,11 +81,11 @@ btrfs storage engine on both Oracle Linux 6 and 7.
This section contains optional procedures for configuring your Oracle Linux to work
better with Docker.
* [Create a docker group](oracle.md#create-a-docker-group)
* [Configure Docker to start on boot](oracle.md#configure-docker-to-start-on-boot)
* [Use the btrfs storage engine](oracle.md#use-the-btrfs-storage-engine)
* [Create a docker group](#create-a-docker-group)
* [Configure Docker to start on boot](#configure-docker-to-start-on-boot)
* [Use the btrfs storage engine](#use-the-btrfs-storage-engine)
### Create a Docker group
The `docker` daemon binds to a Unix socket instead of a TCP port. By default
that Unix socket is owned by the user `root` and other users can access it with
@ -201,4 +201,4 @@ Request at [My Oracle Support](https://support.oracle.com).
If you do not have an Oracle Linux Support Subscription, you can use the [Oracle
Linux
Forum](https://community.oracle.com/community/server_%26_storage_systems/linux/oracle_linux) for community-based support.

View File

@ -32,21 +32,21 @@ packages.
## Install Docker Engine
There are two ways to install Docker Engine. You can [install using the `yum`
package manager](rhel.md#install-with-yum). Or you can use `curl` with the [`get.docker.com`
site](rhel.md#install-with-the-script). This second method runs an installation script
package manager](#install-with-yum). Or you can use `curl` with the [`get.docker.com`
site](#install-with-the-script). This second method runs an installation script
which also installs via the `yum` package manager.
### Install with yum
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing packages are up-to-date.
```bash
$ sudo yum update
```
3. Add the `yum` repo.
```bash
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
@ -59,19 +59,19 @@ which also installs via the `yum` package manager.
EOF
```
4. Install the Docker package.
```bash
$ sudo yum install docker-engine
```
5. Enable the service.
```bash
$ sudo systemctl enable docker.service
```
6. Start the Docker daemon.
```bash
$ sudo systemctl start docker
@ -113,15 +113,15 @@ learn how to [customize your Systemd Docker daemon options](../../admin/systemd.
### Install with the script
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing packages are up-to-date.
```bash
$ sudo yum update
```
3. Run the Docker installation script.
```bash
$ curl -fsSL https://get.docker.com/ | sh
@ -129,19 +129,19 @@ learn how to [customize your Systemd Docker daemon options](../../admin/systemd.
This script adds the `docker.repo` repository and installs Docker.
4. Enable the service.
```bash
$ sudo systemctl enable docker.service
```
5. Start the Docker daemon.
```bash
$ sudo systemctl start docker
```
6. Verify `docker` is installed correctly by running a test image in a container.
```bash
$ sudo docker run hello-world
@ -167,25 +167,25 @@ makes the ownership of the Unix socket read/writable by the `docker` group.
To create the `docker` group and add your user:
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Create the `docker` group.
```bash
$ sudo groupadd docker
```
3. Add your user to `docker` group.
```bash
$ sudo usermod -aG docker your_username
```
4. Log out and log back in.
4. Log out and log back in.
This ensures your user is running with the correct permissions.
5. Verify that your user is in the docker group by running `docker` without `sudo`.
```bash
$ docker run hello-world
@ -203,7 +203,7 @@ $ sudo systemctl enable docker
You can uninstall the Docker software with `yum`.
1. List the installed Docker packages.
```bash
$ yum list installed | grep docker
@ -211,7 +211,7 @@ You can uninstall the Docker software with `yum`.
docker-engine.x86_64 1.7.1-0.1.el7@/docker-engine-1.7.1-0.1.el7.x86_64
```
2. Remove the package.
```bash
$ sudo yum -y remove docker-engine.x86_64
@ -220,10 +220,10 @@ You can uninstall the Docker software with `yum`.
This command does not remove images, containers, volumes, or user-created
configuration files on your host.
3. To delete all images, containers, and volumes, run the following command:
```bash
$ rm -rf /var/lib/docker
```
4. Locate and delete any user-created configuration files.

View File

@ -6,8 +6,8 @@ title: Install Docker on macOS
You have two options for installing Docker on Mac:
- [Docker for Mac](mac.md#docker-for-mac)
- [Docker Toolbox](mac.md#docker-toolbox)
- [Docker for Mac](#docker-for-mac)
- [Docker Toolbox](#docker-toolbox)
## Docker for Mac
@ -46,4 +46,4 @@ Your Mac must be running macOS 10.8 "Mountain Lion" or newer to install the Dock
* If you are interested in using the Kitematic GUI, see the [Kitematic user guide](/kitematic/userguide/).
> **Note**: The Boot2Docker command line was deprecated several releases back in favor of Docker Machine, and now Docker for Mac.

View File

@ -6,8 +6,8 @@ title: Install Docker on Windows
You have two options for installing Docker on Windows:
- [Docker for Windows](windows.md#docker-for-windows)
- [Docker Toolbox](windows.md#docker-toolbox)
- [Docker for Windows](#docker-for-windows)
- [Docker Toolbox](#docker-toolbox)
## Docker for Windows
@ -41,4 +41,4 @@ To run Docker, your machine must have a 64-bit operating system running Windows
* If you are interested in using the Kitematic GUI, see the [Kitematic user guide](/kitematic/userguide/).
> **Note**: The Boot2Docker command line was deprecated several releases back in favor of Docker Machine, and now Docker for Windows.

View File

@ -74,4 +74,4 @@ the default path then you would run:
If you use the
devicemapper storage driver, you also need to pass the flag `--privileged` to
give the tool access to your storage devices.

View File

@ -0,0 +1,41 @@
<!--[metadata]>
+++
title = "container prune"
description = "Remove all stopped containers"
keywords = ["container, prune, delete, remove"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# container prune
```markdown
Usage: docker container prune [OPTIONS]
Remove all stopped containers
Options:
-f, --force Do not prompt for confirmation
--help Print usage
```
## Examples
```bash
$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
4a7f7eebae0f63178aff7eb0aa39cd3f0627a203ab2df258c1a00b456cf20063
f98f9c2aa1eaf727e4ec9c0283bc7d4aa4762fbdba7f26191f26c97f64090360
Total reclaimed space: 212 B
```
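In scripts, pass `-f` to skip the confirmation prompt. A sketch that also pulls the reclaimed space out of the output for logging:

```bash
# Non-interactive prune, for example in a CI cleanup step; the awk line
# extracts the "Total reclaimed space" figure from the command output:
docker container prune -f |
  awk -F': ' '/^Total reclaimed space/ {print "reclaimed:", $2}'
```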
## Related information
* [system df](system_df.md)
* [volume prune](volume_prune.md)
* [image prune](image_prune.md)
* [system prune](system_prune.md)

View File

@ -0,0 +1,65 @@
<!--[metadata]>
+++
title = "image prune"
description = "Remove unused images"
keywords = ["image, prune, delete, remove"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# image prune
```markdown
Usage: docker image prune [OPTIONS]
Remove unused images
Options:
-a, --all Remove all unused images, not just dangling ones
-f, --force Do not prompt for confirmation
--help Print usage
```
Removes all dangling images. If `-a` is specified, it also removes all images not referenced by any container.
Example output:
```bash
$ docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: alpine:latest
untagged: alpine@sha256:3dcdb92d7432d56604d4545cbd324b14e647b313626d99b889d0626de158f73a
deleted: sha256:4e38e38c8ce0b8d9041a9c4fefe786631d1416225e13b0bfe8cfa2321aec4bba
deleted: sha256:4fe15f8d0ae69e169824f25f1d4da3015a48feeeeebb265cd2e328e15c6a869f
untagged: alpine:3.3
untagged: alpine@sha256:4fa633f4feff6a8f02acfc7424efd5cb3e76686ed3218abf4ca0fa4a2a358423
untagged: my-jq:latest
deleted: sha256:ae67841be6d008a374eff7c2a974cde3934ffe9536a7dc7ce589585eddd83aff
deleted: sha256:34f6f1261650bc341eb122313372adc4512b4fceddc2a7ecbb84f0958ce5ad65
deleted: sha256:cf4194e8d8db1cb2d117df33f2c75c0369c3a26d96725efb978cc69e046b87e7
untagged: my-curl:latest
deleted: sha256:b2789dd875bf427de7f9f6ae001940073b3201409b14aba7e5db71f408b8569e
deleted: sha256:96daac0cb203226438989926fc34dd024f365a9a8616b93e168d303cfe4cb5e9
deleted: sha256:5cbd97a14241c9cd83250d6b6fc0649833c4a3e84099b968dd4ba403e609945e
deleted: sha256:a0971c4015c1e898c60bf95781c6730a05b5d8a2ae6827f53837e6c9d38efdec
deleted: sha256:d8359ca3b681cc5396a4e790088441673ed3ce90ebc04de388bfcd31a0716b06
deleted: sha256:83fc9ba8fb70e1da31dfcc3c88d093831dbd4be38b34af998df37e8ac538260c
deleted: sha256:ae7041a4cc625a9c8e6955452f7afe602b401f662671cea3613f08f3d9343b35
deleted: sha256:35e0f43a37755b832f0bbea91a2360b025ee351d7309dae0d9737bc96b6d0809
deleted: sha256:0af941dd29f00e4510195dd00b19671bc591e29d1495630e7e0f7c44c1e6a8c0
deleted: sha256:9fc896fc2013da84f84e45b3096053eb084417b42e6b35ea0cce5a3529705eac
deleted: sha256:47cf20d8c26c46fff71be614d9f54997edacfe8d46d51769706e5aba94b16f2b
deleted: sha256:2c675ee9ed53425e31a13e3390bf3f539bf8637000e4bcfbb85ee03ef4d910a1
Total reclaimed space: 16.43 MB
```
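For unattended cleanup jobs, `-f` skips the prompt. One hedged sketch counts the deleted layers so the job can log what it removed:

```bash
# Remove every unused image without prompting, then count the "deleted:"
# lines in the output (one per removed layer):
docker image prune -a -f | grep -c '^deleted:'
```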
## Related information
* [system df](system_df.md)
* [container prune](container_prune.md)
* [volume prune](volume_prune.md)
* [system prune](system_prune.md)

View File

@ -0,0 +1,37 @@
<!--[metadata]>
+++
title = "stack ls"
description = "The stack ls command description and usage"
keywords = ["stack, ls"]
advisory = "experimental"
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# stack ls (experimental)
```markdown
Usage: docker stack ls
List stacks
```
Lists the stacks.
For example, the following command shows all stacks and some additional information:
```bash
$ docker stack ls
ID SERVICES
vossibility-stack 6
myapp 2
```
## Related information
* [stack config](stack_config.md)
* [stack deploy](stack_deploy.md)
* [stack rm](stack_rm.md)
* [stack tasks](stack_tasks.md)

View File

@ -0,0 +1,68 @@
<!--[metadata]>
+++
title = "system df"
description = "The system df command description and usage"
keywords = ["system, data, usage, disk"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# system df
```markdown
Usage: docker system df [OPTIONS]
Show docker filesystem usage
Options:
--help Print usage
-v, --verbose Show detailed information on space usage
```
The `docker system df` command displays information regarding the
amount of disk space used by the docker daemon.
By default, the command shows just a summary of the data used:
```bash
$ docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 5 2 16.43 MB 11.63 MB (70%)
Containers 2 0 212 B 212 B (100%)
Local Volumes 2 1 36 B 0 B (0%)
```
A more detailed view can be requested using the `-v, --verbose` flag:
```bash
$ docker system df -v
Images space usage:
REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS
my-curl latest b2789dd875bf 6 minutes ago 11 MB 11 MB 5 B 0
my-jq latest ae67841be6d0 6 minutes ago 9.623 MB 8.991 MB 632.1 kB 0
<none> <none> a0971c4015c1 6 minutes ago 11 MB 11 MB 0 B 0
alpine latest 4e38e38c8ce0 9 weeks ago 4.799 MB 0 B 4.799 MB 1
alpine 3.3 47cf20d8c26c 9 weeks ago 4.797 MB 4.797 MB 0 B 1
Containers space usage:
CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES
4a7f7eebae0f alpine:latest "sh" 1 0 B 16 minutes ago Exited (0) 5 minutes ago hopeful_yalow
f98f9c2aa1ea alpine:3.3 "sh" 1 212 B 16 minutes ago Exited (0) 48 seconds ago anon-vol
Local Volumes space usage:
NAME LINKS SIZE
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e 2 36 B
my-named-vol 0 0 B
```
* `SHARED SIZE` is the amount of space that an image shares with another one (i.e. their common data)
* `UNIQUE SIZE` is the amount of space that is only used by a given image
* `SIZE` is the virtual size of the image; it is the sum of `SHARED SIZE` and `UNIQUE SIZE`
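The summary columns can be extracted with standard tools. A sketch that pulls the image `RECLAIMABLE` column, assuming the default column layout shown above:

```bash
# awk splits on whitespace, so "11.63 MB (70%)" in the Images row lands in
# fields 6, 7, and 8; for the sample table above this prints: 11.63 MB (70%)
docker system df | awk '$1 == "Images" {print $6, $7, $8}'
```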
## Related information
* [system prune](system_prune.md)
* [container prune](container_prune.md)
* [volume prune](volume_prune.md)
* [image prune](image_prune.md)

View File

@ -0,0 +1,70 @@
<!--[metadata]>
+++
title = "system prune"
description = "Remove unused data"
keywords = ["system, prune, delete, remove"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# system prune
```markdown
Usage: docker system prune [OPTIONS]
Delete unused data
Options:
-a, --all Remove all unused images not just dangling ones
-f, --force Do not prompt for confirmation
--help Print usage
```
Remove all unused containers, volumes and images (both dangling and unreferenced).
Example output:
```bash
$ docker system prune -a
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all images without at least one container associated to them
Are you sure you want to continue? [y/N] y
Deleted Containers:
0998aa37185a1a7036b0e12cf1ac1b6442dcfa30a5c9650a42ed5010046f195b
73958bfb884fa81fa4cc6baf61055667e940ea2357b4036acbbe25a60f442a4d
Deleted Volumes:
named-vol
Deleted Images:
untagged: my-curl:latest
deleted: sha256:7d88582121f2a29031d92017754d62a0d1a215c97e8f0106c586546e7404447d
deleted: sha256:dd14a93d83593d4024152f85d7c63f76aaa4e73e228377ba1d130ef5149f4d8b
untagged: alpine:3.3
deleted: sha256:695f3d04125db3266d4ab7bbb3c6b23aa4293923e762aa2562c54f49a28f009f
untagged: alpine:latest
deleted: sha256:ee4603260daafe1a8c2f3b78fd760922918ab2441cbb2853ed5c439e59c52f96
deleted: sha256:9007f5987db353ec398a223bc5a135c5a9601798ba20a1abba537ea2f8ac765f
deleted: sha256:71fa90c8f04769c9721459d5aa0936db640b92c8c91c9b589b54abd412d120ab
deleted: sha256:bb1c3357b3c30ece26e6604aea7d2ec0ace4166ff34c3616701279c22444c0f3
untagged: my-jq:latest
deleted: sha256:6e66d724542af9bc4c4abf4a909791d7260b6d0110d8e220708b09e4ee1322e1
deleted: sha256:07b3fa89d4b17009eb3988dfc592c7d30ab3ba52d2007832dffcf6d40e3eda7f
deleted: sha256:3a88a5c81eb5c283e72db2dbc6d65cbfd8e80b6c89bb6e714cfaaa0eed99c548
Total reclaimed space: 13.5 MB
```
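In scripts or scheduled cleanup jobs you may want to skip the confirmation prompt. A minimal sketch using the `-f` flag, assuming you are prepared to lose any unused data without review:

```shell
# Non-interactive cleanup; -f skips the confirmation prompt.
# Use with care: this removes stopped containers, unused volumes,
# and all unused images in one pass.
docker system prune -a -f
```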
## Related information
* [volume create](volume_create.md)
* [volume ls](volume_ls.md)
* [volume inspect](volume_inspect.md)
* [volume rm](volume_rm.md)
* [Understand Data Volumes](../../tutorials/dockervolumes.md)
* [system df](system_df.md)
* [container prune](container_prune.md)
* [image prune](image_prune.md)
* [system prune](system_prune.md)

View File

@ -0,0 +1,48 @@
<!--[metadata]>
+++
title = "volume prune"
description = "Remove unused volumes"
keywords = ["volume", "prune", "delete"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# volume prune
```markdown
Usage: docker volume prune [OPTIONS]
Remove all unused volumes
Options:
-f, --force Do not prompt for confirmation
--help Print usage
```
Remove all unused volumes. Unused volumes are those which are not referenced by any containers.
Example output:
```bash
$ docker volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e
my-named-vol
Total reclaimed space: 36 B
```
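As with `docker system prune`, the `-f` flag makes this command usable from scripts. A minimal sketch, assuming the data in unused volumes is disposable:

```shell
# Remove all unused volumes without prompting for confirmation.
docker volume prune -f
```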
## Related information
* [volume create](volume_create.md)
* [volume ls](volume_ls.md)
* [volume inspect](volume_inspect.md)
* [volume rm](volume_rm.md)
* [Understand Data Volumes](../../tutorials/dockervolumes.md)
* [system df](system_df.md)
* [container prune](container_prune.md)
* [image prune](image_prune.md)
* [system prune](system_prune.md)

View File

@ -52,7 +52,7 @@ profile docker-default flags=(attach_disconnected,mediate_deleted) {
deny /sys/fs/[^c]*/** wklx,
deny /sys/fs/c[^g]*/** wklx,
deny /sys/fs/cg[^r]*/** wklx,
deny /sys/firmware/** rwklx,
deny /sys/kernel/security/** rwklx,
}
```
@ -168,7 +168,7 @@ profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
deny /sys/fs/[^c]*/** wklx,
deny /sys/fs/c[^g]*/** wklx,
deny /sys/fs/cg[^r]*/** wklx,
deny /sys/firmware/** rwklx,
deny /sys/kernel/security/** rwklx,
}
```
@ -308,4 +308,4 @@ Advanced users and package managers can find a profile for `/usr/bin/docker`
in the Docker Engine source repository.
The `docker-default` profile for containers lives in
[profiles/apparmor](https://github.com/docker/docker/tree/master/profiles/apparmor).

View File

@ -53,7 +53,7 @@ creating an os-provided bundled certificate chain.
## Creating the client certificates
You will use OpenSSL's `genrsa` and `req` commands to first generate an RSA
key and then use the key to create the certificate.
$ openssl genrsa -out client.key 4096
$ openssl req -new -x509 -text -key client.key -out client.cert
@ -65,7 +65,7 @@ key and then use the key to create the certificate.
## Troubleshooting tips
The Docker daemon interprets `.crt` files as CA certificates and `.cert` files
as client certificates. If a CA certificate is accidentally given the extension
`.cert` instead of the correct `.crt` extension, the Docker daemon logs the
following error message:

View File

@ -209,4 +209,4 @@ flags:
## Related information
* [Using certificates for repository client verification](certificates.md)
* [Use trusted images](trust/index.md)

View File

@ -1,7 +1,3 @@
FROM debian
RUN apt-get update && apt-get install -yq openssl

View File

@ -1,6 +1,3 @@
HOST:=boot2docker

engine/security/https/make_certs.sh Normal file → Executable file
View File

@ -1,5 +1,3 @@
#!/bin/sh
openssl genrsa -aes256 -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem

engine/security/https/parsedocs.sh Normal file → Executable file
View File

@ -1,5 +1,3 @@
#!/bin/sh
echo "#!/bin/sh"

View File

@ -14,4 +14,4 @@ This section discusses the security features you can configure and use within yo
* You can configure secure computing mode (Seccomp) policies to secure system calls in a container. For more information, see [Seccomp security profiles for Docker](seccomp.md).
* An AppArmor profile for Docker is installed with the official *.deb* packages. For information about this profile and overriding it, see [AppArmor security profiles for Docker](apparmor.md).

View File

@ -33,24 +33,65 @@ compatibility. The default Docker profile (found [here](https://github.com/docke
```json
{
    "defaultAction": "SCMP_ACT_ERRNO",
    "archMap": [
        {
            "architecture": "SCMP_ARCH_X86_64",
            "subArchitectures": [
                "SCMP_ARCH_X86",
                "SCMP_ARCH_X32"
            ]
        },
        ...
    ],
    "syscalls": [
        {
            "names": [
                "accept",
                "accept4",
                "access",
                "alarm",
                "bind",
                "brk",
                ...
                "waitid",
                "waitpid",
                "write",
                "writev"
            ],
            "action": "SCMP_ACT_ALLOW",
            "args": [],
            "comment": "",
            "includes": {},
            "excludes": {}
        },
        {
            "names": [
                "clone"
            ],
            "action": "SCMP_ACT_ALLOW",
            "args": [
                {
                    "index": 1,
                    "value": 2080505856,
                    "valueTwo": 0,
                    "op": "SCMP_CMP_MASKED_EQ"
                }
            ],
            "comment": "s390 parameter ordering for clone is different",
            "includes": {
                "arches": [
                    "s390",
                    "s390x"
                ]
            },
            "excludes": {
                "caps": [
                    "CAP_SYS_ADMIN"
                ]
            }
        },
        ...
    ]
}
```
@ -132,4 +173,4 @@ profile.
```
$ docker run --rm -it --security-opt seccomp=unconfined debian:jessie \
unshare --map-root-user --user sh -c whoami
```
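The default profile can also be replaced rather than disabled entirely. A minimal sketch, assuming you have saved a modified profile to the hypothetical path `/path/to/profile.json`:

```shell
# Run a container with a custom seccomp profile in place of the default one.
# The profile path is an example; point it at your own modified JSON file.
docker run --rm -it \
    --security-opt seccomp=/path/to/profile.json \
    debian:jessie sh
```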

View File

@ -289,4 +289,4 @@ Because the tag `docker/trusttest:latest` is not trusted, the `pull` fails.
* [Manage keys for content trust](trust_key_mng.md)
* [Automation with content trust](trust_automation.md)
* [Delegations for content trust](trust_delegation.md)
* [Play in a content trust sandbox](trust_sandbox.md)

View File

@ -25,4 +25,4 @@ for [Notary](https://github.com/docker/notary#using-notary) depending on which o
Please check back here for instructions after Notary Server has an official
stable release. To get a head start on deploying Notary in production see
https://github.com/docker/notary.

View File

@ -10,4 +10,4 @@ The following topics are available:
* [Manage keys for content trust](trust_key_mng.md)
* [Automation with content trust](trust_automation.md)
* [Delegations for content trust](trust_delegation.md)
* [Play in a content trust sandbox](trust_sandbox.md)

View File

@ -72,4 +72,4 @@ unable to process Dockerfile: No trust data for notrust
* [Content trust in Docker](content_trust.md)
* [Manage keys for content trust](trust_key_mng.md)
* [Delegations for content trust](trust_delegation.md)
* [Play in a content trust sandbox](trust_sandbox.md)

View File

@ -96,7 +96,7 @@ to rotate the snapshot key specifically, and you want the server to manage it (`
stands for "remote").
When adding a delegation, you must acquire
[the PEM-encoded x509 certificate with the public key](#generating-delegation-keys)
of the collaborator you wish to delegate to.
Assuming you have the certificate `delegation.crt`, you can add a delegation
@ -217,4 +217,4 @@ the legacy tags that were signed directly with the `targets` key.
* [Content trust in Docker](content_trust.md)
* [Manage keys for content trust](trust_key_mng.md)
* [Automation with content trust](trust_automation.md)
* [Play in a content trust sandbox](trust_sandbox.md)

View File

@ -91,4 +91,4 @@ the new key.
* [Content trust in Docker](content_trust.md)
* [Automation with content trust](trust_automation.md)
* [Delegations for content trust](trust_delegation.md)
* [Play in a content trust sandbox](trust_sandbox.md)

View File

@ -284,4 +284,4 @@ When you are done, and want to clean up all the services you've started and any
anonymous volumes that have been created, just run the following command in the
directory where you've created your Docker Compose file:
$ docker-compose down -v

View File

@ -107,7 +107,7 @@ While it is possible to scale a swarm down to a single manager node, it is
impossible to demote the last manager node. This ensures you maintain access to
the swarm and that the swarm can still process requests. Scaling down to a
single manager is an unsafe operation and is not recommended. If
the last node leaves the swarm unexpectedly during the demote operation, the
swarm will become unavailable until you reboot the node or restart with
`--force-new-cluster`.
@ -176,17 +176,17 @@ for more information.
From the command line, run `docker node inspect <id-node>` to query the nodes.
For instance, to query the reachability of the node as a manager:
```bash
docker node inspect manager1 --format "{{ .ManagerStatus.Reachability }}"
reachable
```
To query the status of the node as a worker that accept tasks:
```bash
docker node inspect manager1 --format "{{ .Status.State }}"
ready
```
From those commands, we can see that `manager1` is both at the status
`reachable` as a manager and `ready` as a worker.

View File

@ -11,8 +11,8 @@ cluster of one or more Docker Engines called a swarm. A swarm consists
of one or more nodes: physical or virtual machines running Docker
Engine 1.12 or later in swarm mode.
There are two types of nodes: [**managers**](#manager-nodes) and
[**workers**](#worker-nodes).
![Swarm mode cluster](../images/swarm-diagram.png)
@ -80,4 +80,4 @@ You can also demote a manager node to a worker node. See
## Learn More
* Read about how swarm mode [services](services.md) work.
* Learn how [PKI](pki.md) works in swarm mode

View File

@ -53,7 +53,7 @@ that spawns a new container.
A task is a one-directional mechanism. It progresses monotonically through a
series of states: assigned, prepared, running, etc. If the task fails the
orchestrator removes the task and its container and then creates a new task to
replace it according to the desired state specified by the service.
The underlying logic of Docker swarm mode is a general purpose scheduler and

View File

@ -10,7 +10,7 @@ swarm mode. To take full advantage of swarm mode you can add nodes to the swarm:
* Adding worker nodes increases capacity. When you deploy a service to a swarm,
the Engine schedules tasks on available nodes whether they are worker nodes or
manager nodes. When you add workers to your swarm, you increase the scale of
the swarm to handle tasks without affecting the manager raft consensus.
* Manager nodes increase fault-tolerance. Manager nodes perform the
orchestration and cluster management functions for the swarm. Among manager
nodes, a single leader node conducts orchestration tasks. If a leader node
@ -102,4 +102,4 @@ This node joined a swarm as a manager.
## Learn More
* `swarm join`[command line reference](../reference/commandline/swarm_join.md)
* [Swarm mode tutorial](swarm-tutorial/index.md)

View File

@ -28,7 +28,7 @@ A **node** is an instance of the Docker engine participating in the swarm. You c
To deploy your application to a swarm, you submit a service definition to a
**manager node**. The manager node dispatches units of work called
[tasks](#services-and-tasks) to worker nodes.
Manager nodes also perform the orchestration and cluster management functions
required to maintain the desired state of the swarm. Manager nodes elect a

View File

@ -6,10 +6,10 @@ title: Manage nodes in a swarm
As part of the swarm management lifecycle, you may need to view or update a node as follows:
* [list nodes in the swarm](#list-nodes)
* [inspect an individual node](#inspect-an-individual-node)
* [update a node](#update-a-node)
* [leave the swarm](#leave-the-swarm)
## List nodes

View File

@ -45,7 +45,7 @@ different nodes. For more information, refer to [Docker swarm mode overlay netwo
The `--subnet` flag specifies the subnet for use with the overlay network. When
you don't specify a subnet, the swarm manager automatically chooses a subnet and
assigns it to the network. On some older kernels, including kernel 3.10,
automatically assigned addresses may overlap with another subnet in your
infrastructure. Such overlaps can cause connectivity issues or failures with containers connected to the network.
Before you attach a service to the network, the network only extends to manager
@ -92,10 +92,10 @@ tasks are running for the service:
```bash
$ docker service ps my-web
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
my-web.1.63s86gf6a0ms34mvboniev7bs nginx node1 Running Running 58 seconds ago
my-web.2.6b3q2qbjveo4zauc6xig7au10 nginx node2 Running Running 58 seconds ago
my-web.3.66u2hcrz0miqpc8h0y0f3v7aw nginx node3 Running Running about a minute ago
```
![service vip image](images/service-vip.png)
@ -164,7 +164,7 @@ active tasks.
You can inspect the service to view the virtual IP. For example:
```bash
$ docker service inspect \
--format='{% raw %}{{json .Endpoint.VirtualIPs}}{% endraw %}' \
my-web
@ -192,8 +192,8 @@ using the DNS name `my-web`:
```bash
$ docker service ps my-busybox
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
my-busybox.1.1dok2cmx2mln5hbqve8ilnair busybox node1 Running Running 5 seconds ago
```
3. From the node where the busybox task is running, open an interactive shell to
@ -288,7 +288,7 @@ Address 3: 10.0.9.9 my-dnsrr-service.2.am6fx47p3bropyy2dy4f8hofb.my-network
## Confirm VIP connectivity
In general we recommend you use `dig`, `nslookup`, or another DNS query tool to
test access to the service name via DNS. Because a VIP is a logical IP, `ping`
is not the right tool to confirm VIP connectivity.
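For example, from a container attached to the same overlay network you could resolve the service name rather than ping its VIP (`my-web` is the service name used earlier on this page):

```shell
# Resolve the service's virtual IP by name from inside the network;
# ping against a VIP would not prove reachability.
nslookup my-web
```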

View File

@ -35,4 +35,4 @@ the properties inherent to distributed systems:
and the [Raft Consensus Algorithm paper](https://www.usenix.org/system/files/conference/atc14/atc14-paper-ongaro.pdf))
- *mutual exclusion* through the leader election process
- *cluster membership* management
- *globally consistent object sequencing* and CAS (compare-and-swap) primitives

View File

@ -49,7 +49,7 @@ anixjtol6wdf my_web 1/1 nginx
```
To make the web server accessible from outside the swarm, you need to
[publish the port](#publish-ports-externally-to-the-swarm) where the swarm
listens for web requests.
You can include a command to run inside containers after the image:
@ -146,8 +146,8 @@ Swarm mode lets you network services in a couple of ways:
#### Publish ports externally to the swarm
You publish service ports externally to the swarm using the
`--publish <TARGET-PORT>:<SERVICE-PORT>` flag. When you publish a service port, the swarm
makes the service accessible at the target port on every node regardless if
there is a task for the service running on the node.
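As a sketch, the following publishes port 80 of a hypothetical `nginx` service on port 8080 of every swarm node:

```shell
# 8080 is the target port reachable on every node;
# 80 is the service port inside the containers.
docker service create --name my-web --publish 8080:80 nginx
```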

View File

@ -37,7 +37,7 @@ as follows:
* designates the current node as a leader manager node for the swarm.
* names the node with the machine hostname.
* configures the manager to listen on an active network interface on port 2377.
* sets the current node to `Active` availability, meaning it can receive tasks
from the scheduler.
* starts an internal distributed data store for Engines participating in the
swarm to maintain a consistent view of the swarm and all services running on it.
@ -84,7 +84,7 @@ reach the first manager node is not the same address the manager sees as its
own. For instance, in a cloud setup that spans different regions, hosts have
both internal addresses for access within the region and external addresses that
you use for access from outside that region. In this case, specify the external
address with `--advertise-addr` so that the node can propagate that information
to other nodes that subsequently connect to it.
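For instance, if the manager's externally reachable address were the hypothetical `203.0.113.10`, the first node would be initialized as:

```shell
# Advertise the external address so nodes that join later
# connect to it on the default swarm port 2377.
docker swarm init --advertise-addr 203.0.113.10:2377
```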
Refer to the `docker swarm init` [CLI reference](../reference/commandline/swarm_init.md)

View File

@ -10,7 +10,7 @@ to add worker nodes.
1. Open a terminal and ssh into the machine where you want to run a worker node.
This tutorial uses the name `worker1`.
2. Run the command produced by the `docker swarm init` output from the
[Create a swarm](create-swarm.md) tutorial step to create a worker node joined to the existing swarm:
```bash
@ -37,7 +37,7 @@ This tutorial uses the name `worker1`.
3. Open a terminal and ssh into the machine where you want to run a second
worker node. This tutorial uses the name `worker2`.
4. Run the command produced by the `docker swarm init` output from the
[Create a swarm](create-swarm.md) tutorial step to create a second worker node
joined to the existing swarm:
@ -49,7 +49,7 @@ joined to the existing swarm:
This node joined a swarm as a worker.
```
5. Open a terminal and ssh into the machine where the manager node runs and run
the `docker node ls` command to see the worker nodes:
```bash
@ -68,4 +68,4 @@ the `docker node ls` command to see the worker nodes:
## What's next?
Now your swarm consists of a manager and two worker nodes. In the next step of
the tutorial, you [deploy a service](deploy-service.md) to the swarm.

View File

@ -16,17 +16,17 @@ machines.
$ docker-machine ssh manager1
```
2. Run the following command to create a new swarm:
```bash
docker swarm init --advertise-addr <MANAGER-IP>
```
>**Note:** If you are using Docker for Mac or Docker for Windows to test
single-node swarm, simply run `docker swarm init` with no arguments. There is no
need to specify `--advertise-addr` in this case. To learn more, see the topic
on how to [Use Docker for Mac or Docker for
Windows](index.md#use-docker-for-mac-or-docker-for-windows) with Swarm.
In the tutorial, the following command creates a swarm on the `manager1`
machine:
@ -52,7 +52,7 @@ machines.
join as managers or workers depending on the value for the `--token`
flag.
2. Run `docker info` to view the current state of the swarm:
```bash
$ docker info
@ -70,7 +70,7 @@ machines.
...snip...
```
3. Run the `docker node ls` command to view information about nodes:
```bash
$ docker node ls

View File

@ -11,7 +11,7 @@ you can delete the service from the swarm.
run your manager node. For example, the tutorial uses a machine named
`manager1`.
2. Run `docker service rm helloworld` to remove the `helloworld` service.
```
$ docker service rm helloworld
@ -19,7 +19,7 @@ run your manager node. For example, the tutorial uses a machine named
helloworld
```
3. Run `docker service inspect <SERVICE-ID>` to verify that the swarm manager
removed the service. The CLI returns a message that the service is not found:
```

View File

@ -25,7 +25,7 @@ example, the tutorial uses a machine named `manager1`.
* The arguments `alpine ping docker.com` define the service as an Alpine
Linux container that executes the command `ping docker.com`.
3. Run `docker service ls` to see the list of running services:
```
$ docker service ls

View File

@ -17,7 +17,7 @@ node and launches replica tasks on a node with `ACTIVE` availability.
run your manager node. For example, the tutorial uses a machine named
`manager1`.
2. Verify that all your nodes are actively available.
```bash
$ docker node ls
@ -28,7 +28,7 @@ run your manager node. For example, the tutorial uses a machine named
e216jshn25ckzbvmwlnh5jr3g * manager1 Ready Active Leader
```
3. If you aren't still running the `redis` service from the [rolling
update](rolling-update.md) tutorial, start it now:
```bash
@ -37,22 +37,22 @@ update](rolling-update.md) tutorial, start it now:
c5uo6kdmzpon37mgj9mwglcfw
```
4. Run `docker service ps redis` to see how the swarm manager assigned the
tasks to different nodes:
```bash
$ docker service ps redis
NAME IMAGE NODE DESIRED STATE CURRENT STATE
redis.1.7q92v0nr1hcgts2amcjyqg3pq redis:3.0.6 manager1 Running Running 26 seconds
redis.2.7h2l8h3q3wqy5f66hlv9ddmi6 redis:3.0.6 worker1 Running Running 26 seconds
redis.3.9bg7cezvedmkgg6c8yzvbhwsd redis:3.0.6 worker2 Running Running 26 seconds
```
In this case the swarm manager distributed one task to each node. You may
see the tasks distributed differently among the nodes in your environment.
5. Run `docker node update --availability drain <NODE-ID>` to drain a node that
had a task assigned to it:
```bash
@ -61,7 +61,7 @@ had a task assigned to it:
worker1
```
6. Inspect the node to check its availability:
```bash
$ docker node inspect --pretty worker1
@ -76,24 +76,24 @@ had a task assigned to it:
The drained node shows `Drain` for `AVAILABILITY`.
7. Run `docker service ps redis` to see how the swarm manager updated the
task assignments for the `redis` service:
```bash
$ docker service ps redis
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
redis.1.7q92v0nr1hcgts2amcjyqg3pq redis:3.0.6 manager1 Running Running 4 minutes
redis.2.b4hovzed7id8irg1to42egue8 redis:3.0.6 worker2 Running Running About a minute
\_ redis.2.7h2l8h3q3wqy5f66hlv9ddmi6 redis:3.0.6 worker1 Shutdown Shutdown 2 minutes ago
redis.3.9bg7cezvedmkgg6c8yzvbhwsd redis:3.0.6 worker2 Running Running 4 minutes
```
The swarm manager maintains the desired state by ending the task on a node
with `Drain` availability and creating a new task on a node with `Active`
availability.
8. Run `docker node update --availability active <NODE-ID>` to return the
drained node to an active state:
```bash
@ -102,22 +102,22 @@ drained node to an active state:
worker1
```
9. Inspect the node to see the updated state:
```bash
$ docker node inspect --pretty worker1
ID: 38ciaotwjuritcdtn9npbnkuz
Hostname: worker1
Status:
 State: Ready
 Availability: Active
...snip...
```
When you set the node back to `Active` availability, it can receive new tasks:
* during a service update to scale up
* during a rolling update
* when you set another node to `Drain` availability
* when a task fails on another active node

View File

@ -25,10 +25,10 @@ If you are brand new to Docker, see [About Docker Engine](../../index.md).
To run this tutorial, you need the following:
* [three networked host machines](#three-networked-host-machines)
* [Docker Engine 1.12 or later installed](#docker-engine-1-12-or-newer)
* [the IP address of the manager machine](#the-ip-address-of-the-manager-machine)
* [open ports between the hosts](#open-ports-between-the-hosts)
### Three networked host machines
@ -51,9 +51,9 @@ Install Docker Engine and verify that the Docker Engine daemon is running on
each of the machines. You can get the latest version of Docker Engine as
follows:
* [install Docker Engine on Linux machines](#install-docker-engine-on-linux-machines)
* [use Docker for Mac or Docker for Windows](#use-docker-for-mac-or-docker-for-windows)
#### Install Docker Engine on Linux machines

View File

@ -11,7 +11,7 @@ the Docker CLI to see details about the service running in the swarm.
run your manager node. For example, the tutorial uses a machine named
`manager1`.
2. Run `docker service inspect --pretty <SERVICE-ID>` to display the details
about a service in an easily readable format.
To see the details on the `helloworld` service:
@ -21,7 +21,7 @@ about a service in an easily readable format.
ID: 9uk4639qpg7npwf3fn2aasksr
Name: helloworld
Service Mode: REPLICATED
Replicas: 1
Placement:
UpdateConfig:
@ -29,12 +29,14 @@ about a service in an easily readable format.
ContainerSpec:
Image: alpine
Args: ping docker.com
Resources:
Endpoint Mode: vip
```
>**Tip**: To return the service details in json format, run the same command
without the `--pretty` flag.
```
$ docker service inspect helloworld
[
{
@ -83,14 +85,14 @@ about a service in an easily readable format.
]
```
4. Run `docker service ps <SERVICE-ID>` to see which nodes are running the
service:
```
$ docker service ps helloworld
NAME IMAGE NODE DESIRED STATE LAST STATE
helloworld.1.8p1vev3fq5zm0mi8g0as41w35 alpine worker2 Running Running 3 minutes
```
In this case, the one instance of the `helloworld` service is running on the
@ -101,7 +103,7 @@ service:
task so you can see if tasks are running according to the service
definition.
4. Run `docker ps` on the node where the task is running to see details about
the container for the task.
>**Tip**: If `helloworld` is running on a node other than your manager node,
@ -117,4 +119,4 @@ the container for the task.
## What's next?
Next, you can [change the scale](scale-service.md) for the service running in
the swarm.

View File

@ -13,7 +13,7 @@ Redis 3.0.7 container image using rolling updates.
run your manager node. For example, the tutorial uses a machine named
`manager1`.
2. Deploy Redis 3.0.6 to the swarm and configure the swarm with a 10 second
update delay:
```bash
@ -44,14 +44,14 @@ update delay:
`--update-failure-action` flag for `docker service create` or
`docker service update`.
3. Inspect the `redis` service:
```bash
$ docker service inspect --pretty redis
ID: 0u6a4s31ybk7yw2wyvtikmu50
Name: redis
Service Mode: Replicated
Replicas: 3
Placement:
Strategy: Spread
@ -61,9 +61,10 @@ update delay:
ContainerSpec:
Image: redis:3.0.6
Resources:
Endpoint Mode: vip
```
4. Now you can update the container image for `redis`. The swarm manager
applies the update to nodes according to the `UpdateConfig` policy:
```bash
@ -81,7 +82,7 @@ applies the update to nodes according to the `UpdateConfig` policy:
* If, at any time during the update, a task returns `FAILED`, pause the
update.
5. Run `docker service inspect --pretty redis` to see the new image in the
desired state:
```bash
@ -89,7 +90,7 @@ desired state:
ID: 0u6a4s31ybk7yw2wyvtikmu50
Name: redis
Service Mode: Replicated
Replicas: 3
Placement:
Strategy: Spread
@ -99,6 +100,7 @@ desired state:
ContainerSpec:
Image: redis:3.0.7
Resources:
Endpoint Mode: vip
```
The output of `service inspect` shows if your update paused due to failure:
@ -125,22 +127,22 @@ desired state:
To avoid repeating certain update failures, you may need to reconfigure the
service by passing flags to `docker service update`.
6. Run `docker service ps <SERVICE-ID>` to watch the rolling update:
```bash
$ docker service ps redis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
dos1zffgeofhagnve8w864fco redis.1 redis:3.0.7 worker1 Running Running 37 seconds
88rdo6pa52ki8oqx6dogf04fh \_ redis.1 redis:3.0.6 worker2 Shutdown Shutdown 56 seconds ago
9l3i4j85517skba5o7tn5m8g0 redis.2 redis:3.0.7 worker2 Running Running About a minute
66k185wilg8ele7ntu8f6nj6i \_ redis.2 redis:3.0.6 worker1 Shutdown Shutdown 2 minutes ago
egiuiqpzrdbxks3wxgn8qib1g redis.3 redis:3.0.7 worker1 Running Running 48 seconds
ctzktfddb2tepkr45qcmqln04 \_ redis.3 redis:3.0.6 mmanager1 Shutdown Shutdown 2 minutes ago
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
redis.1.dos1zffgeofhagnve8w864fco redis:3.0.7 worker1 Running Running 37 seconds
\_ redis.1.88rdo6pa52ki8oqx6dogf04fh redis:3.0.6 worker2 Shutdown Shutdown 56 seconds ago
redis.2.9l3i4j85517skba5o7tn5m8g0 redis:3.0.7 worker2 Running Running About a minute
\_ redis.2.66k185wilg8ele7ntu8f6nj6i redis:3.0.6 worker1 Shutdown Shutdown 2 minutes ago
redis.3.egiuiqpzrdbxks3wxgn8qib1g redis:3.0.7 worker1 Running Running 48 seconds
\_ redis.3.ctzktfddb2tepkr45qcmqln04 redis:3.0.6 manager1 Shutdown Shutdown 2 minutes ago
```
Before Swarm updates all of the tasks, you can see that some are running
`redis:3.0.6` while others are running `redis:3.0.7`. The output above shows
the state once the rolling updates are done.
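The rolling-update behavior above is driven by flags you set at `docker service create` or `docker service update` time. A minimal dry-run sketch that assembles the command and prints it rather than applying it (the flag values are illustrative, not recommendations):

```shell
# Dry-run sketch: build a service-update command with explicit
# update-policy flags, then print it instead of executing it.
UPDATE_FLAGS="--update-delay 10s --update-parallelism 2 --update-failure-action pause"
CMD="docker service update $UPDATE_FLAGS --image redis:3.0.7 redis"
echo "$CMD"   # print only; run the command itself against a manager node to apply
```

Remove the `echo` and run the command directly against a swarm manager to apply the update.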
Next, learn about how to [drain a node](drain-node.md) in the swarm.
Next, learn about how to [drain a node](drain-node.md) in the swarm.

View File

@@ -12,7 +12,7 @@ the swarm.
run your manager node. For example, the tutorial uses a machine named
`manager1`.
2. Run the following command to change the desired state of the
2. Run the following command to change the desired state of the
service running in the swarm:
```bash
@@ -27,24 +24,24 @@ service running in the swarm:
helloworld scaled to 5
```
3. Run `docker service ps <SERVICE-ID>` to see the updated task list:
3. Run `docker service ps <SERVICE-ID>` to see the updated task list:
```
$ docker service ps helloworld
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
8p1vev3fq5zm0mi8g0as41w35 helloworld.1 helloworld alpine Running 7 minutes Running worker2
c7a7tcdq5s0uk3qr88mf8xco6 helloworld.2 helloworld alpine Running 24 seconds Running worker1
6crl09vdcalvtfehfh69ogfb1 helloworld.3 helloworld alpine Running 24 seconds Running worker1
auky6trawmdlcne8ad8phb0f1 helloworld.4 helloworld alpine Running 24 seconds Accepted manager1
ba19kca06l18zujfwxyc5lkyn helloworld.5 helloworld alpine Running 24 seconds Running worker2
NAME IMAGE NODE DESIRED STATE CURRENT STATE
helloworld.1.8p1vev3fq5zm0mi8g0as41w35 alpine worker2 Running Running 7 minutes
helloworld.2.c7a7tcdq5s0uk3qr88mf8xco6 alpine worker1 Running Running 24 seconds
helloworld.3.6crl09vdcalvtfehfh69ogfb1 alpine worker1 Running Running 24 seconds
helloworld.4.auky6trawmdlcne8ad8phb0f1 alpine manager1 Running Running 24 seconds
helloworld.5.ba19kca06l18zujfwxyc5lkyn alpine worker2 Running Running 24 seconds
```
You can see that swarm has created 4 new tasks to scale to a total of 5
running instances of Alpine Linux. The tasks are distributed between the
three nodes of the swarm. One is running on `manager1`.
4. Run `docker ps` to see the containers running on the node where you're
4. Run `docker ps` to see the containers running on the node where you're
connected. The following example shows the tasks running on `manager1`:
```
@@ -60,4 +60,4 @@ connected. The following example shows the tasks running on `manager1`:
## What's next?
At this point in the tutorial, you're finished with the `helloworld` service.
The next step shows how to [delete the service](delete-service.md).
The next step shows how to [delete the service](delete-service.md).

View File

@@ -256,7 +256,6 @@ building your own Sinatra image for your fictitious development team.
# This is a comment
FROM ubuntu:14.04
MAINTAINER Kate Smith <ksmith@example.com>
RUN apt-get update && apt-get install -y ruby ruby-dev
RUN gem install sinatra
@@ -268,7 +267,7 @@ is capitalized.
> **Note:** You use `#` to indicate a comment
The first instruction `FROM` tells Docker what the source of your image is, in
this case you're basing our new image on an Ubuntu 14.04 image. The instruction uses the `MAINTAINER` instruction to specify who maintains the new image.
this case you're basing your new image on an Ubuntu 14.04 image.
Lastly, you've specified two `RUN` instructions. A `RUN` instruction executes
a command inside the image, for example installing a package. Here you're
@@ -285,10 +284,7 @@ Now let's take our `Dockerfile` and use the `docker build` command to build an i
Sending build context to Docker daemon
Step 1 : FROM ubuntu:14.04
---> e54ca5efa2e9
Step 2 : MAINTAINER Kate Smith <ksmith@example.com>
---> Using cache
---> 851baf55332b
Step 3 : RUN apt-get update && apt-get install -y ruby ruby-dev
Step 2 : RUN apt-get update && apt-get install -y ruby ruby-dev
---> Running in 3a2558904e9b
Selecting previously unselected package libasan0:amd64.
(Reading database ... 11518 files and directories currently installed.)
@@ -423,7 +419,7 @@ Now let's take our `Dockerfile` and use the `docker build` command to build an i
Running hooks in /etc/ca-certificates/update.d....done.
---> c55c31703134
Removing intermediate container 3a2558904e9b
Step 4 : RUN gem install sinatra
Step 3 : RUN gem install sinatra
---> Running in 6b81cb6313e5
unable to convert "\xC3" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping
unable to convert "\xC3" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping
@@ -464,7 +460,7 @@ step-by-step. You can see that each step creates a new container, runs
the instruction inside that container and then commits that change -
just like the `docker commit` work flow you saw earlier. When all the
instructions have executed you're left with the `97feabe5d2ed` image
(also helpfully tagged as `ouruser/sinatra:v2`) and all intermediate
(also helpfully tagged as `ouruser/sinatra:v2`) and all intermediate
containers will get removed to clean things up.
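The build loop described here can be sketched as the manual workflow it automates: run one instruction in a container, commit the result, repeat. A dry-run sketch (the image tag and placeholder container ID are hypothetical) that prints the equivalent commands:

```shell
# Dry-run sketch of the run-then-commit cycle that `docker build`
# performs for each instruction. <container-id> is a placeholder.
STEP="docker run ubuntu:14.04 apt-get update"
COMMIT="docker commit <container-id> ouruser/sinatra:step1"
printf '%s\n%s\n' "$STEP" "$COMMIT"
```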
> **Note:**

View File

@@ -182,4 +182,4 @@ webhooks](/docker-hub/repos/#webhooks)
## Next steps
Go and use Docker!
Go and use Docker!

View File

@@ -28,7 +28,7 @@ containers that bypasses the [*Union File System*](../reference/glossary.md#unio
- Volumes are initialized when a container is created. If the container's
base image contains data at the specified mount point, that existing data is
copied into the new volume upon volume initialization. (Note that this does
not apply when [mounting a host directory](dockervolumes.md#mount-a-host-directory-as-a-data-volume).)
not apply when [mounting a host directory](#mount-a-host-directory-as-a-data-volume).)
- Data volumes can be shared and reused among containers.
- Changes to a data volume are made directly.
- Changes to a data volume will not be included when you update an image.
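The sharing behavior in the list above can be sketched as a dry run (volume and container names are hypothetical); the commands are printed rather than executed:

```shell
# Dry-run sketch: one named volume written by one container and read by
# another. Names are hypothetical; this prints the commands only.
VOL=my-data
WRITER="docker run --rm -v $VOL:/data alpine sh -c 'echo hello > /data/greeting'"
READER="docker run --rm -v $VOL:/data alpine cat /data/greeting"
printf '%s\n%s\n' "$WRITER" "$READER"
```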

View File

@@ -13,4 +13,4 @@ title: Engine tutorials
* [Build your own images](dockerimages.md)
* [Network containers](networkingcontainers.md)
* [Manage data in containers](dockervolumes.md)
* [Store images on Docker Hub](dockerrepos.md)
* [Store images on Docker Hub](dockerrepos.md)

View File

@@ -78,7 +78,8 @@ $ docker network inspect bridge
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "9001"
}
},
"Labels": {}
}
]
```
@@ -120,12 +121,13 @@ If you inspect the network, you'll find it has nothing in it.
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1/16"
"Gateway": "172.18.0.1"
}
]
},
"Containers": {},
"Options": {}
"Options": {},
"Labels": {}
}
]
@@ -142,9 +144,7 @@ Launch a container running a PostgreSQL database and pass it the `--net=my-bridg
If you inspect your `my-bridge-network` you'll see it has a container attached.
You can also inspect your container to see where it is connected:
{% raw %}
$ docker inspect --format='{{json .NetworkSettings.Networks}}' db
{% endraw %}
{"my-bridge-network":{"NetworkID":"7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99",
"EndpointID":"508b170d56b2ac9e4ef86694b0a76a22dd3df1983404f7321da5649645bf7043","Gateway":"172.18.0.1","IPAddress":"172.18.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}
@@ -155,18 +155,14 @@ Now, go ahead and start your by now familiar web application. This time don't sp
Which network is your `web` application running under? Inspect the application and you'll find it is running in the default `bridge` network.
{% raw %}
$ docker inspect --format='{{json .NetworkSettings.Networks}}' web
{% endraw %}
{"bridge":{"NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
"EndpointID":"508b170d56b2ac9e4ef86694b0a76a22dd3df1983404f7321da5649645bf7043","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}
Then, get the IP address of your `web` container:
{% raw %}
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
{% endraw %}
172.17.0.2
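If you only have the full `docker inspect` JSON rather than a `--format` template, the same address can be extracted with standard tools. A sketch operating on a literal, abbreviated copy of the output shown above:

```shell
# Extract the IPAddress field from inspect-style JSON with sed.
# Uses a literal, abbreviated copy of the output shown above.
JSON='{"bridge":{"NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16}}'
IP=$(printf '%s' "$JSON" | sed 's/.*"IPAddress":"\([^"]*\)".*/\1/')
echo "$IP"   # 172.17.0.2
```

For anything beyond a quick check, `--format` (as above) or a JSON parser such as `jq` is the safer choice.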

View File

@@ -121,7 +121,7 @@ Apache web server and your web application installed. You can build or update
images from scratch or download and use images created by others. An image may be
based on, or may extend, one or more other images. A Docker image is described in
a text file called a _Dockerfile_, which has a simple, well-defined syntax. For more
details about images, see [How does a Docker image work?](understanding-docker.md#how-does-a-docker-image-work).
details about images, see [How does a Docker image work?](#how-does-a-docker-image-work).
Docker images are the **build** component of Docker.
@@ -132,7 +132,7 @@ a container, you can provide configuration metadata such as networking informati
or environment variables. Each container is an isolated and secure application
platform, but can be given access to resources running in a different host or
container, as well as persistent storage or databases. For more details about
containers, see [How does a container work?](understanding-docker.md#how-does-a-container-work).
containers, see [How does a container work?](#how-does-a-container-work).
Docker containers are the **run** component of Docker.
@@ -140,7 +140,7 @@ Docker containers are the **run** component of Docker.
A Docker registry is a library of images. A registry can be public or private,
and can be on the same server as the Docker daemon or Docker client, or on a
totally separate server. For more details about registries, see
[How does a Docker registry work?](understanding-docker.md#how-does-a-docker-registry-work)
[How does a Docker registry work?](#how-does-a-docker-registry-work)
Docker registries are the **distribution** component of Docker.

View File

@@ -527,4 +527,4 @@ These Official Repositories have exemplary `Dockerfile`s:
* [More about Base Images](baseimages.md)
* [More about Automated Builds](/docker-hub/builds/)
* [Guidelines for Creating Official
Repositories](/docker-hub/official_repos/)
Repositories](/docker-hub/official_repos/)

View File

@@ -6,4 +6,4 @@ title: Work with images
* [Create a base image](baseimages.md)
* [Best practices for writing Dockerfiles](dockerfile_best-practices.md)
* [Image management](image_management.md)
* [Image management](image_management.md)

View File

@@ -52,4 +52,4 @@ This guide helps users learn how to use Docker Engine.
## Misc
- [Apply custom metadata](labels-custom-metadata.md)
- [Apply custom metadata](labels-custom-metadata.md)

View File

@@ -131,4 +131,4 @@ Go to [Docker Swarm user guide](/swarm/).
* Docker on IRC: irc.freenode.net and channel #docker
* [Docker on Twitter](https://twitter.com/docker)
* Get [Docker help](https://stackoverflow.com/search?q=docker) on
StackOverflow
StackOverflow

View File

@@ -106,4 +106,4 @@ Labels on swarm nodes and services can be updated dynamically.
- [Adding labels when creating a swarm service](../reference/commandline/service_create.md#set-metadata-on-a-service-l-label)
- [Updating a swarm service's labels](../reference/commandline/service_update.md)
- [Inspecting a swarm service's labels](../reference/commandline/service_inspect.md)
- [Filtering swarm services by label](../reference/commandline/service_ls.md#filtering)
- [Filtering swarm services by label](../reference/commandline/service_ls.md#filtering)

View File

@@ -24,14 +24,94 @@ the files alone and use the following Docker options instead.
Various container options that affect container domain name services.
| Options | Description |
| ------- | ----------- |
| `--name=CONTAINER-NAME` | Container name configured using `--name` is used to discover a container within an user-defined docker network. The embedded DNS server maintains the mapping between the container name and its IP address (on the network the container is connected to). |
| `--network-alias=ALIAS` | In addition to `--name` as described above, a container is discovered by one or more of its configured `--network-alias` (or `--alias` in docker network connect command) within the user-defined network. The embedded DNS server maintains the mapping between all of the container aliases and its IP address on a specific user-defined network. A container can have different aliases in different networks by using the `--alias` option in docker network connect command. |
| `--link=CONTAINER_NAME:ALIAS` | Using this option as you run a container gives the embedded DNS an extra entry named ALIAS that points to the IP address of the container identified by CONTAINER_NAME. When using `--link` the embedded DNS will guarantee that localized lookup result only on that container where the `--link` is used. This lets processes inside the new container connect to container without having to know its name or IP. |
| `--dns=[IP_ADDRESS...]` | The IP addresses passed via the `--dns` option is used by the embedded DNS server to forward the DNS query if embedded DNS server is unable to resolve a name resolution request from the containers. These `--dns` IP addresses are managed by the embedded DNS server and will not be updated in the container's `/etc/resolv.conf` file.|
| `--dns-search=DOMAIN...` | Sets the domain names that are searched when a bare unqualified hostname isused inside of the container. These `--dns-search` options are managed by the embedded DNS server and will not be updated in the container's `/etc/resolv.conf` file. When a container process attempts to access host and the search domain `example.com` is set, for instance, the DNS logic will not only look up host but also `host.example.com`. |
| `--dns-opt=OPTION...` |Sets the options used by DNS resolvers. These options are managed by the embedded DNS server and will not be updated in the container's `/etc/resolv.conf` file. See documentation for resolv.conf for a list of valid options |
<table>
<tr>
<td>
<p>
<code>--name=CONTAINER-NAME</code>
</p>
</td>
<td>
<p>
Container name configured using <code>--name</code> is used to discover a container within
a user-defined docker network. The embedded DNS server maintains the mapping between
the container name and its IP address (on the network the container is connected to).
</p>
</td>
</tr>
<tr>
<td>
<p>
<code>--network-alias=ALIAS</code>
</p>
</td>
<td>
<p>
In addition to <code>--name</code> as described above, a container is discovered by one or more
of its configured <code>--network-alias</code> values (or <code>--alias</code> in the <code>docker network connect</code> command)
within the user-defined network. The embedded DNS server maintains the mapping between
all of a container's aliases and its IP address on a specific user-defined network.
A container can have different aliases in different networks by using the <code>--alias</code>
option in the <code>docker network connect</code> command.
</p>
</td>
</tr>
<tr>
<td>
<p>
<code>--link=CONTAINER_NAME:ALIAS</code>
</p>
</td>
<td>
<p>
Using this option as you <code>run</code> a container gives the embedded DNS
an extra entry named <code>ALIAS</code> that points to the IP address
of the container identified by <code>CONTAINER_NAME</code>. When you use <code>--link</code>,
the embedded DNS guarantees that the localized lookup result is used only in the
container where the <code>--link</code> appears. This lets processes inside the new container
connect to the linked container without having to know its name or IP.
</p>
</td>
</tr>
<tr>
<td><p>
<code>--dns=[IP_ADDRESS...]</code>
</p></td>
<td><p>
The IP addresses passed via the <code>--dns</code> option are used by the embedded DNS
server to forward a DNS query when the embedded DNS server is unable to resolve a name
resolution request from the containers.
These <code>--dns</code> IP addresses are managed by the embedded DNS server and
will not be updated in the container's <code>/etc/resolv.conf</code> file.
</p></td>
</tr>
<tr>
<td><p>
<code>--dns-search=DOMAIN...</code>
</p></td>
<td><p>
Sets the domain names that are searched when a bare unqualified hostname is
used inside the container. These <code>--dns-search</code> options are managed by the
embedded DNS server and will not be updated in the container's <code>/etc/resolv.conf</code> file.
When a container process attempts to access <code>host</code> and the search
domain <code>example.com</code> is set, for instance, the DNS logic will not only
look up <code>host</code> but also <code>host.example.com</code>.
</p>
</td>
</tr>
<tr>
<td><p>
<code>--dns-opt=OPTION...</code>
</p></td>
<td><p>
Sets the options used by DNS resolvers. These options are managed by the embedded
DNS server and will not be updated in the container's <code>/etc/resolv.conf</code> file.
</p>
<p>
See the documentation for <code>resolv.conf</code> for a list of valid options.
</p></td>
</tr>
</table>
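Taken together, the three `--dns*` flags correspond to the familiar `resolv.conf` directives. A local sketch that renders the equivalent file contents (the values are illustrative; nothing is written to a real `resolv.conf`, and on user-defined networks the embedded DNS server holds these values instead):

```shell
# Sketch: the resolv.conf-style directives that --dns, --dns-search and
# --dns-opt correspond to. Generates text only; illustrative values.
DNS=8.8.8.8
SEARCH=example.com
OPT=ndots:2
RESOLV=$(printf 'nameserver %s\nsearch %s\noptions %s' "$DNS" "$SEARCH" "$OPT")
printf '%s\n' "$RESOLV"
```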
In the absence of the `--dns=IP_ADDRESS...`, `--dns-search=DOMAIN...`, or
`--dns-opt=OPTION...` options, Docker uses the `/etc/resolv.conf` of the
@@ -49,4 +129,4 @@ IPv6 Google DNS nameservers will also be added (2001:4860:4860::8888 and
> **Note**: If you need access to a host's localhost resolver, you must modify
> your DNS service on the host to listen on a non-localhost address that is
> reachable from within the container.
> reachable from within the container.

View File

@@ -96,4 +96,4 @@ address: this alternative is preferred for performance reasons.
- [Understand Docker container networks](../index.md)
- [Work with network commands](../work-with-networks.md)
- [Legacy container links](dockerlinks.md)
- [Legacy container links](dockerlinks.md)

Some files were not shown because too many files have changed in this diff Show More