diff --git a/ci-cd/github-actions.md b/ci-cd/github-actions.md index a19397ad54..f2b7677f33 100644 --- a/ci-cd/github-actions.md +++ b/ci-cd/github-actions.md @@ -185,9 +185,9 @@ on: This ensures that the main CI will only trigger if we tag our commits with `vn.n.n`. Let’s test this. For example, run the following command: -```bash -git tag -a v1.0.2 -git push origin v1.0.2 +```console +$ git tag -a v1.0.2 +$ git push origin v1.0.2 ``` Now, go to GitHub and check your Actions diff --git a/compose/cli-command.md b/compose/cli-command.md index 3bb962df60..097e2a0cb5 100644 --- a/compose/cli-command.md +++ b/compose/cli-command.md @@ -60,8 +60,11 @@ $ docker-compose disable-v2 ### Install on Linux -You can install the new Compose CLI, including Compose V2, using the following install script: +You can install Compose V2 by downloading the appropriate binary for your system +from the [project release page](https://github.com/docker/compose-cli/releases){:target="_blank" rel="noopener" class="_"} and copying it into `$HOME/.docker/cli-plugins` as `docker-compose`. ```console -$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh -``` \ No newline at end of file +$ mkdir -p ~/.docker/cli-plugins/ +$ curl -SL https://github.com/docker/compose-cli/releases/download/v2.0.0-beta.6/docker-compose-linux-amd64 -o ~/.docker/cli-plugins/docker-compose +$ chmod +x ~/.docker/cli-plugins/docker-compose +``` diff --git a/compose/completion.md b/compose/completion.md index 447ecba761..f87867f013 100644 --- a/compose/completion.md +++ b/compose/completion.md @@ -44,7 +44,7 @@ available. 3. Add the following to your `~/.bash_profile`: - ```shell + ```bash if [ -f $(brew --prefix)/etc/bash_completion ]; then . $(brew --prefix)/etc/bash_completion fi @@ -59,7 +59,7 @@ completion. 2. Add the following lines to `~/.bash_profile`: - ```shell + ```bash if [ -f /opt/local/etc/profile.d/bash_completion.sh ]; then .
/opt/local/etc/profile.d/bash_completion.sh fi diff --git a/compose/environment-variables.md b/compose/environment-variables.md index c1f1ff475c..4879c3a666 100644 --- a/compose/environment-variables.md +++ b/compose/environment-variables.md @@ -42,7 +42,7 @@ named `.env`. The `.env` file path is as follows: in `+v1.28` by limiting the filepath to the project directory. -```shell +```console $ cat .env TAG=v1.5 @@ -58,7 +58,7 @@ image `webapp:v1.5`. You can verify this with the [config command](reference/config.md), which prints your resolved application config to the terminal: -```shell +```console $ docker-compose config version: '3' @@ -72,7 +72,7 @@ Values in the shell take precedence over those specified in the `.env` file. If you set `TAG` to a different value in your shell, the substitution in `image` uses that instead: -```shell +```console $ export TAG=v2.0 $ docker-compose config @@ -90,13 +90,13 @@ By passing the file as an argument, you can store it anywhere and name it appropriately, for example, `.env.ci`, `.env.dev`, `.env.prod`. Passing the file path is done using the `--env-file` option: -```shell -docker-compose --env-file ./config/.env.dev up +```console +$ docker-compose --env-file ./config/.env.dev up ``` This file path is relative to the current working directory where the Docker Compose command is executed. 
-```shell +```console $ cat .env TAG=v1.5 @@ -113,7 +113,7 @@ services: The `.env` file is loaded by default: -```shell +```console $ docker-compose config version: '3' services: @@ -122,7 +122,7 @@ services: ``` Passing the `--env-file ` argument overrides the default file path: -```shell +```console $ docker-compose --env-file ./config/.env.dev config version: '3' services: @@ -186,14 +186,14 @@ web: Similar to `docker run -e`, you can set environment variables on a one-off container with `docker-compose run -e`: -```shell -docker-compose run -e DEBUG=1 web python console.py +```console +$ docker-compose run -e DEBUG=1 web python console.py ``` You can also pass a variable from the shell by not giving it a value: -```shell -docker-compose run -e DEBUG web python console.py +```console +$ docker-compose run -e DEBUG web python console.py ``` The value of the `DEBUG` variable in the container is taken from the value for @@ -211,7 +211,7 @@ priority used by Compose to choose which value to use: In the example below, we set the same environment variable on an Environment file, and the Compose file: -```shell +```console $ cat ./Docker/api/api.env NODE_ENV=test @@ -229,7 +229,7 @@ services: When you run the container, the environment variable defined in the Compose file takes precedence. -```shell +```console $ docker-compose exec api node > process.env.NODE_ENV diff --git a/compose/index.md b/compose/index.md index d23aa98bc2..638ce84ff7 100644 --- a/compose/index.md +++ b/compose/index.md @@ -158,7 +158,7 @@ is the automated test suite. Automated end-to-end testing requires an environment in which to run tests. Compose provides a convenient way to create and destroy isolated testing environments for your test suite. 
By defining the full environment in a [Compose file](compose-file/index.md), you can create and destroy these environments in just a few commands: -```bash +```console $ docker-compose up -d $ ./run_tests $ docker-compose down diff --git a/compose/install.md b/compose/install.md index 654d64114b..7e4df68924 100644 --- a/compose/install.md +++ b/compose/install.md @@ -125,8 +125,8 @@ also included below. 1. Run this command to download the current stable release of Docker Compose: - ```bash - sudo curl -L "https://github.com/docker/compose/releases/download/{{site.compose_version}}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose + ```console + $ sudo curl -L "https://github.com/docker/compose/releases/download/{{site.compose_version}}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose ``` > To install a different version of Compose, substitute `{{site.compose_version}}` @@ -137,8 +137,8 @@ also included below. 2. Apply executable permissions to the binary: - ```bash - sudo chmod +x /usr/local/bin/docker-compose + ```console + $ sudo chmod +x /usr/local/bin/docker-compose ``` > **Note**: If the command `docker-compose` fails after installation, check your path. @@ -146,8 +146,8 @@ also included below. For example: -```bash -sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose +```console +$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose ``` 3. Optionally, install [command completion](completion.md) for the @@ -155,7 +155,7 @@ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose 4. Test the installation. - ```bash + ```console $ docker-compose --version docker-compose version {{site.compose_version}}, build 1110ad01 ``` @@ -182,13 +182,13 @@ dependencies. See the [virtualenv tutorial](https://docs.python-guide.org/dev/virtualenvs/) to get started. 
-```bash -pip install docker-compose +```console +$ pip install docker-compose ``` If you are not using virtualenv, -```bash -sudo pip install docker-compose +```console +$ sudo pip install docker-compose ``` > pip version 6.0 or greater is required. @@ -198,9 +198,9 @@ sudo pip install docker-compose Compose can also be run inside a container, from a small bash script wrapper. To install compose as a container run this command: -```bash -sudo curl -L --fail https://github.com/docker/compose/releases/download/{{site.compose_version}}/run.sh -o /usr/local/bin/docker-compose -sudo chmod +x /usr/local/bin/docker-compose +```console +$ sudo curl -L --fail https://github.com/docker/compose/releases/download/{{site.compose_version}}/run.sh -o /usr/local/bin/docker-compose +$ sudo chmod +x /usr/local/bin/docker-compose ``` @@ -239,29 +239,29 @@ your existing containers (for example, because they have data volumes you want to preserve), you can use Compose 1.5.x to migrate them with the following command: -```bash -docker-compose migrate-to-labels +```console +$ docker-compose migrate-to-labels ``` Alternatively, if you're not worried about keeping them, you can remove them. Compose just creates new ones. -```bash -docker container rm -f -v myapp_web_1 myapp_db_1 ... +```console +$ docker container rm -f -v myapp_web_1 myapp_db_1 ... ``` ## Uninstallation To uninstall Docker Compose if you installed using `curl`: -```bash -sudo rm /usr/local/bin/docker-compose +```console +$ sudo rm /usr/local/bin/docker-compose ``` To uninstall Docker Compose if you installed using `pip`: -```bash -pip uninstall docker-compose +```console +$ pip uninstall docker-compose ``` > Got a "Permission denied" error? diff --git a/compose/reference/ps.md b/compose/reference/ps.md index 0dcca28466..68b28d2b1e 100644 --- a/compose/reference/ps.md +++ b/compose/reference/ps.md @@ -17,7 +17,7 @@ Options: Lists containers. 
-```bash +```console $ docker-compose ps Name Command State Ports --------------------------------------------------------------------------------------------- diff --git a/compose/reference/pull.md b/compose/reference/pull.md index 6e262a9769..7a67338cea 100644 --- a/compose/reference/pull.md +++ b/compose/reference/pull.md @@ -38,7 +38,7 @@ services: If you run `docker-compose pull ServiceName` in the same directory as the `docker-compose.yml` file that defines the service, Docker pulls the associated image. For example, to call the `postgres` image configured as the `db` service in our example, you would run `docker-compose pull db`. -```bash +```console $ docker-compose pull db Pulling db (postgres:latest)... latest: Pulling from library/postgres diff --git a/compose/reference/top.md b/compose/reference/top.md index 82b14b1880..587f9f1eb9 100644 --- a/compose/reference/top.md +++ b/compose/reference/top.md @@ -11,7 +11,7 @@ Usage: top [SERVICE...] Displays the running processes. -```bash +```console $ docker-compose top compose_service_a_1 PID USER TIME COMMAND diff --git a/config/containers/logging/awslogs.md b/config/containers/logging/awslogs.md index e1b358a84e..509c304e74 100644 --- a/config/containers/logging/awslogs.md +++ b/config/containers/logging/awslogs.md @@ -38,7 +38,7 @@ Restart Docker for the changes to take effect. You can set the logging driver for a specific container by using the `--log-driver` option to `docker run`: -```bash +```console $ docker run --log-driver=awslogs ... ``` @@ -65,7 +65,7 @@ the `awslogs-region` log option or the `AWS_REGION` environment variable to set the region. By default, if your Docker daemon is running on an EC2 instance and no region is set, the driver uses the instance's region. -```bash +```console $ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 ... ``` @@ -89,7 +89,7 @@ You must specify a log group for the `awslogs` logging driver.
You can specify the log group with the `awslogs-group` log option: -```bash +```console $ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup ... ``` @@ -112,7 +112,7 @@ Log driver returns an error by default if the log group does not exist. However, `awslogs-create-group` to `true` to automatically create the log group as needed. The `awslogs-create-group` option defaults to `false`. -```bash +```console $ docker run \ --log-driver=awslogs \ --log-opt awslogs-region=us-east-1 \ @@ -162,7 +162,7 @@ The format can be expressed as a `strftime` expression of `[%b %d, %Y %H:%M:%S]`, and the `awslogs-datetime-format` value can be set to that expression: -```bash +```console $ docker run \ --log-driver=awslogs \ --log-opt awslogs-region=us-east-1 \ @@ -237,7 +237,7 @@ INFO Another message was logged You can use the regular expression of `^INFO`: -```bash +```console $ docker run \ --log-driver=awslogs \ --log-opt awslogs-region=us-east-1 \ diff --git a/config/containers/logging/gcplogs.md b/config/containers/logging/gcplogs.md index c162553a13..81135a9b8b 100644 --- a/config/containers/logging/gcplogs.md +++ b/config/containers/logging/gcplogs.md @@ -80,7 +80,7 @@ logging message. Below is an example of the logging options required to log to the default logging destination which is discovered by querying the GCE metadata server. -```bash +```console $ docker run \ --log-driver=gcplogs \ --log-opt labels=location \ @@ -98,7 +98,7 @@ container. An example of the logging options for running outside of GCE (the daemon must be configured with GOOGLE_APPLICATION_CREDENTIALS): -```bash +```console $ docker run \ --log-driver=gcplogs \ --log-opt gcp-project=test-project diff --git a/config/containers/logging/gelf.md b/config/containers/logging/gelf.md index 768b3f0cde..3762fdba35 100644 --- a/config/containers/logging/gelf.md +++ b/config/containers/logging/gelf.md @@ -50,7 +50,7 @@ Restart Docker for the changes to take effect. 
You can set the logging driver for a specific container by setting the `--log-driver` flag when using `docker container create` or `docker run`: -```bash +```console $ docker run \ --log-driver gelf --log-opt gelf-address=udp://1.2.3.4:12201 \ alpine echo hello world @@ -82,7 +82,7 @@ The `gelf` logging driver supports the following options: This example configures the container to use the GELF server running at `192.168.0.42` on port `12201`. -```bash +```console $ docker run -dit \ --log-driver=gelf \ --log-opt gelf-address=udp://192.168.0.42:12201 \ diff --git a/config/containers/logging/journald.md b/config/containers/logging/journald.md index 563625bdd1..04a9f26002 100644 --- a/config/containers/logging/journald.md +++ b/config/containers/logging/journald.md @@ -45,7 +45,7 @@ Restart Docker for the changes to take effect. To configure the logging driver for a specific container, use the `--log-driver` flag on the `docker run` command. -```bash +```console $ docker run --log-driver=journald ... ``` @@ -68,7 +68,7 @@ message. Below is an example of the logging options required to log to journald. -```bash +```console $ docker run \ --log-driver=journald \ --log-opt labels=location \ @@ -96,21 +96,21 @@ Use the `journalctl` command to retrieve log messages. You can apply filter expressions to limit the retrieved messages to those associated with a specific container: -```bash +```console $ sudo journalctl CONTAINER_NAME=webserver ``` You can use additional filters to further limit the messages retrieved. The `-b` flag only retrieves messages generated since the last system boot: -```bash +```console $ sudo journalctl -b CONTAINER_NAME=webserver ``` The `-o` flag specifies the format for the retrieved log messages. Use `-o json` to return the log messages in JSON format. -```bash +```console $ sudo journalctl -o json CONTAINER_NAME=webserver ``` @@ -121,7 +121,7 @@ when retrieving log messages.
The reason for that is that `\r` is appended to the end of the line and `journalctl` doesn't strip it automatically unless `--all` is set: -```bash +```console $ sudo journalctl -b CONTAINER_NAME=webserver --all ``` diff --git a/config/containers/logging/json-file.md b/config/containers/logging/json-file.md index 9339d19a4a..a0cf9fa35f 100644 --- a/config/containers/logging/json-file.md +++ b/config/containers/logging/json-file.md @@ -58,7 +58,7 @@ Existing containers do not use the new logging configuration. You can set the logging driver for a specific container by using the `--log-driver` flag to `docker container create` or `docker run`: -```bash +```console $ docker run \ --log-driver json-file --log-opt max-size=10m \ alpine echo hello world @@ -84,6 +84,6 @@ The `json-file` logging driver supports the following logging options: This example starts an `alpine` container which can have a maximum of 3 log files no larger than 10 megabytes each. -```bash +```console $ docker run -it --log-opt max-size=10m --log-opt max-file=3 alpine ash ``` diff --git a/config/containers/logging/local.md b/config/containers/logging/local.md index cd1eb79ef7..2a0e7a6451 100644 --- a/config/containers/logging/local.md +++ b/config/containers/logging/local.md @@ -48,7 +48,7 @@ Restart Docker for the changes to take effect for newly created containers. Exis You can set the logging driver for a specific container by using the `--log-driver` flag to `docker container create` or `docker run`: -```bash +```console $ docker run \ --log-driver local --log-opt max-size=10m \ alpine echo hello world @@ -69,6 +69,6 @@ The `local` logging driver supports the following logging options: This example starts an `alpine` container which can have a maximum of 3 log files no larger than 10 megabytes each. 
-```bash +```console $ docker run -it --log-driver local --log-opt max-size=10m --log-opt max-file=3 alpine ash ``` diff --git a/config/containers/logging/log_tags.md b/config/containers/logging/log_tags.md index 6ab18836cf..d808848f0c 100644 --- a/config/containers/logging/log_tags.md +++ b/config/containers/logging/log_tags.md @@ -11,7 +11,7 @@ The `tag` log option specifies how to format a tag that identifies the container's log messages. By default, the system uses the first 12 characters of the container ID. To override this behavior, specify a `tag` option: -```bash +```console $ docker run --log-driver=fluentd --log-opt fluentd-address=myhost.local:24224 --log-opt tag="mailer" ``` diff --git a/config/containers/logging/logentries.md b/config/containers/logging/logentries.md index 6a640c7b45..7876b1c457 100644 --- a/config/containers/logging/logentries.md +++ b/config/containers/logging/logentries.md @@ -19,21 +19,21 @@ Some options are supported by specifying `--log-opt` as many times as needed: Configure the default logging driver by passing the `--log-driver` option to the Docker daemon: -```bash +```console $ dockerd --log-driver=logentries ``` To set the logging driver for a specific container, pass the `--log-driver` option to `docker run`: -```bash +```console $ docker run --log-driver=logentries ... 
``` Before using this logging driver, you need to create a new Log Set in the Logentries web interface and pass the token of that log set to Docker: -```bash +```console $ docker run --log-driver=logentries --log-opt logentries-token=abcd1234-12ab-34cd-5678-0123456789ab ``` @@ -45,7 +45,7 @@ Users can use the `--log-opt NAME=VALUE` flag to specify additional Logentries l You need to provide your log set token for the Logentries driver to work: -```bash +```console $ docker run --log-driver=logentries --log-opt logentries-token=abcd1234-12ab-34cd-5678-0123456789ab ``` @@ -53,6 +53,6 @@ $ docker run --log-driver=logentries --log-opt logentries-token=abcd1234-12ab-34 You can specify whether to send the log message wrapped in container data (default) or to send the raw log line: -```bash +```console $ docker run --log-driver=logentries --log-opt logentries-token=abcd1234-12ab-34cd-5678-0123456789ab --log-opt line-only=true ``` diff --git a/config/containers/logging/splunk.md b/config/containers/logging/splunk.md index 95fa989ffb..6ffba4f4cf 100644 --- a/config/containers/logging/splunk.md +++ b/config/containers/logging/splunk.md @@ -92,7 +92,7 @@ scheme. This is used for verification. The `SplunkServerDefaultCert` is automatically generated by Splunk certificates. {% raw %} -```bash +```console $ docker run \ --log-driver=splunk \ --log-opt splunk-token=176FCEBF-4CF5-4EDF-91BC-703796522D20 \ diff --git a/config/containers/logging/syslog.md b/config/containers/logging/syslog.md index 529fae95d6..dbe94c37f8 100644 --- a/config/containers/logging/syslog.md +++ b/config/containers/logging/syslog.md @@ -64,8 +64,8 @@ Restart Docker for the changes to take effect.
You can set the logging driver for a specific container by using the `--log-driver` flag to `docker container create` or `docker run`: -```bash -docker run \ +```console +$ docker run \ --log-driver syslog --log-opt syslog-address=udp://1.2.3.4:1111 \ alpine echo hello world ``` diff --git a/config/containers/resource_constraints.md b/config/containers/resource_constraints.md index dd6c3b8907..a3ec136555 100644 --- a/config/containers/resource_constraints.md +++ b/config/containers/resource_constraints.md @@ -188,12 +188,12 @@ the container's cgroup on the host machine. If you have 1 CPU, each of the following commands guarantees the container at most 50% of the CPU every second. -```bash -docker run -it --cpus=".5" ubuntu /bin/bash +```console +$ docker run -it --cpus=".5" ubuntu /bin/bash ``` Which is equivalent to manually specifying `--cpu-period` and `--cpu-quota`: -```bash +```console $ docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash ``` @@ -246,7 +246,7 @@ documentation or the `ulimit` command for information on appropriate values. The following example command sets each of these three flags on a `debian:jessie` container. -```bash +```console $ docker run -it \ --cpu-rt-runtime=950000 \ --ulimit rtprio=99 \ @@ -273,13 +273,13 @@ Verify that your GPU is running and accessible. Follow the instructions at https://nvidia.github.io/nvidia-container-runtime/ and then run this command: -```bash +```console $ apt-get install nvidia-container-runtime ``` Ensure the `nvidia-container-runtime-hook` is accessible from `$PATH`. -```bash +```console $ which nvidia-container-runtime-hook ``` @@ -290,7 +290,7 @@ Restart the Docker daemon. Include the `--gpus` flag when you start a container to access GPU resources. Specify how many GPUs to use.
For example: -```bash +```console $ docker run -it --rm --gpus all ubuntu nvidia-smi ``` @@ -316,13 +316,13 @@ Exposes all available GPUs and returns a result akin to the following: Use the `device` option to specify GPUs. For example: -```bash +```console $ docker run -it --rm --gpus device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a ubuntu nvidia-smi ``` Exposes that specific GPU. -```bash +```console $ docker run -it --rm --gpus '"device=0,2"' ubuntu nvidia-smi ``` @@ -337,7 +337,7 @@ Exposes the first and third GPUs. You can set capabilities manually. For example, on Ubuntu you can run the following: -```bash +```console $ docker run --gpus 'all,capabilities=utility' --rm ubuntu nvidia-smi ``` diff --git a/config/containers/runmetrics.md b/config/containers/runmetrics.md index e11181101f..3b86e54671 100644 --- a/config/containers/runmetrics.md +++ b/config/containers/runmetrics.md @@ -17,7 +17,7 @@ and network IO metrics. The following is a sample output from the `docker stats` command -```bash +```console $ docker stats redis1 redis2 CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O @@ -50,7 +50,7 @@ corresponding to existing containers. To figure out where your control groups are mounted, you can run: -```bash +```console $ grep cgroup /proc/mounts ``` @@ -279,7 +279,7 @@ an interface) can do some serious accounting. For instance, you can setup a rule to account for the outbound HTTP traffic on a web server: -```bash +```console $ iptables -I OUTPUT -p tcp --sport 80 ``` @@ -289,7 +289,7 @@ rule. Later, you can check the values of the counters, with: -```bash +```console $ iptables -nxvL OUTPUT ``` @@ -331,13 +331,13 @@ Containers can interact with their sub-containers, though. The exact format of the command is: -```bash +```console $ ip netns exec ``` For example: -```bash +```console $ ip netns exec mycontainer netstat -i ``` @@ -369,7 +369,7 @@ cgroup (and thus, in the container). Pick any one of the PIDs. 
Putting everything together, if the "short ID" of a container is held in the environment variable `$CID`, then you can do this: -```bash +```console $ TASKS=/sys/fs/cgroup/devices/docker/$CID*/tasks $ PID=$(head -n 1 $TASKS) $ mkdir -p /var/run/netns diff --git a/config/containers/start-containers-automatically.md b/config/containers/start-containers-automatically.md index bb1a9ac021..b73c66c7c8 100644 --- a/config/containers/start-containers-automatically.md +++ b/config/containers/start-containers-automatically.md @@ -34,19 +34,19 @@ any of the following: The following example starts a Redis container and configures it to always restart unless it is explicitly stopped or Docker is restarted. -```bash +```console $ docker run -d --restart unless-stopped redis ``` This command changes the restart policy for an already running container named `redis`. -```bash +```console $ docker update --restart unless-stopped redis ``` And this command will ensure all currently running containers will be restarted unless stopped. -```bash +```console $ docker update --restart unless-stopped $(docker ps -q) ``` diff --git a/config/daemon/index.md b/config/daemon/index.md index 9219acd563..46ae85e695 100644 --- a/config/daemon/index.md +++ b/config/daemon/index.md @@ -48,7 +48,7 @@ configuration. When you start Docker this way, it runs in the foreground and sends its logs directly to your terminal. -```bash +```console $ dockerd INFO[0000] +job init_networkdriver() @@ -98,8 +98,8 @@ This can be useful for troubleshooting problems. Here's an example of how to manually start the Docker daemon, using the same configurations as above: -```bash -dockerd --debug \ +```console +$ dockerd --debug \ --tls=true \ --tlscert=/var/docker/server.pem \ --tlskey=/var/docker/serverkey.pem \ @@ -274,7 +274,7 @@ Docker platform. 3. Send a `HUP` signal to the daemon to cause it to reload its configuration. On Linux hosts, use the following command. 
- ```bash + ```console $ sudo kill -SIGHUP $(pidof dockerd) ``` @@ -292,7 +292,7 @@ by sending a `SIGUSR1` signal to the daemon. - **Linux**: - ```bash + ```console $ sudo kill -SIGUSR1 $(pidof dockerd) ``` diff --git a/config/daemon/ipv6.md b/config/daemon/ipv6.md index 59841b84ca..350c58f0cc 100644 --- a/config/daemon/ipv6.md +++ b/config/daemon/ipv6.md @@ -27,7 +27,7 @@ either IPv4 or IPv6 (or both) with any container, service, or network. 2. Reload the Docker configuration file. - ```bash + ```console $ systemctl reload docker ``` diff --git a/config/daemon/prometheus.md b/config/daemon/prometheus.md index 76b6fc7205..1a8ea618cb 100644 --- a/config/daemon/prometheus.md +++ b/config/daemon/prometheus.md @@ -210,7 +210,7 @@ Next, start a single-replica Prometheus service using this configuration.
-```bash +```console $ docker service create --replicas 1 --name my-prometheus \ --mount type=bind,source=/tmp/prometheus.yml,destination=/etc/prometheus/prometheus.yml \ --publish published=9090,target=9090,protocol=tcp \ @@ -220,7 +220,7 @@ $ docker service create --replicas 1 --name my-prometheus \
-```bash +```console $ docker service create --replicas 1 --name my-prometheus \ --mount type=bind,source=/tmp/prometheus.yml,destination=/etc/prometheus/prometheus.yml \ --publish published=9090,target=9090,protocol=tcp \ @@ -263,7 +263,7 @@ To make the graph more interesting, create some network actions by starting a service with 10 tasks that just ping Docker non-stop (you can change the ping target to anything you like): -```bash +```console $ docker service create \ --replicas 10 \ --name ping_service \ @@ -278,7 +278,7 @@ your graph. When you are ready, stop and remove the `ping_service` service, so that you are not flooding a host with pings for no reason. -```bash +```console $ docker service remove ping_service ``` diff --git a/config/formatting.md b/config/formatting.md index 1106aef498..5436f0d8df 100644 --- a/config/formatting.md +++ b/config/formatting.md @@ -20,8 +20,8 @@ include examples of customizing the output format. > In a Posix shell, you can run the following with a single quote: > > {% raw %} -> ```bash -> docker inspect --format '{{join .Args " , "}}' +> ```console +> $ docker inspect --format '{{join .Args " , "}}' > ``` > {% endraw %} > @@ -29,8 +29,8 @@ include examples of customizing the output format. > escape the double quotes inside the params as follows: > > {% raw %} -> ```bash -> docker inspect --format '{{join .Args \" , \"}}' +> ```console +> $ docker inspect --format '{{join .Args \" , \"}}' > ``` > {% endraw %} > diff --git a/config/pruning.md b/config/pruning.md index 2589b5acbd..b59661415a 100644 --- a/config/pruning.md +++ b/config/pruning.md @@ -21,7 +21,7 @@ default, `docker image prune` only cleans up _dangling_ images. A dangling image is one that is not tagged and is not referenced by any container. To remove dangling images: -```bash +```console $ docker image prune WARNING! This will remove all dangling images. @@ -31,7 +31,7 @@ Are you sure you want to continue? 
[y/N] y To remove all images which are not used by existing containers, use the `-a` flag: -```bash +```console $ docker image prune -a WARNING! This will remove all images without at least one container associated to them. @@ -45,7 +45,7 @@ You can limit which images are pruned using filtering expressions with the `--filter` flag. For example, to only consider images created more than 24 hours ago: -```bash +```console $ docker image prune -a --filter "until=24h" ``` @@ -62,7 +62,7 @@ exist, especially on a development system! A stopped container's writable layers still take up disk space. To clean this up, you can use the `docker container prune` command. -```bash +```console $ docker container prune WARNING! This will remove all stopped containers. @@ -76,7 +76,7 @@ By default, all stopped containers are removed. You can limit the scope using the `--filter` flag. For instance, the following command only removes stopped containers older than 24 hours: -```bash +```console $ docker container prune --filter "until=24h" ``` @@ -90,7 +90,7 @@ Volumes can be used by one or more containers, and take up space on the Docker host. Volumes are never removed automatically, because to do so could destroy data. -```bash +```console $ docker volume prune WARNING! This will remove all volumes not used by at least one container. @@ -104,7 +104,7 @@ By default, all unused volumes are removed. You can limit the scope using the `--filter` flag. For instance, the following command only removes volumes which are not labelled with the `keep` label: -```bash +```console $ docker volume prune --filter "label!=keep" ``` @@ -119,7 +119,7 @@ rules, bridge network devices, and routing table entries. To clean these things up, you can use `docker network prune` to clean up networks which aren't used by any containers. -```bash +```console $ docker network prune WARNING! This will remove all networks not used by at least one container. 
@@ -133,7 +133,7 @@ By default, all unused networks are removed. You can limit the scope using the `--filter` flag. For instance, the following command only removes networks older than 24 hours: -```bash +```console $ docker network prune --filter "until=24h" ``` @@ -147,7 +147,7 @@ The `docker system prune` command is a shortcut that prunes images, containers, and networks. Volumes are not pruned by default, and you must specify the `--volumes` flag for `docker system prune` to prune volumes. -```bash +```console $ docker system prune WARNING! This will remove: @@ -160,7 +160,7 @@ Are you sure you want to continue? [y/N] y To also prune volumes, add the `--volumes` flag: -```bash +```console $ docker system prune --volumes WARNING! This will remove: diff --git a/desktop/faqs.md b/desktop/faqs.md index 962cf0657c..59f4a951e3 100644 --- a/desktop/faqs.md +++ b/desktop/faqs.md @@ -38,8 +38,8 @@ variables, specify these to connect to Docker instances through Unix sockets. For example: -```bash -export DOCKER_HOST=unix:///var/run/docker.sock +```console +$ export DOCKER_HOST=unix:///var/run/docker.sock ``` Docker Desktop Windows users can connect to the Docker Engine through a **named pipe**: `npipe:////./pipe/docker_engine`, or **TCP socket** at this URL: diff --git a/desktop/kubernetes.md b/desktop/kubernetes.md index e08df320be..c50cf663da 100644 --- a/desktop/kubernetes.md +++ b/desktop/kubernetes.md @@ -21,7 +21,7 @@ The Kubernetes client command `kubectl` is included and configured to connect to the local Kubernetes server. If you have already installed `kubectl` and it is pointing to some other environment, such as `minikube` or a GKE cluster, ensure you change the context so that `kubectl` is pointing to `docker-desktop`: -```bash +```console $ kubectl config get-contexts $ kubectl config use-context docker-desktop ``` @@ -62,8 +62,8 @@ the `PATH`.
You can test the command by listing the available nodes: -```bash -kubectl get nodes +```console +$ kubectl get nodes NAME STATUS ROLES AGE VERSION docker-desktop Ready master 3h v1.19.7 diff --git a/develop/develop-images/baseimages.md b/develop/develop-images/baseimages.md index 8c60450b2d..b0f2e76703 100644 --- a/develop/develop-images/baseimages.md +++ b/develop/develop-images/baseimages.md @@ -73,8 +73,8 @@ Assuming you built the "hello" executable example by using the source code at and you compiled it with the `-static` flag, you can build this Docker image using this `docker build` command: -```bash -docker build --tag hello . +```console +$ docker build --tag hello . ``` Don't forget the `.` character at the end, which sets the build context to the @@ -84,7 +84,7 @@ current directory. > you need a Linux binary, rather than a Mac or Windows binary. > You can use a Docker container to build it: > -> ```bash +> ```console > $ docker run --rm -it -v $PWD:/build ubuntu:20.04 > > container# apt-get update && apt-get install build-essential @@ -94,8 +94,8 @@ current directory. To run your new image, use the `docker run` command: -```bash -docker run --rm hello +```console +$ docker run --rm hello ``` This example creates the hello-world image used in the tutorials. diff --git a/develop/develop-images/build_enhancements.md b/develop/develop-images/build_enhancements.md index 1393bb119b..00614c2de4 100644 --- a/develop/develop-images/build_enhancements.md +++ b/develop/develop-images/build_enhancements.md @@ -37,7 +37,7 @@ the [Dockerfile reference](/engine/reference/builder/) page. Easiest way from a fresh install of docker is to set the `DOCKER_BUILDKIT=1` environment variable when invoking the `docker build` command, such as: -```bash +```console $ DOCKER_BUILDKIT=1 docker build . 
``` @@ -226,7 +226,7 @@ RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject Once the `Dockerfile` is created, use the `--ssh` option for connectivity with the SSH agent. -```bash +```console $ docker build --ssh default . ``` diff --git a/develop/develop-images/dockerfile_best-practices.md b/develop/develop-images/dockerfile_best-practices.md index 6fa2d2f8c8..7f7be9327c 100644 --- a/develop/develop-images/dockerfile_best-practices.md +++ b/develop/develop-images/dockerfile_best-practices.md @@ -73,21 +73,21 @@ context. > a text file named `hello` and create a Dockerfile that runs `cat` on it. Build > the image from within the build context (`.`): > -> ```shell -> mkdir myproject && cd myproject -> echo "hello" > hello -> echo -e "FROM busybox\nCOPY /hello /\nRUN cat /hello" > Dockerfile -> docker build -t helloapp:v1 . +> ```console +> $ mkdir myproject && cd myproject +> $ echo "hello" > hello +> $ echo -e "FROM busybox\nCOPY /hello /\nRUN cat /hello" > Dockerfile +> $ docker build -t helloapp:v1 . > ``` > > Move `Dockerfile` and `hello` into separate directories and build a second > version of the image (without relying on cache from the last build). 
Use `-f` > to point to the Dockerfile and specify the directory of the build context: > -> ```shell -> mkdir -p dockerfiles context -> mv Dockerfile dockerfiles && mv hello context -> docker build --no-cache -t helloapp:v2 -f dockerfiles/Dockerfile context +> ```console +> $ mkdir -p dockerfiles context +> $ mv Dockerfile dockerfiles && mv hello context +> $ docker build --no-cache -t helloapp:v2 -f dockerfiles/Dockerfile context > ``` Inadvertently including files that are not necessary for building an image @@ -664,7 +664,7 @@ RUN echo $ADMIN_USER > ./mark RUN unset ADMIN_USER ``` -```bash +```console $ docker run --rm test sh -c 'echo $ADMIN_USER' mark @@ -687,7 +687,7 @@ RUN export ADMIN_USER="mark" \ CMD sh ``` -```bash +```console $ docker run --rm test sh -c 'echo $ADMIN_USER' ``` @@ -762,13 +762,13 @@ CMD ["--help"] Now the image can be run like this to show the command's help: -```bash +```console $ docker run s3cmd ``` Or using the right parameters to execute a command: -```bash +```console $ docker run s3cmd ls s3://mybucket ``` @@ -819,19 +819,19 @@ This script allows the user to interact with Postgres in several ways. It can simply start Postgres: -```bash +```console $ docker run postgres ``` Or, it can be used to run Postgres and pass parameters to the server: -```bash +```console $ docker run postgres postgres --help ``` Lastly, it could also be used to start a totally different tool, such as Bash: -```bash +```console $ docker run --rm -it postgres bash ``` diff --git a/develop/develop-images/multistage-build.md b/develop/develop-images/multistage-build.md index 9831325814..611b6aea13 100644 --- a/develop/develop-images/multistage-build.md +++ b/develop/develop-images/multistage-build.md @@ -116,7 +116,7 @@ CMD ["./app"] You only need the single Dockerfile. You don't need a separate build script, either. Just run `docker build`. -```bash +```console $ docker build -t alexellis2/href-counter:latest . 
``` @@ -160,7 +160,7 @@ Dockerfile including every stage. You can specify a target build stage. The following command assumes you are using the previous `Dockerfile` but stops at the stage named `builder`: -```bash +```console $ docker build --target builder -t alexellis2/href-counter:latest . ``` diff --git a/docker-for-mac/apple-silicon.md b/docker-for-mac/apple-silicon.md index 28d1ad6d29..6e0390e0c8 100644 --- a/docker-for-mac/apple-silicon.md +++ b/docker-for-mac/apple-silicon.md @@ -24,8 +24,8 @@ Download Docker Desktop for Mac on Apple silicon: You must install **Rosetta 2** as some binaries are still Darwin/AMD64. To install Rosetta 2 manually from the command line, run the following command: -```shell -softwareupdate --install-rosetta +```console +$ softwareupdate --install-rosetta ``` We expect to fix this in a future release. diff --git a/docker-for-mac/index.md b/docker-for-mac/index.md index 97e781c849..8b7d2adadf 100644 --- a/docker-for-mac/index.md +++ b/docker-for-mac/index.md @@ -175,8 +175,8 @@ You can see whether you are running experimental mode at the command line. If `Experimental` is `true`, then Docker is running in experimental mode, as shown here. (If `false`, Experimental mode is off.) -```bash -> docker version +```console +$ docker version Client: Docker Engine - Community Version: 19.03.1 @@ -243,7 +243,7 @@ To manually add a custom, self-signed certificate, start by adding the certificate to the macOS keychain, which is picked up by Docker Desktop. Here is an example: -```bash +```console $ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt ``` @@ -347,13 +347,13 @@ ln -s $etc/docker-compose.bash-completion $(brew --prefix)/etc/bash_completion.d Add the following to your `~/.bash_profile`: -```shell +```bash [ -f /usr/local/etc/bash_completion ] && . /usr/local/etc/bash_completion ``` OR -```shell +```bash if [ -f $(brew --prefix)/etc/bash_completion ]; then . 
$(brew --prefix)/etc/bash_completion fi @@ -383,15 +383,15 @@ directory. Create the `completions` directory: -```bash -mkdir -p ~/.config/fish/completions +```console +$ mkdir -p ~/.config/fish/completions ``` Now add fish completions from docker. -```bash -ln -shi /Applications/Docker.app/Contents/Resources/etc/docker.fish-completion ~/.config/fish/completions/docker.fish -ln -shi /Applications/Docker.app/Contents/Resources/etc/docker-compose.fish-completion ~/.config/fish/completions/docker-compose.fish +```console +$ ln -shi /Applications/Docker.app/Contents/Resources/etc/docker.fish-completion ~/.config/fish/completions/docker.fish +$ ln -shi /Applications/Docker.app/Contents/Resources/etc/docker-compose.fish-completion ~/.config/fish/completions/docker-compose.fish ``` ## Give feedback and get help diff --git a/docker-for-mac/install.md b/docker-for-mac/install.md index 44313d0f1b..50b1118178 100644 --- a/docker-for-mac/install.md +++ b/docker-for-mac/install.md @@ -45,8 +45,8 @@ Your Mac must meet the following requirements to successfully install Docker Des - You must install **Rosetta 2** as some binaries are still Darwin/AMD64. To install Rosetta 2 manually from the command line, run the following command: - ```bash - softwareupdate --install-rosetta + ```console + $ softwareupdate --install-rosetta ``` For more information, see [Docker Desktop for Apple silicon](apple-silicon.md). diff --git a/docker-for-mac/networking.md b/docker-for-mac/networking.md index a1f055d081..311f416287 100644 --- a/docker-for-mac/networking.md +++ b/docker-for-mac/networking.md @@ -109,14 +109,14 @@ overlay network, not a bridge network, as these are not routed. The command to run the `nginx` webserver shown in [Getting Started](index.md#explore-the-application) is an example of this. 
-```bash +```console $ docker run -d -p 80:80 --name webserver nginx ``` To clarify the syntax, the following two commands both expose port `80` on the container to port `8000` on the host: -```bash +```console $ docker run --publish 8000:80 --name webserver nginx $ docker run -p 8000:80 --name webserver nginx @@ -126,7 +126,7 @@ To expose all ports, use the `-P` flag. For example, the following command starts a container (in detached mode) and the `-P` exposes all ports on the container to random ports on the host. -```bash +```console $ docker run -d -P --name webserver nginx ``` diff --git a/docker-for-mac/space.md b/docker-for-mac/space.md index 1005b9b742..aae3422474 100644 --- a/docker-for-mac/space.md +++ b/docker-for-mac/space.md @@ -45,19 +45,19 @@ docker system df -v Alternatively, to list images, run: -```bash +```console $ docker image ls ``` and then, to list containers, run: -```bash +```console $ docker container ls -a ``` If there are lots of redundant objects, run the command: -```bash +```console $ docker system prune ``` @@ -77,7 +77,7 @@ $ docker run --privileged --pid=host docker/desktop-reclaim-space Note that many tools report the maximum file size, not the actual file size. To query the actual size of the file on the host from a terminal, run: -```bash +```console $ cd ~/Library/Containers/com.docker.docker/Data $ cd vms/0/data $ ls -klsh Docker.raw diff --git a/docker-for-mac/troubleshoot.md b/docker-for-mac/troubleshoot.md index 6e19929598..1346cd2524 100644 --- a/docker-for-mac/troubleshoot.md +++ b/docker-for-mac/troubleshoot.md @@ -40,17 +40,18 @@ system. > Uninstall Docker Desktop from the command line > ->To uninstall Docker Desktop from a terminal, run: ` ->--uninstall`. If your instance is installed in the default location, this ->command provides a clean uninstall: +> To uninstall Docker Desktop from a terminal, run: ` +> --uninstall`. 
If your instance is installed in the default location, this +> command provides a clean uninstall: > ->```shell ->$ /Applications/Docker.app/Contents/MacOS/Docker --uninstall ->Docker is running, exiting... ->Docker uninstalled successfully. You can move the Docker application to the trash. ->``` ->You might want to use the command-line uninstall if, for example, you find that ->the app is non-functional, and you cannot uninstall it from the menu. +> ```console +> $ /Applications/Docker.app/Contents/MacOS/Docker --uninstall +> Docker is running, exiting... +> Docker uninstalled successfully. You can move the Docker application to the trash. +> ``` +> +> You might want to use the command-line uninstall if, for example, you find that +> the app is non-functional, and you cannot uninstall it from the menu. ## Diagnose and feedback @@ -61,7 +62,7 @@ documentation, on [Docker Desktop issues on GitHub](https://github.com/docker/for-mac/issues), or the [Docker Desktop forum](https://forums.docker.com/c/docker-for-mac), we can help you troubleshoot the log data. Before reporting an issue, we recommend that you read the information provided on this page to fix some common known issues. ->**Note** +> **Note** > > Docker Desktop offers support for users subscribed to a Pro or a Team plan. If you are experiencing any issues with Docker Desktop, follow the instructions in this section to send a support request to Docker Support. @@ -90,7 +91,7 @@ First, locate the `com.docker.diagnose` tool. If you have installed Docker Desk To create *and upload* diagnostics, run: -```sh +```console $ /Applications/Docker.app/Contents/MacOS/com.docker.diagnose gather -upload ``` @@ -110,7 +111,7 @@ composed of your user ID (BE9AFAAF-F68B-41D0-9D12-84760E6B8740) and a timestamp To view the contents of the diagnostic file, run: -```sh +```console $ open /tmp/BE9AFAAF-F68B-41D0-9D12-84760E6B8740/20190905152051.zip ``` @@ -125,7 +126,7 @@ browse the logs yourself. 
To watch the live flow of Docker Desktop logs in the command line, run the following script from your favorite shell. -```bash +```console $ pred='process matches ".*(ocker|vpnkit).*" || (process in {"taskgated-helper", "launchservicesd", "kernel"} && eventMessage contains[c] "docker")' $ /usr/bin/log stream --style syslog --level=debug --color=always --predicate "$pred" @@ -133,7 +134,7 @@ $ /usr/bin/log stream --style syslog --level=debug --color=always --predicate "$ Alternatively, to collect the last day of logs (`1d`) in a file, run: -``` +```console $ /usr/bin/log show --debug --info --style syslog --last 1d --predicate "$pred" >/tmp/logs.txt ``` @@ -203,8 +204,8 @@ Tables (EPT) and Unrestricted Mode are supported.* To check if your Mac supports the Hypervisor framework, run the following command in a terminal window. -```bash -sysctl kern.hv_support +```console +$ sysctl kern.hv_support ``` If your Mac supports the Hypervisor Framework, the command prints @@ -305,8 +306,8 @@ in the Apple documentation, and Docker Desktop [Mac system requirements](install `DOCKER_CERT_PATH` environment variables, specify these to connect to Docker instances through Unix sockets. For example: - ```bash - export DOCKER_HOST=unix:///var/run/docker.sock + ```console + $ export DOCKER_HOST=unix:///var/run/docker.sock ``` * There are a number of issues with the performance of directories bind-mounted diff --git a/docker-for-windows/index.md b/docker-for-windows/index.md index fd183a489c..417f6384a6 100644 --- a/docker-for-windows/index.md +++ b/docker-for-windows/index.md @@ -226,7 +226,7 @@ Run `docker version` to verify whether you have enabled experimental features. E is listed under `Server` data. 
If `Experimental` is `true`, then Docker is running in experimental mode, as
shown here:

-```shell
+```console
> docker version

Client: Docker Engine - Community
diff --git a/docker-for-windows/networking.md b/docker-for-windows/networking.md
index 39f4212df8..c53eb09924 100644
--- a/docker-for-windows/networking.md
+++ b/docker-for-windows/networking.md
@@ -105,13 +105,13 @@ overlay network, not a bridge network, as these are not routed.
The command to run the `nginx` webserver shown in [Getting
Started](index.md#explore-the-application) is an example of this.

-```bash
+```console
$ docker run -d -p 80:80 --name webserver nginx
```

To clarify the syntax, the following two commands both publish the container's
port `80` to the host's port `8000`:

-```bash
+```console
$ docker run --publish 8000:80 --name webserver nginx

$ docker run -p 8000:80 --name webserver nginx
@@ -121,7 +121,7 @@ To publish all ports, use the `-P` flag. For example, the following command
starts a container (in detached mode) and the `-P` flag publishes all exposed
ports of the container to random ports on the host.

-```bash
+```console
$ docker run -d -P --name webserver nginx
```
diff --git a/docker-hub/index.md b/docker-hub/index.md
index 1d672de31d..12668bc7a8 100644
--- a/docker-hub/index.md
+++ b/docker-hub/index.md
@@ -136,12 +136,11 @@ Docker Hub.

1. Start by creating a [Dockerfile](../engine/reference/builder/) to specify
your application as shown below:

- ```shell
- cat > Dockerfile </my-private-repo .` to build your Docker image.
diff --git a/docker-hub/vulnerability-scanning.md b/docker-hub/vulnerability-scanning.md
index 77f906514e..16ca0f6f84 100644
--- a/docker-hub/vulnerability-scanning.md
+++ b/docker-hub/vulnerability-scanning.md
@@ -42,14 +42,14 @@ To scan an image for vulnerabilities, push the image to Docker Hub:

2. Use the command line to log into your Docker account. See
[docker login](../engine/reference/commandline/login.md) for more information.
3.
Tag the image that you’d like to scan. For example, to tag a Redis image, run: - ```shell - docker tag redis /:latest + ```console + $ docker tag redis /:latest ``` 4. Push the image to Docker Hub to trigger vulnerability scanning on the image: - ```shell - docker push /:latest + ```console + $ docker push /:latest ``` ## View the vulnerability report diff --git a/docsarchive.md b/docsarchive.md index 4d4713ee39..70730b89b7 100644 --- a/docsarchive.md +++ b/docsarchive.md @@ -21,6 +21,6 @@ you can still access that documentation in the following ways: - By running a container of the specific [tag for your documentation version](https://hub.docker.com/r/docs/docker.github.io) in Docker Hub. For example, run the following to access `v1.9`: - ```bash - docker run -it -p 4000:4000 docs/docker.github.io:v1.9 + ```console + $ docker run -it -p 4000:4000 docs/docker.github.io:v1.9 ``` diff --git a/engine/api/index.md b/engine/api/index.md index 8f1ff4876c..9ea0dc2d20 100644 --- a/engine/api/index.md +++ b/engine/api/index.md @@ -70,7 +70,7 @@ unless you need to take advantage of new features. To see the highest version of the API your Docker daemon and client support, use `docker version`: -```bash +```console $ docker version Client: Docker Engine - Community @@ -107,8 +107,8 @@ You can specify the API version to use, in one of the following ways: environment variable `DOCKER_API_VERSION` to the correct version. This works on Linux, Windows, or macOS clients. - ```bash - DOCKER_API_VERSION='1.41' + ```console + $ DOCKER_API_VERSION='1.41' ``` While the environment variable is set, that version of the API is used, even diff --git a/engine/api/sdk/examples.md b/engine/api/sdk/examples.md index 63b633e314..a984ccd928 100644 --- a/engine/api/sdk/examples.md +++ b/engine/api/sdk/examples.md @@ -109,7 +109,7 @@ print(client.containers.run("alpine", ["echo", "hello", "world"]))
-```bash +```console $ curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \ -d '{"Image": "alpine", "Cmd": ["echo", "hello world"]}' \ -X POST http://localhost/v{{ site.latest_engine_api_version}}/containers/create @@ -214,7 +214,7 @@ print(container.id)
-```bash +```console $ curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \ -d '{"Image": "bfirsh/reticulate-splines"}' \ -X POST http://localhost/v{{ site.latest_engine_api_version}}/containers/create @@ -285,7 +285,7 @@ for container in client.containers.list():
-```bash +```console $ curl --unix-socket /var/run/docker.sock http://localhost/v{{ site.latest_engine_api_version}}/containers/json [{ "Id":"ae63e8b89a26f01f6b4b2c9a7817c31a1b6196acf560f66586fbc8809ffcd772", @@ -365,7 +365,7 @@ for container in client.containers.list():
-```bash +```console $ curl --unix-socket /var/run/docker.sock http://localhost/v{{ site.latest_engine_api_version}}/containers/json [{ "Id":"ae63e8b89a26f01f6b4b2c9a7817c31a1b6196acf560f66586fbc8809ffcd772", @@ -442,7 +442,7 @@ print(container.logs())
-```bash +```console $ curl --unix-socket /var/run/docker.sock "http://localhost/v{{ site.latest_engine_api_version}}/containers/ca5f55cdb/logs?stdout=1" Reticulating spline 1... Reticulating spline 2... @@ -511,7 +511,7 @@ for image in client.images.list():
-```bash +```console $ curl --unix-socket /var/run/docker.sock http://localhost/v{{ site.latest_engine_api_version}}/images/json [{ "Id":"sha256:31d9a31e1dd803470c5a151b8919ef1988ac3efd44281ac59d43ad623f275dcd", @@ -581,7 +581,7 @@ print(image.id)
-```bash +```console $ curl --unix-socket /var/run/docker.sock \ -X POST "http://localhost/v{{ site.latest_engine_api_version}}/images/create?fromImage=alpine" {"status":"Pulling from library/alpine","id":"3.1"} @@ -679,7 +679,7 @@ This example leaves the credentials in your shell's history, so consider this a naive implementation. The credentials are passed as a Base-64-encoded JSON structure. -```bash +```console $ JSON=$(echo '{"username": "string", "password": "string", "serveraddress": "string"}' | base64) $ curl --unix-socket /var/run/docker.sock \ @@ -775,7 +775,7 @@ print(image.id)
-```bash +```console $ docker run -d alpine touch /helloworld 0888269a9d584f0fa8fc96b3c0d8d57969ceea3a64acf47cd34eebb4744dbc52 $ curl --unix-socket /var/run/docker.sock\ diff --git a/engine/api/sdk/index.md b/engine/api/sdk/index.md index 08d4985274..d8e0ea9926 100644 --- a/engine/api/sdk/index.md +++ b/engine/api/sdk/index.md @@ -22,8 +22,8 @@ installed and coexist together. ### Go SDK -```bash -go get github.com/docker/docker/client +```console +$ go get github.com/docker/docker/client ``` The client requires a recent version of Go. Run `go version` and ensure that you @@ -149,7 +149,7 @@ print client.containers.run("alpine", ["echo", "hello", "world"])
-```bash +```console $ curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \ -d '{"Image": "alpine", "Cmd": ["echo", "hello world"]}' \ -X POST http://localhost/v{{ site.latest_engine_api_version}}/containers/create diff --git a/engine/reference/commandline/app.md b/engine/reference/commandline/app.md index ce4144cc58..79d12adaf6 100644 --- a/engine/reference/commandline/app.md +++ b/engine/reference/commandline/app.md @@ -383,7 +383,7 @@ pushing an official Docker image as part of your app, you may find your app bundle becomes large with all image architectures embedded. To just push the architecture required, you can add the `--platform` flag. -```bash +```console $ docker login $ docker app push my-app --platform="linux/amd64" --tag /:0.1.0 diff --git a/engine/release-notes/18.09.md b/engine/release-notes/18.09.md index 7f1202408a..d26b1f849b 100644 --- a/engine/release-notes/18.09.md +++ b/engine/release-notes/18.09.md @@ -188,8 +188,8 @@ In Docker versions prior to 18.09, containerd was managed by the Docker engine d Run the following command to get the current value of the `MountFlags` property for the `docker.service`: -```bash -sudo systemctl show --property=MountFlags docker.service +```console +$ sudo systemctl show --property=MountFlags docker.service MountFlags= ``` Update your configuration if this command prints a non-empty value for `MountFlags`, and restart the docker service. 
@@ -244,8 +244,8 @@ configuration which changes mount settings (for example, `MountFlags=slave`) bre Run the following command to get the current value of the `MountFlags` property for the `docker.service`: -```bash -sudo systemctl show --property=MountFlags docker.service +```console +$ sudo systemctl show --property=MountFlags docker.service MountFlags= ``` diff --git a/engine/scan/index.md b/engine/scan/index.md index 40f938c6d2..4ee07235b7 100644 --- a/engine/scan/index.md +++ b/engine/scan/index.md @@ -21,20 +21,20 @@ This page contains information about the `docker scan` CLI command. For informat The `docker scan` command allows you to scan existing Docker images using the image name or ID. For example, run the following command to scan the hello-world image: -```shell -$ docker scan hello-world +```console +$ docker scan hello-world - Testing hello-world... +Testing hello-world... - Organization: docker-desktop-test - Package manager: linux - Project name: docker-image|hello-world - Docker image: hello-world - Licenses: enabled +Organization: docker-desktop-test +Package manager: linux +Project name: docker-image|hello-world +Docker image: hello-world +Licenses: enabled - ✓ Tested 0 dependencies for known issues, no vulnerable paths found. +✓ Tested 0 dependencies for known issues, no vulnerable paths found. - Note that we do not currently have vulnerability data for your image. +Note that we do not currently have vulnerability data for your image. ``` ### Get a detailed scan report @@ -43,7 +43,7 @@ You can get a detailed scan report about a Docker image by providing the Dockerf For example, if you apply the option to the `docker-scan` test image, it displays the following result: -```shell +```console $ docker scan --file Dockerfile docker-scan:e2e Testing docker-scan:e2e ... 
@@ -74,7 +74,7 @@ According to our scan, you are currently using the most secure version of the se
When using docker scan with the `--file` flag, you can also add the
`--exclude-base` flag. This excludes the base image (specified in the Dockerfile
using the `FROM` directive) vulnerabilities from your report. For example:

-```shell
+```console
$ docker scan --file Dockerfile --exclude-base docker-scan:e2e

Testing docker-scan:e2e
...
@@ -105,7 +105,7 @@ Tested 200 dependencies for known issues, found 16 issues.

You can also display the scan result as a JSON output by adding the `--json`
flag to the command. For example:

-```shell
+```console
$ docker scan --json hello-world
{
  "vulnerabilities": [],
@@ -158,7 +158,7 @@ $ docker scan --json hello-world
In addition to the `--json` flag, you can also use the `--group-issues` flag to
display a vulnerability only once in the scan report:

-```shell
+```console
$ docker scan --json --group-issues docker-scan:e2e
{
  {
@@ -211,7 +211,7 @@ You can find all the sources of the vulnerability in the `from` section.

To view the dependency tree of your image, use the `--dependency-tree` flag.
This displays all the dependencies before the scan result. For example:

-```shell
+```console
$ docker scan --dependency-tree debian:buster

$ docker-image|99138c65ebc7 @ latest
@@ -322,7 +322,7 @@ Tested 200 dependencies for known issues, found 37 issues.

If you have an existing Snyk account, you can directly use your Snyk [API token](https://app.snyk.io/account){: target="_blank" rel="noopener" class="_"}:

-```shell
+```console
$ docker scan --login --token SNYK_AUTH_TOKEN

Your account has been authenticated. Snyk is now ready to be used.
@@ -347,7 +347,7 @@ To run vulnerability scanning on your Docker images, you must meet the following
Check your installation by running `docker scan --version`; it should print the
current version of docker scan and the Snyk engine version.
For example: -```shell +```console $ docker scan --version Version: v0.5.0 Git commit: 5a09266 diff --git a/engine/security/apparmor.md b/engine/security/apparmor.md index 9e46dcf4e4..73caf1f9c6 100644 --- a/engine/security/apparmor.md +++ b/engine/security/apparmor.md @@ -32,7 +32,7 @@ When you run a container, it uses the `docker-default` policy unless you override it with the `security-opt` option. For example, the following explicitly specifies the default policy: -```bash +```console $ docker run --rm -it --security-opt apparmor=docker-default hello-world ``` @@ -40,19 +40,19 @@ $ docker run --rm -it --security-opt apparmor=docker-default hello-world To load a new profile into AppArmor for use with containers: -```bash +```console $ apparmor_parser -r -W /path/to/your_profile ``` Then, run the custom profile with `--security-opt` like so: -```bash +```console $ docker run --rm -it --security-opt apparmor=your_profile hello-world ``` To unload a profile from AppArmor: -```bash +```console # unload the profile $ apparmor_parser -R /path/to/profile ``` @@ -154,7 +154,7 @@ profile docker-nginx flags=(attach_disconnected,mediate_deleted) { 2. Load the profile. - ```bash + ```console $ sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx ``` @@ -162,20 +162,20 @@ profile docker-nginx flags=(attach_disconnected,mediate_deleted) { To run nginx in detached mode: - ```bash + ```console $ docker run --security-opt "apparmor=docker-nginx" \ -p 80:80 -d --name apparmor-nginx nginx ``` 4. Exec into the running container. - ```bash + ```console $ docker container exec -it apparmor-nginx bash ``` 5. Try some operations to test the profile. - ```bash + ```console root@6da5a2a930b9:~# ping 8.8.8.8 ping: Lacking privilege for raw socket. @@ -233,7 +233,7 @@ default unless in `privileged` mode. This line shows that apparmor has denied If you need to check which profiles are loaded, you can use `aa-status`. 
The output looks like:

-```bash
+```console
$ sudo aa-status
apparmor module is loaded.
14 profiles are loaded.
diff --git a/engine/security/seccomp.md b/engine/security/seccomp.md
index 671c1f2be2..6ac67a1886 100644
--- a/engine/security/seccomp.md
+++ b/engine/security/seccomp.md
@@ -13,7 +13,7 @@ This feature is available only if Docker has been built with `seccomp` and the
kernel is configured with `CONFIG_SECCOMP` enabled. To check if your kernel
supports `seccomp`:

-```bash
+```console
$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)
CONFIG_SECCOMP=y
```
@@ -47,7 +47,7 @@ When you run a container, it uses the default profile unless you override it
with the `--security-opt` option. For example, the following explicitly
specifies a policy:

-```bash
+```console
$ docker run --rm \
             -it \
             --security-opt seccomp=/path/to/seccomp/profile.json \
diff --git a/engine/security/trust/trust_automation.md b/engine/security/trust/trust_automation.md
index d999f64d1b..f1647cf4f8 100644
--- a/engine/security/trust/trust_automation.md
+++ b/engine/security/trust/trust_automation.md
@@ -84,7 +84,7 @@ The `FROM` tag is pulling a signed image. You cannot build an image that has a
`FROM` that is not either present locally or signed. Given that content trust
data exists for the tag `latest`, the following build should succeed:

-```bash
+```console
$ docker build -t docker/trusttest:testing .
Using default tag: latest
latest: Pulling from docker/trusttest
@@ -97,7 +97,7 @@ Digest: sha256:d149ab53f871

If content trust is enabled, building from a Dockerfile that relies on a tag
without trust data causes the build command to fail:

-```bash
+```console
$ docker build -t docker/trusttest:testing .
unable to process Dockerfile: No trust data for notrust ``` diff --git a/engine/security/trust/trust_delegation.md b/engine/security/trust/trust_delegation.md index a80f8567fb..a5c4480573 100644 --- a/engine/security/trust/trust_delegation.md +++ b/engine/security/trust/trust_delegation.md @@ -65,7 +65,7 @@ Successfully added signer: jeff to registry.example.com/user/repo If you do not log in, you will see: -```bash +```console $ docker trust signer add --key cert.pem jeff registry.example.com/user/repo Adding signer "jeff" to registry.example.com/user/repo... Initializing signed repository for registry.example.com/user/repo... @@ -111,7 +111,7 @@ Docker trust has a built-in generator for a delegation key pair, `$ docker trust generate `. Running this command will automatically load the delegation private key in to the local Docker trust store. -```bash +```console $ docker trust key generate jeff Generating key for jeff... @@ -129,7 +129,7 @@ cfssl along with a local or company-wide Certificate Authority. Here is an example of how to generate a 2048-bit RSA portion key (all RSA keys must be at least 2048 bits): -```bash +```console $ openssl genrsa -out delegation.key 2048 Generating RSA private key, 2048 bit long modulus @@ -144,7 +144,7 @@ Then they need to generate an x509 certificate containing the public key, which what you need from them. Here is the command to generate a CSR (certificate signing request): -```bash +```console $ openssl req -new -sha256 -key delegation.key -out delegation.csr ``` @@ -152,7 +152,7 @@ Then they can send it to whichever CA you trust to sign certificates, or they can self-sign the certificate (in this example, creating a certificate that is valid for 1 year): -```bash +```console $ openssl x509 -req -sha256 -days 365 -in delegation.csr -signkey delegation.key -out delegation.crt ``` @@ -161,7 +161,7 @@ by a CA. Finally you will need to add the private key into your local Docker trust store. 
-```bash +```console $ docker trust key load delegation.key --name jeff Loading key from "delegation.key"... @@ -175,7 +175,7 @@ Successfully imported key from delegation.key To list the keys that have been imported in to the local Docker trust store we can use the Notary CLI. -```bash +```console $ notary key list ROLE GUN KEY ID LOCATION @@ -209,7 +209,7 @@ For DCT the name of the second delegation, in the below example `jeff`, is there to help you keep track of the owner of the keys. In more advanced use cases of Notary additional delegations are used for hierarchy. -```bash +```console $ docker trust signer add --key cert.pem jeff registry.example.com/admin/demo Adding signer "jeff" to registry.example.com/admin/demo... @@ -224,7 +224,7 @@ Successfully added signer: jeff to registry.example.com/admin/demo You can see which keys have been pushed to the Notary server for each repository with the `$ docker trust inspect` command. -```bash +```console $ docker trust inspect --pretty registry.example.com/admin/demo No signatures for registry.example.com/admin/demo @@ -244,7 +244,7 @@ Administrative keys for registry.example.com/admin/demo You could also use the Notary CLI to list delegations and keys. Here you can clearly see the keys were attached to `targets/releases` and `targets/jeff`. -```bash +```console $ notary delegation list registry.example.com/admin/demo ROLE PATHS KEY IDS THRESHOLD @@ -264,7 +264,7 @@ the `targets/release` role. > Note you will need the passphrase for the repository key; this would have been > configured when you first initiated the repository. -```bash +```console $ docker trust signer add --key ben.pub ben registry.example.com/admin/demo Adding signer "ben" to registry.example.com/admin/demo... @@ -274,7 +274,7 @@ Successfully added signer: ben to registry.example.com/admin/demo Check to prove that there are now 2 delegations (Signer). 
-```bash +```console $ docker trust inspect --pretty registry.example.com/admin/demo No signatures for registry.example.com/admin/demo @@ -301,7 +301,7 @@ will automatically handle adding this new key to `targets/releases`. > Note you will need the passphrase for the repository key; this would have been > configured when you first initiated the repository. -```bash +```console $ docker trust signer add --key cert2.pem jeff registry.example.com/admin/demo Adding signer "jeff" to registry.example.com/admin/demo... @@ -311,7 +311,7 @@ Successfully added signer: jeff to registry.example.com/admin/demo Check to prove that the delegation (Signer) now contains multiple Key IDs. -```bash +```console $ docker trust inspect --pretty registry.example.com/admin/demo No signatures for registry.example.com/admin/demo @@ -337,7 +337,7 @@ attached to the `targets/releases` role, you can use the > Note tags that were signed by the removed delegation will need to be resigned > by an active delegation -```bash +```console $ docker trust signer remove registry.example.com/admin/demo Removing signer "ben" from registry.example.com/admin/demo... Enter passphrase for repository key with ID b0014f8: @@ -364,7 +364,7 @@ WARN[0000] Error getting targets/releases: valid signatures did not meet thresho Resigning the delegation file is done with the `$ notary witness` command -```bash +```console $ notary witness registry.example.com/admin/demo targets/releases --publish ``` @@ -381,7 +381,7 @@ and the role specific to that signer `targets/`. 
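Removing a signer's key thus touches two roles: its key ID comes out of `targets/releases`, and its own `targets/<signer>` role goes away. A toy sketch of that set arithmetic in Python (this is not how Notary stores data, and the role and key names are made up; it only illustrates why both removals are needed):

```python
# Toy model of delegation bookkeeping: each role maps to a set of key IDs.
# NOT Notary's implementation -- it only shows why a signer's key must be
# removed from both targets/releases and the signer's own role.
delegations = {
    "targets/releases": {"key-jeff", "key-ben"},
    "targets/jeff": {"key-jeff"},
    "targets/ben": {"key-ben"},
}

def remove_signer(delegations: dict, name: str, key_id: str) -> None:
    """Drop the signer's key from targets/releases and delete its role."""
    delegations["targets/releases"].discard(key_id)
    delegations.pop(f"targets/{name}", None)

remove_signer(delegations, "ben", "key-ben")
print(sorted(delegations))              # ['targets/jeff', 'targets/releases']
print(delegations["targets/releases"])  # {'key-jeff'}
```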
1) We will need to grab the Key ID from the Notary Server -```bash +```console $ notary delegation list registry.example.com/admin/demo ROLE PATHS KEY IDS THRESHOLD @@ -394,7 +394,7 @@ targets/releases "" 8fb597cbaf196f0781628b2f52bff6b3912e4e8075 2) Remove from the `targets/releases` delegation -```bash +```console $ notary delegation remove registry.example.com/admin/demo targets/releases 1091060d7bfd938dfa5be703fa057974f9322a4faef6f580334f3d6df44c02d1 --publish Auto-publishing changes to registry.example.com/admin/demo @@ -406,7 +406,7 @@ Successfully published changes for repository registry.example.com/admin/demo 3) Remove from the `targets/` delegation -```bash +```console $ notary delegation remove registry.example.com/admin/demo targets/jeff 1091060d7bfd938dfa5be703fa057974f9322a4faef6f580334f3d6df44c02d1 --publish Removal of delegation role targets/jeff with keys [5570b88df0736c468493247a07e235e35cf3641270c944d0e9e8899922fc6f99], to repository "registry.example.com/admin/demo" staged for next publish. @@ -420,7 +420,7 @@ Successfully published changes for repository registry.example.com/admin/demo 4) Check the remaining delegation list -```bash +```console $ notary delegation list registry.example.com/admin/demo ROLE PATHS KEY IDS THRESHOLD @@ -437,7 +437,7 @@ the `$ notary key remove` command. 1) We will need to get the Key ID from the local Docker Trust store -```bash +```console $ notary key list ROLE GUN KEY ID LOCATION @@ -450,7 +450,7 @@ targets ...example.com/admin/demo c819f2eda8fba2810ec6a7f95f051c90276c87fd 2) Remove the key from the local Docker Trust store -```bash +```console $ notary key remove 1091060d7bfd938dfa5be703fa057974f9322a4faef6f580334f3d6df44c02d1 Are you sure you want to remove 1091060d7bfd938dfa5be703fa057974f9322a4faef6f580334f3d6df44c02d1 (role jeff) from /home/ubuntu/.docker/trust/private? (yes/no) y @@ -466,7 +466,7 @@ snapshot and all delegations keys using the Notary CLI. 
This is often required by a container registry before a particular repository can be deleted. -```bash +```console $ notary delete registry.example.com/admin/demo --remote Deleting trust data for repository registry.example.com/admin/demo diff --git a/engine/security/trust/trust_key_mng.md b/engine/security/trust/trust_key_mng.md index 2bc654fb5a..e5da1f7a6e 100644 --- a/engine/security/trust/trust_key_mng.md +++ b/engine/security/trust/trust_key_mng.md @@ -53,7 +53,7 @@ of the repository key is recoverable; loss of the root key is not. The Docker client stores the keys in the `~/.docker/trust/private` directory. Before backing them up, you should `tar` them into an archive: -```bash +```console $ umask 077; tar -zcvf private_keys_backup.tar.gz ~/.docker/trust/private; umask 022 ``` diff --git a/engine/security/trust/trust_sandbox.md b/engine/security/trust/trust_sandbox.md index cef02c9392..37b0cfe318 100644 --- a/engine/security/trust/trust_sandbox.md +++ b/engine/security/trust/trust_sandbox.md @@ -222,7 +222,7 @@ data. Then, you try and pull it. 3. List the layers for the `test/trusttest` image you pushed: - ```bash + ```console root@65084fc6f047:/# ls -l /var/lib/registry/docker/registry/v2/repositories/test/trusttest/_layers/sha256 total 12 drwxr-xr-x 2 root root 4096 Jun 10 17:26 a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 diff --git a/engine/security/userns-remap.md b/engine/security/userns-remap.md index 589aeff216..a18c5023fc 100644 --- a/engine/security/userns-remap.md +++ b/engine/security/userns-remap.md @@ -76,7 +76,7 @@ avoid these situations. To verify this, use the `id` command: - ```bash + ```console $ id testuser uid=1001(testuser) gid=1001(testuser) groups=1001(testuser) @@ -135,7 +135,7 @@ procedure to configure the daemon using the `daemon.json` configuration file. The `daemon.json` method is recommended. 
If you use the flag, use the following command as a model: -```bash +```console $ dockerd --userns-remap="testuser:testuser" ``` @@ -168,7 +168,7 @@ $ dockerd --userns-remap="testuser:testuser" 2. If you are using the `dockremap` user, verify that Docker created it using the `id` command. - ```bash + ```console $ id dockremap uid=112(dockremap) gid=116(dockremap) groups=116(dockremap) @@ -176,7 +176,7 @@ $ dockerd --userns-remap="testuser:testuser" Verify that the entry has been added to `/etc/subuid` and `/etc/subgid`: - ```bash + ```console $ grep dockremap /etc/subuid dockremap:231072:65536 @@ -196,7 +196,7 @@ $ dockerd --userns-remap="testuser:testuser" 4. Start a container from the `hello-world` image. - ```bash + ```console $ docker run hello-world ``` @@ -205,7 +205,7 @@ $ dockerd --userns-remap="testuser:testuser" and not group-or-world-readable. Some of the subdirectories are still owned by `root` and have different permissions. - ```bash + ```console $ sudo ls -ld /var/lib/docker/231072.231072/ drwx------ 11 231072 231072 11 Jun 21 21:19 /var/lib/docker/231072.231072/ diff --git a/engine/swarm/admin_guide.md b/engine/swarm/admin_guide.md index a16b818cd8..e9a113e8c8 100644 --- a/engine/swarm/admin_guide.md +++ b/engine/swarm/admin_guide.md @@ -133,8 +133,8 @@ operations like swarm heartbeat or leader elections. To avoid interference with manager node operation, you can drain manager nodes to make them unavailable as worker nodes: -```bash -docker node update --availability drain +```console +$ docker node update --availability drain ``` When you drain a node, the scheduler reassigns any tasks running on the node to @@ -161,8 +161,8 @@ From the command line, run `docker node inspect ` to query the nodes. 
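The `/etc/subuid` entry shown above (`dockremap:231072:65536`) is just an offset plus a length: uid N inside the container becomes uid 231072+N on the host. A minimal sketch of that translation, using the example range from the output above (a real daemon reads the range from `/etc/subuid` rather than hard-coding it):

```python
# Sketch of the uid translation behind --userns-remap, hard-coding the
# example subordinate range "dockremap:231072:65536" from /etc/subuid.
SUBUID_START = 231072  # first host uid in dockremap's subordinate range
SUBUID_COUNT = 65536   # how many uids the range covers

def host_uid(container_uid: int) -> int:
    """Return the uid the host kernel sees for a uid inside the container."""
    if not 0 <= container_uid < SUBUID_COUNT:
        raise ValueError("uid is outside the remapped range")
    return SUBUID_START + container_uid

# Container root (uid 0) maps to an unprivileged host uid -- which is why
# /var/lib/docker/231072.231072/ is owned by uid 231072.
print(host_uid(0))     # 231072
print(host_uid(1000))  # 232072
```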
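Decisions about draining managers and sizing the manager set come down to majority arithmetic: a swarm with N managers needs a quorum of ⌊N/2⌋+1 reachable managers and therefore tolerates ⌊(N-1)/2⌋ manager failures. A quick sketch of the math:

```python
# Raft majority arithmetic for swarm manager nodes.
def quorum(managers: int) -> int:
    """Managers that must stay reachable for the swarm to make decisions."""
    return managers // 2 + 1

def fault_tolerance(managers: int) -> int:
    """Manager failures the swarm survives while keeping quorum."""
    return (managers - 1) // 2

for n in (1, 3, 5, 7):
    print(n, quorum(n), fault_tolerance(n))
# A single manager is always its own quorum (and tolerates zero failures),
# which is why shrinking to one node restores a swarm that lost quorum.
```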
For instance, to query the reachability of the node as a manager: {% raw %} -```bash -docker node inspect manager1 --format "{{ .ManagerStatus.Reachability }}" +```console +$ docker node inspect manager1 --format "{{ .ManagerStatus.Reachability }}" reachable ``` {% endraw %} @@ -170,8 +170,8 @@ reachable To query the status of the node as a worker that accepts tasks: {% raw %} -```bash -docker node inspect manager1 --format "{{ .Status.State }}" +```console +$ docker node inspect manager1 --format "{{ .Status.State }}" ready ``` {% endraw %} @@ -190,9 +190,8 @@ manager: Alternatively, you can get an overview of the swarm health from a manager node with `docker node ls`: -```bash - -docker node ls +```console +$ docker node ls ID HOSTNAME MEMBERSHIP STATUS AVAILABILITY MANAGER STATUS 1mhtdwhvsgr3c26xxbnzdc3yp node05 Accepted Ready Active 516pacagkqp2xc3fk9t1dhjor node02 Accepted Ready Active Reachable @@ -301,7 +300,7 @@ restore the data to a new swarm. to connect to nodes that were part of the old swarm, and presumably no longer exist. - ```bash + ```console $ docker swarm init --force-new-cluster ``` @@ -350,9 +349,10 @@ except the manager the command was run from. The quorum is achieved because there is now only one manager. Promote nodes to be managers until you have the desired number of managers. -```bash -# From the node to recover -docker swarm init --force-new-cluster --advertise-addr node01:2377 +From the node to recover, run: + +```console +$ docker swarm init --force-new-cluster --advertise-addr node01:2377 ``` When you run the `docker swarm init` command with the `--force-new-cluster` diff --git a/engine/swarm/configs.md b/engine/swarm/configs.md index abe1b0035d..edcdbb732f 100644 --- a/engine/swarm/configs.md +++ b/engine/swarm/configs.md @@ -141,7 +141,7 @@ real-world example, continue to input because the last argument, which represents the file to read the config from, is set to `-`.
- ```bash + ```console $ echo "This is a config" | docker config create my-config - ``` @@ -149,14 +149,14 @@ real-world example, continue to the container can access the config at `/my-config`, but you can customize the file name on the container using the `target` option. - ```bash + ```console $ docker service create --name redis --config my-config redis:alpine ``` 3. Verify that the task is running without issues using `docker service ps`. If everything is working, the output looks similar to this: - ```bash + ```console $ docker service ps redis ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS @@ -170,7 +170,7 @@ real-world example, continue to how to find the container ID, and the second and third commands use shell completion to do this automatically. - ```bash + ```console $ docker ps --filter name=redis -q 5cb1c2348a59 @@ -187,7 +187,7 @@ real-world example, continue to 5. Try removing the config. The removal fails because the `redis` service is running and has access to the config. - ```bash + ```console $ docker config ls @@ -204,7 +204,7 @@ real-world example, continue to 6. Remove access to the config from the running `redis` service by updating the service. - ```bash + ```console $ docker service update --config-rm my-config redis ``` @@ -220,7 +220,7 @@ real-world example, continue to 8. Stop and remove the service, and remove the config from Docker. - ```bash + ```console $ docker service rm redis $ docker config rm my-config @@ -297,14 +297,14 @@ name as its argument. The template will be rendered when container is created. 2. Save the `index.html.tmpl` file as a swarm config named `homepage`. Provide parameter `--template-driver` and specify `golang` as template engine. - ```bash + ```console $ docker config create --template-driver golang homepage index.html.tmpl ``` 3. Create a service that runs Nginx and has access to the environment variable HELLO and to the config. 
- ```bash + ```console $ docker service create \ --name hello-template \ --env HELLO="Docker" \ @@ -316,7 +316,7 @@ name as its argument. The template will be rendered when container is created. 4. Verify that the service is operational: you can reach the Nginx server, and that the correct output is being served. - ```bash + ```console $ curl http://0.0.0.0:3000 @@ -351,13 +351,13 @@ generate the site key and certificate, name the files `site.key` and 1. Generate a root key. - ```bash + ```console $ openssl genrsa -out "root-ca.key" 4096 ``` 2. Generate a CSR using the root key. - ```bash + ```console $ openssl req \ -new -key "root-ca.key" \ -out "root-ca.csr" -sha256 \ @@ -377,7 +377,7 @@ generate the site key and certificate, name the files `site.key` and 4. Sign the certificate. - ```bash + ```console $ openssl x509 -req -days 3650 -in "root-ca.csr" \ -signkey "root-ca.key" -sha256 -out "root-ca.crt" \ -extfile "root-ca.cnf" -extensions \ @@ -386,13 +386,13 @@ generate the site key and certificate, name the files `site.key` and 5. Generate the site key. - ```bash + ```console $ openssl genrsa -out "site.key" 4096 ``` 6. Generate the site certificate and sign it with the site key. - ```bash + ```console $ openssl req -new -key "site.key" -out "site.csr" -sha256 \ -subj '/C=US/ST=CA/L=San Francisco/O=Docker/CN=localhost' ``` @@ -414,7 +414,7 @@ generate the site key and certificate, name the files `site.key` and 8. Sign the site certificate. - ```bash + ```console $ openssl x509 -req -days 750 -in "site.csr" -sha256 \ -CA "root-ca.crt" -CAkey "root-ca.key" -CAcreateserial \ -out "site.crt" -extfile "site.cnf" -extensions server @@ -452,7 +452,7 @@ generate the site key and certificate, name the files `site.key` and to decouple the key and certificate from the services that use them. In these examples, the secret name and the file name are the same. 
- ```bash + ```console $ docker secret create site.key site.key $ docker secret create site.crt site.crt @@ -461,13 +461,13 @@ generate the site key and certificate, name the files `site.key` and 3. Save the `site.conf` file in a Docker config. The first parameter is the name of the config, and the second parameter is the file to read it from. - ```bash + ```console $ docker config create site.conf site.conf ``` List the configs: - ```bash + ```console $ docker config ls ID NAME CREATED UPDATED @@ -479,7 +479,7 @@ generate the site key and certificate, name the files `site.key` and config. Set the mode to `0440` so that the file is only readable by its owner and that owner's group, not the world. - ```bash + ```console $ docker service create \ --name nginx \ --secret site.key \ @@ -498,7 +498,7 @@ generate the site key and certificate, name the files `site.key` and 5. Verify that the Nginx service is running. - ```bash + ```console $ docker service ls ID NAME MODE REPLICAS IMAGE @@ -513,7 +513,7 @@ generate the site key and certificate, name the files `site.key` and 6. Verify that the service is operational: you can reach the Nginx server, and that the correct TLS certificate is being used. - ```bash + ```console $ curl --cacert root-ca.crt https://0.0.0.0:3000 @@ -543,7 +543,7 @@ generate the site key and certificate, name the files `site.key` and ``` - ```bash + ```console $ openssl s_client -connect 0.0.0.0:3000 -CAfile root-ca.crt CONNECTED(00000003) @@ -588,7 +588,7 @@ generate the site key and certificate, name the files `site.key` and this example by removing the `nginx` service and the stored secrets and config. - ```bash + ```console $ docker service rm nginx $ docker secret rm site.crt site.key @@ -633,7 +633,7 @@ configuration file. 3. Update the `nginx` service to use the new config instead of the old one. 
- ```bash + ```console $ docker service update \ --config-rm site.conf \ --config-add source=site-v2.conf,target=/etc/nginx/conf.d/site.conf,mode=0440 \ @@ -644,14 +644,14 @@ configuration file. `docker service ps nginx`. When it is, you can remove the old `site.conf` config. - ```bash + ```console $ docker config rm site.conf ``` 5. To clean up, you can remove the `nginx` service, as well as the secrets and configs. - ```bash + ```console $ docker service rm nginx $ docker secret rm site.crt site.key diff --git a/engine/swarm/how-swarm-mode-works/swarm-task-states.md b/engine/swarm/how-swarm-mode-works/swarm-task-states.md index a81e6339b3..b144564202 100644 --- a/engine/swarm/how-swarm-mode-works/swarm-task-states.md +++ b/engine/swarm/how-swarm-mode-works/swarm-task-states.md @@ -48,7 +48,7 @@ Run `docker service ps ` to get the state of a task. The `CURRENT STATE` field shows the task's state and how long it's been there. -```bash +```console $ docker service ps webserver ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS owsz0yp6z375 webserver.1 nginx UbuntuVM Running Running 44 seconds ago diff --git a/engine/swarm/ingress.md b/engine/swarm/ingress.md index f505b39989..e244d25960 100644 --- a/engine/swarm/ingress.md +++ b/engine/swarm/ingress.md @@ -31,7 +31,7 @@ specify the port to bind on the routing mesh. If you leave off the `published` port, a random high-numbered port is bound for each service task. You need to inspect the task to determine the port. -```bash +```console $ docker service create \ --name \ --publish published=,target= \ @@ -51,7 +51,7 @@ is required. For example, the following command publishes port 80 in the nginx container to port 8080 for any node in the swarm: -```bash +```console $ docker service create \ --name my-web \ --publish published=8080,target=80 \ @@ -73,7 +73,7 @@ within the host. 
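The long `--publish published=<p>,target=<t>` form and the short `-p <p>:<t>[/protocol]` form carry the same three pieces of information. A toy translation of the short form into the long form's fields (illustrative only, not Docker's parser, which also handles IP prefixes and port ranges):

```python
# Toy translation of the short publish syntax ("-p 8080:80" or
# "-p 53:53/udp") into the long form's fields. Docker's real parser
# covers more shapes; this sketch assumes published:target[/protocol].
def parse_short_publish(spec: str) -> dict:
    ports, _, protocol = spec.partition("/")
    published, _, target = ports.partition(":")
    return {
        "published": int(published),
        "target": int(target),
        "protocol": protocol or "tcp",  # tcp is the default protocol
    }

print(parse_short_publish("8080:80"))
# {'published': 8080, 'target': 80, 'protocol': 'tcp'}
print(parse_short_publish("53:53/udp"))
# {'published': 53, 'target': 53, 'protocol': 'udp'}
```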
You can publish a port for an existing service using the following command: -```bash +```console $ docker service update \ --publish-add published=,target= \ @@ -83,7 +83,7 @@ You can use `docker service inspect` to view the service's published port. For instance: {% raw %} -```bash +```console $ docker service inspect --format="{{json .Endpoint.Spec.Ports}}" my-web [{"Protocol":"tcp","TargetPort":80,"PublishedPort":8080}] @@ -105,7 +105,7 @@ set the `protocol` key to either `tcp` or `udp`. **Long syntax:** -```bash +```console $ docker service create --name dns-cache \ --publish published=53,target=53 \ dns-cache @@ -113,7 +113,7 @@ $ docker service create --name dns-cache \ **Short syntax:** -```bash +```console $ docker service create --name dns-cache \ -p 53:53 \ dns-cache @@ -123,7 +123,7 @@ $ docker service create --name dns-cache \ **Long syntax:** -```bash +```console $ docker service create --name dns-cache \ --publish published=53,target=53 \ --publish published=53,target=53,protocol=udp \ @@ -132,7 +132,7 @@ $ docker service create --name dns-cache \ **Short syntax:** -```bash +```console $ docker service create --name dns-cache \ -p 53:53 \ -p 53:53/udp \ @@ -143,7 +143,7 @@ $ docker service create --name dns-cache \ **Long syntax:** -```bash +```console $ docker service create --name dns-cache \ --publish published=53,target=53,protocol=udp \ dns-cache @@ -151,7 +151,7 @@ $ docker service create --name dns-cache \ **Short syntax:** -```bash +```console $ docker service create --name dns-cache \ -p 53:53/udp \ dns-cache @@ -180,7 +180,7 @@ set `mode` to `host`. If you omit the `mode` key or set it to `ingress`, the routing mesh is used. The following command creates a global service using `host` mode and bypassing the routing mesh. 
-```bash +```console $ docker service create --name dns-cache \ --publish published=53,target=53,protocol=udp,mode=host \ --mode global \ diff --git a/engine/swarm/join-nodes.md b/engine/swarm/join-nodes.md index e9a9c1dbcd..225c60f61c 100644 --- a/engine/swarm/join-nodes.md +++ b/engine/swarm/join-nodes.md @@ -28,7 +28,7 @@ to [Run Docker Engine in swarm mode](swarm-mode.md#view-the-join-command-or-upda To retrieve the join command including the join token for worker nodes, run the following command on a manager node: -```bash +```console $ docker swarm join-token worker To add a worker to this swarm, run the following command: @@ -40,7 +40,7 @@ To add a worker to this swarm, run the following command: Run the command from the output on the worker to join the swarm: -```bash +```console $ docker swarm join \ --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \ 192.168.99.100:2377 @@ -76,7 +76,7 @@ For more detail about swarm managers and administering a swarm, see To retrieve the join command including the join token for manager nodes, run the following command on a manager node: -```bash +```console $ docker swarm join-token manager To add a manager to this swarm, run the following command: @@ -88,7 +88,7 @@ To add a manager to this swarm, run the following command: Run the command from the output on the new manager node to join it to the swarm: -```bash +```console $ docker swarm join \ --token SWMTKN-1-61ztec5kyafptydic6jfc1i33t37flcl4nuipzcusor96k7kby-5vy9t8u35tuqm7vh67lrz9xp6 \ 192.168.99.100:2377 diff --git a/engine/swarm/manage-nodes.md b/engine/swarm/manage-nodes.md index 5b483f393e..fff69a88e2 100644 --- a/engine/swarm/manage-nodes.md +++ b/engine/swarm/manage-nodes.md @@ -15,7 +15,7 @@ As part of the swarm management lifecycle, you may need to view or update a node To view a list of nodes in the swarm run `docker node ls` from a manager node: -```bash +```console $ docker node ls ID HOSTNAME STATUS AVAILABILITY 
MANAGER STATUS @@ -58,7 +58,7 @@ You can run `docker node inspect ` on a manager node to view the details for an individual node. The output defaults to JSON format, but you can pass the `--pretty` flag to print the results in human-readable format. For example: -```bash +```console $ docker node inspect self --pretty ID: ehkv3bcimagdese79dn78otj5 @@ -103,7 +103,7 @@ Changing node availability lets you: For example, to change a manager node to `Drain` availability: -```bash +```console $ docker node update --availability drain node-1 node-1 @@ -124,7 +124,7 @@ pair. Pass the `--label-add` flag once for each node label you want to add: -```bash +```console $ docker node update --label-add foo --label-add bar=baz node-1 node-1 @@ -164,7 +164,7 @@ maintenance. Similarly, you can demote a manager node to the worker role. To promote a node or set of nodes, run `docker node promote` from a manager node: -```bash +```console $ docker node promote node-3 node-2 Node node-3 promoted to a manager in the swarm. @@ -173,7 +173,7 @@ Node node-2 promoted to a manager in the swarm. To demote a node or set of nodes, run `docker node demote` from a manager node: -```bash +```console $ docker node demote node-3 node-2 Manager node-3 demoted in the swarm. @@ -210,7 +210,7 @@ Run the `docker swarm leave` command on a node to remove it from the swarm. For example to leave the swarm on a worker node: -```bash +```console $ docker swarm leave Node left the swarm. @@ -232,7 +232,7 @@ manager node to remove the node from the node list. 
For instance: -```bash +```console $ docker node rm node-2 ``` diff --git a/engine/swarm/networking.md b/engine/swarm/networking.md index fb9e385040..0bc01c590d 100644 --- a/engine/swarm/networking.md +++ b/engine/swarm/networking.md @@ -60,7 +60,7 @@ each other over the following ports: To create an overlay network, specify the `overlay` driver when using the `docker network create` command: -```bash +```console $ docker network create \ --driver overlay \ my-network @@ -73,7 +73,7 @@ subnet and uses default options. You can see information about the network using When no containers are connected to the overlay network, its configuration is not very exciting: -```bash +```console $ docker network inspect my-network [ { @@ -110,7 +110,7 @@ connects to the network for the first time. The following example shows the same network as above, but with three containers of a `redis` service connected to it. -```bash +```console $ docker network inspect my-network [ { @@ -184,7 +184,7 @@ the first service is connected to the network. You can configure these when creating a network using the `--subnet` and `--gateway` flags. The following example extends the previous one by configuring the subnet and gateway. -```bash +```console $ docker network create \ --driver overlay \ --subnet 10.0.9.0/24 \ @@ -198,7 +198,7 @@ To customize subnet allocation for your Swarm networks, you can [optionally conf For example, the following command is used when initializing Swarm: -```bash +```console $ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 26` ``` @@ -237,7 +237,7 @@ option before using it in production. To attach a service to an existing overlay network, pass the `--network` flag to `docker service create`, or the `--network-add` flag to `docker service update`. -```bash +```console $ docker service create \ --replicas 3 \ --name my-web \ @@ -310,7 +310,7 @@ services which publish ports, such as a WordPress service which publishes port 2. 
Remove the existing `ingress` network: - ```bash + ```console $ docker network rm ingress WARNING! Before removing the routing-mesh network, make sure all the nodes @@ -324,7 +324,7 @@ services which publish ports, such as a WordPress service which publishes port custom options you want to set. This example sets the MTU to 1200, sets the subnet to `10.11.0.0/16`, and sets the gateway to `10.11.0.2`. - ```bash + ```console $ docker network create \ --driver overlay \ --ingress \ @@ -365,7 +365,7 @@ order to delete an existing bridge. The package name is `bridge-utils`. This example uses the subnet `10.11.0.0/16`. For a full list of customizable options, see [Bridge driver options](../reference/commandline/network_create.md#bridge-driver-options). - ```bash + ```console $ docker network create \ --subnet 10.11.0.0/16 \ --opt com.docker.network.bridge.name=docker_gwbridge \ @@ -396,14 +396,14 @@ that your Docker host has two different network interfaces: 10.0.0.1 should be used for control and management traffic and 192.168.0.1 should be used for traffic relating to services. -```bash +```console $ docker swarm init --advertise-addr 10.0.0.1 --data-path-addr 192.168.0.1 ``` This example joins the swarm managed by host `192.168.99.100:2377` and sets the `--advertise-addr` flag to `eth0` and the `--data-path-addr` flag to `eth1`. -```bash +```console $ docker swarm join \ --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2d7c \ --advertise-addr eth0 \ diff --git a/engine/swarm/secrets.md b/engine/swarm/secrets.md index 1a4f95f394..95098ae6a8 100644 --- a/engine/swarm/secrets.md +++ b/engine/swarm/secrets.md @@ -149,7 +149,7 @@ real-world example, continue to input because the last argument, which represents the file to read the secret from, is set to `-`. 
- ```bash + ```console $ printf "This is a secret" | docker secret create my_secret_data - ``` @@ -157,14 +157,14 @@ real-world example, continue to the container can access the secret at `/run/secrets/`, but you can customize the file name on the container using the `target` option. - ```bash + ```console $ docker service create --name redis --secret my_secret_data redis:alpine ``` 3. Verify that the task is running without issues using `docker service ps`. If everything is working, the output looks similar to this: - ```bash + ```console $ docker service ps redis ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS @@ -174,7 +174,7 @@ real-world example, continue to If there were an error, and the task were failing and repeatedly restarting, you would see something like this: - ```bash + ```console $ docker service ps redis NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS @@ -192,7 +192,7 @@ real-world example, continue to how to find the container ID, and the second and third commands use shell completion to do this automatically. - ```bash + ```console $ docker ps --filter name=redis -q 5cb1c2348a59 @@ -220,8 +220,7 @@ real-world example, continue to 6. Try removing the secret. The removal fails because the `redis` service is running and has access to the secret. - ```bash - + ```console $ docker secret ls ID NAME CREATED UPDATED @@ -237,7 +236,7 @@ real-world example, continue to 7. Remove access to the secret from the running `redis` service by updating the service. - ```bash + ```console $ docker service update --secret-rm my_secret_data redis ``` @@ -253,7 +252,7 @@ real-world example, continue to 7. Stop and remove the service, and remove the secret from Docker. - ```bash + ```console $ docker service rm redis $ docker secret rm my_secret_data @@ -336,13 +335,13 @@ generate the site key and certificate, name the files `site.key` and 1. Generate a root key. - ```bash + ```console $ openssl genrsa -out "root-ca.key" 4096 ``` 2. 
Generate a CSR using the root key. - ```bash + ```console $ openssl req \ -new -key "root-ca.key" \ -out "root-ca.csr" -sha256 \ @@ -362,7 +361,7 @@ generate the site key and certificate, name the files `site.key` and 4. Sign the certificate. - ```bash + ```console $ openssl x509 -req -days 3650 -in "root-ca.csr" \ -signkey "root-ca.key" -sha256 -out "root-ca.crt" \ -extfile "root-ca.cnf" -extensions \ @@ -371,13 +370,13 @@ generate the site key and certificate, name the files `site.key` and 5. Generate the site key. - ```bash + ```console $ openssl genrsa -out "site.key" 4096 ``` 6. Generate the site certificate and sign it with the site key. - ```bash + ```console $ openssl req -new -key "site.key" -out "site.csr" -sha256 \ -subj '/C=US/ST=CA/L=San Francisco/O=Docker/CN=localhost' ``` @@ -399,7 +398,7 @@ generate the site key and certificate, name the files `site.key` and 8. Sign the site certificate. - ```bash + ```console $ openssl x509 -req -days 750 -in "site.csr" -sha256 \ -CA "root-ca.crt" -CAkey "root-ca.key" -CAcreateserial \ -out "site.crt" -extfile "site.cnf" -extensions server @@ -440,7 +439,7 @@ generate the site key and certificate, name the files `site.key` and secret from on the host machine's filesystem. In these examples, the secret name and the file name are the same. - ```bash + ```console $ docker secret create site.key site.key $ docker secret create site.crt site.crt @@ -448,7 +447,7 @@ generate the site key and certificate, name the files `site.key` and $ docker secret create site.conf site.conf ``` - ```bash + ```console $ docker secret ls ID NAME CREATED UPDATED @@ -474,7 +473,7 @@ generate the site key and certificate, name the files `site.key` and secret available in a different path. 
The example below creates a symbolic link to the true location of the `site.conf` file so that Nginx can read it: - ```bash + ```console $ docker service create \ --name nginx \ --secret site.key \ @@ -490,7 +489,7 @@ generate the site key and certificate, name the files `site.key` and secret is made available at `/etc/nginx/conf.d/site.conf` inside the container without the use of symbolic links: - ```bash + ```console $ docker service create \ --name nginx \ --secret site.key \ @@ -512,7 +511,7 @@ generate the site key and certificate, name the files `site.key` and 5. Verify that the Nginx service is running. - ```bash + ```console $ docker service ls ID NAME MODE REPLICAS IMAGE @@ -527,7 +526,7 @@ generate the site key and certificate, name the files `site.key` and 6. Verify that the service is operational: you can reach the Nginx server, and that the correct TLS certificate is being used. - ```bash + ```console $ curl --cacert root-ca.crt https://localhost:3000 @@ -557,7 +556,7 @@ generate the site key and certificate, name the files `site.key` and ``` - ```bash + ```console $ openssl s_client -connect localhost:3000 -CAfile root-ca.crt CONNECTED(00000003) @@ -601,7 +600,7 @@ generate the site key and certificate, name the files `site.key` and 7. To clean up after running this example, remove the `nginx` service and the stored secrets. - ```bash + ```console $ docker service rm nginx $ docker secret rm site.crt site.key site.conf @@ -647,7 +646,7 @@ line. The last argument is set to `-`, which indicates that the input is read from standard input. - ```bash + ```console $ openssl rand -base64 20 | docker secret create mysql_password - l1vinzevzhj4goakjap5ya409 @@ -660,13 +659,13 @@ line. shared with the WordPress service created later. It's only needed to bootstrap the `mysql` service. 
- ```bash + ```console $ openssl rand -base64 20 | docker secret create mysql_root_password - ``` List the secrets managed by Docker using `docker secret ls`: - ```bash + ```console $ docker secret ls ID NAME CREATED UPDATED @@ -680,7 +679,7 @@ line. between the MySQL and WordPress services. There is no need to expose the MySQL service to any external host or container. - ```bash + ```console $ docker network create -d overlay mysql_private ``` @@ -711,7 +710,7 @@ line. user cannot create or drop databases or change the MySQL configuration. - ```bash + ```console $ docker service create \ --name mysql \ --replicas 1 \ @@ -728,7 +727,7 @@ line. 4. Verify that the `mysql` container is running using the `docker service ls` command. - ```bash + ```console $ docker service ls ID NAME MODE REPLICAS IMAGE @@ -768,7 +767,7 @@ line. - Stores its data, such as themes and plugins, in a volume called `wpdata` so these files persist when the service restarts. - ```bash + ```console $ docker service create \ --name wordpress \ --replicas 1 \ @@ -786,7 +785,7 @@ line. 6. Verify the service is running using `docker service ls` and `docker service ps` commands. - ```bash + ```console $ docker service ls ID NAME MODE REPLICAS IMAGE @@ -794,7 +793,7 @@ line. nzt5xzae4n62 wordpress replicated 1/1 wordpress:latest ``` - ```bash + ```console $ docker service ps wordpress ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS @@ -836,7 +835,7 @@ use it, then remove the old secret. 1. Create the new password and store it as a secret named `mysql_password_v2`. - ```bash + ```console $ openssl rand -base64 20 | docker secret create mysql_password_v2 - ``` @@ -844,7 +843,7 @@ use it, then remove the old secret. Remember that you cannot update or rename a secret, but you can revoke a secret and grant access to it using a new target filename. - ```bash + ```console $ docker service update \ --secret-rm mysql_password mysql @@ -874,7 +873,7 @@ use it, then remove the old secret. 
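The `openssl rand -base64 20` pipeline above simply base64-encodes 20 random bytes. If `openssl` is unavailable, an equivalent value can be produced with Python's standard library (a sketch, not part of the tutorial's commands):

```python
# Equivalent of `openssl rand -base64 20`: 20 random bytes, base64-encoded.
import base64
import os

def random_secret(nbytes: int = 20) -> str:
    return base64.b64encode(os.urandom(nbytes)).decode("ascii")

secret = random_secret()
print(secret)       # a random base64 string
print(len(secret))  # 28 (20 bytes encode to 28 base64 characters)
```

The output could be piped to `docker secret create` the same way as the `openssl` version.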
First, find the ID of the `mysql` container task. - ```bash + ```console $ docker ps --filter name=mysql -q c7705cf6176f @@ -883,14 +882,14 @@ use it, then remove the old secret. Substitute the ID in the command below, or use the second variant which uses shell expansion to do it all in a single step. - ```bash + ```console $ docker container exec \ bash -c 'mysqladmin --user=wordpress --password="$(< /run/secrets/old_mysql_password)" password "$(< /run/secrets/mysql_password)"' ``` **or**: - ```bash + ```console $ docker container exec $(docker ps --filter name=mysql -q) \ bash -c 'mysqladmin --user=wordpress --password="$(< /run/secrets/old_mysql_password)" password "$(< /run/secrets/mysql_password)"' ``` @@ -900,7 +899,7 @@ use it, then remove the old secret. `0400`. This triggers a rolling restart of the WordPress service and the new secret is used. - ```bash + ```console $ docker service update \ --secret-rm mysql_password \ --secret-add source=mysql_password_v2,target=wp_db_password,mode=0400 \ @@ -917,8 +916,7 @@ use it, then remove the old secret. 6. Revoke access to the old secret from the MySQL service and remove the old secret from Docker. - ```bash - + ```console $ docker service update \ --secret-rm mysql_password \ mysql @@ -932,7 +930,7 @@ use it, then remove the old secret. WordPress service, the MySQL container, the `mydata` and `wpdata` volumes, and the Docker secrets. - ```bash + ```console $ docker service rm wordpress mysql $ docker volume rm mydata wpdata diff --git a/engine/swarm/services.md b/engine/swarm/services.md index 6d13bb154a..f5a68b8a71 100644 --- a/engine/swarm/services.md +++ b/engine/swarm/services.md @@ -29,14 +29,14 @@ to supply the image name. This command starts an Nginx service with a randomly-generated name and no published ports. This is a naive example, since you can't interact with the Nginx service. -```bash +```console $ docker service create nginx ``` The service is scheduled on an available node. 
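The password-rotation one-liner earlier relies on bash's `$(< file)` expansion to splice a secret's contents into `mysqladmin` without typing the value. A minimal sketch of that expansion, using a throwaway directory in place of a real `/run/secrets` mount and an invented password value:

```shell
# Stand-ins only: /tmp/demo-secrets mimics /run/secrets, and the password
# value is made up for illustration.
mkdir -p /tmp/demo-secrets
printf '%s' 'example-new-password' > /tmp/demo-secrets/mysql_password

# "$(< path)" is a bash-ism that expands to the file's contents without
# spawning cat; run it under bash explicitly in case this shell is not bash.
new_password="$(bash -c 'printf %s "$(< /tmp/demo-secrets/mysql_password)"')"

echo "$new_password"
```

Inside the service's container the same expansion reads the mounted secret file directly, which is why the `mysqladmin` command above never needs the password pasted into it.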
To confirm that the service was created and started successfully, use the `docker service ls` command: -```bash +```console $ docker service ls ID NAME MODE REPLICAS IMAGE PORTS @@ -51,7 +51,7 @@ information. To provide a name for your service, use the `--name` flag: -```bash +```console $ docker service create --name my_web nginx ``` @@ -60,14 +60,14 @@ service's containers should run, by adding it after the image name. This example starts a service called `helloworld` which uses an `alpine` image and runs the command `ping docker.com`: -```bash +```console $ docker service create --name helloworld alpine ping docker.com ``` You can also specify an image tag for the service to use. This example modifies the previous one to use the `alpine:3.6` tag: -```bash +```console $ docker service create --name helloworld alpine:3.6 ping docker.com ``` @@ -83,14 +83,14 @@ The following example assumes a gMSA and its credential spec (called credspec.js To use a Config as a credential spec, first create the Docker Config containing the credential spec: -```bash -docker config create credspec credspec.json +```console +$ docker config create credspec credspec.json ``` Now, you should have a Docker Config named credspec, and you can create a service using this credential spec. To do so, use the --credential-spec flag with the config name, like this: -```bash -docker service create --credential-spec="config://credspec" +```console +$ docker service create --credential-spec="config://credspec" ``` Your service will use the gMSA credential spec when it starts, but unlike a typical Docker Config (used by passing the --config flag), the credential spec will not be mounted into the container. 
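Docker configs store opaque bytes, so `docker config create` won't flag a credential spec file that isn't valid JSON; the mistake likely only surfaces later, when a service tries to use it. A hedged pre-flight sketch (the file below is a stand-in with one plausible field, not a complete gMSA credential spec):

```shell
# Hypothetical check before `docker config create credspec credspec.json`.
# The file content here is illustrative, not a full credential spec.
printf '%s' '{"CmsPlugins": ["ActiveDirectory"]}' > /tmp/credspec.json

# Docker treats config payloads as opaque, so validate the JSON yourself.
if python3 -m json.tool < /tmp/credspec.json > /dev/null; then
  echo "credspec.json parses as JSON"
fi
```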
@@ -102,7 +102,7 @@ If your image is available on a private registry which requires login, use the your image is stored on `registry.example.com`, which is a private registry, use a command like the following: -```bash +```console $ docker login registry.example.com $ docker service create \ @@ -167,13 +167,13 @@ was previously published. Assuming that the `my_web` service from the previous section still exists, use the following command to update it to publish port 80. -```bash +```console $ docker service update --publish-add 80 my_web ``` To verify that it worked, use `docker service ls`: -```bash +```console $ docker service ls ID NAME MODE REPLICAS IMAGE PORTS @@ -193,7 +193,7 @@ To remove a service, use the `docker service remove` command. You can remove a service by its ID or name, as shown in the output of the `docker service ls` command. The following command removes the `my_web` service. -```bash +```console $ docker service remove my_web ``` @@ -222,7 +222,7 @@ The following service's containers have an environment variable `$MYVAR` set to `myvalue`, run from the `/tmp/` directory, and run as the `my_user` user. -```bash +```console $ docker service create --name helloworld \ --env MYVAR=myvalue \ --workdir /tmp \ @@ -237,7 +237,7 @@ The following example updates an existing service called `helloworld` so that it runs the command `ping docker.com` instead of whatever command it was running before: -```bash +```console $ docker service update --args "ping docker.com" helloworld ``` @@ -255,7 +255,7 @@ An image version can be expressed in several different ways: When the request to create a container task is received on a worker node, the worker node only sees the digest, not the tag. 
- ```bash + ```console $ docker service create --name="myservice" ubuntu:16.04 ``` @@ -274,7 +274,7 @@ An image version can be expressed in several different ways: Thus, the following two commands are equivalent: - ```bash + ```console $ docker service create --name="myservice" ubuntu $ docker service create --name="myservice" ubuntu:latest @@ -283,7 +283,7 @@ An image version can be expressed in several different ways: - If you specify a digest directly, that exact version of the image is always used when creating service tasks. - ```bash + ```console $ docker service create \ --name="myservice" \ ubuntu:16.04@sha256:35bc48a1ca97c3971611dc4662d08d131869daa692acb281c7e9e052924e38b1 @@ -318,7 +318,7 @@ To see an image's current digest, issue the command following is the current digest for `ubuntu:latest` at the time this content was written. The output is truncated for clarity. -```bash +```console $ docker inspect ubuntu:latest ``` @@ -426,7 +426,7 @@ more details about swarm service networking, see Imagine that you have a 10-node swarm, and you deploy an Nginx service running three tasks on a 10-node swarm: -```bash +```console $ docker service create --name my_web \ --replicas 3 \ --publish published=8080,target=80 \ @@ -442,7 +442,7 @@ host, substitute the host's IP address or resolvable host name. The HTML output is truncated: -```bash +```console $ curl localhost:8080 @@ -483,7 +483,7 @@ web page for (effectively) **a random swarm node** running the service. The following example runs nginx as a service on each node in your swarm and exposes nginx port locally on each swarm node. -```bash +```console $ docker service create \ --mode global \ --publish mode=host,target=80,published=8080 \ @@ -506,7 +506,7 @@ You can use overlay networks to connect one or more services within the swarm. First, create overlay network on a manager node using the `docker network create` command with the `--driver overlay` flag. 
-```bash +```console $ docker network create --driver overlay my-network ``` @@ -516,7 +516,7 @@ to the network. You can create a new service and pass the `--network` flag to attach the service to the overlay network: -```bash +```console $ docker service create \ --replicas 3 \ --network my-network \ @@ -529,13 +529,13 @@ The swarm extends `my-network` to each node running the service. You can also connect an existing service to an overlay network using the `--network-add` flag. -```bash +```console $ docker service update --network-add my-network my-web ``` To disconnect a running service from a network, use the `--network-rm` flag. -```bash +```console $ docker service update --network-rm my-network my-web ``` @@ -627,7 +627,7 @@ mode, the service defaults to `replicated`. For replicated services, you specify the number of replica tasks you want to start using the `--replicas` flag. For example, to start a replicated nginx service with 3 replica tasks: -```bash +```console $ docker service create \ --name my_web \ --replicas 3 \ @@ -639,7 +639,7 @@ To start a global service on each available node, pass `--mode global` to places a task for the global service on the new node. For example to start a service that runs alpine on every node in the swarm: -```bash +```console $ docker service create \ --name myservice \ --mode global \ @@ -684,7 +684,7 @@ services run on the same node, or each node only runs one replica, or that some nodes don't run any replicas. For global services, the service runs on every node that meets the placement constraint and any [resource requirements](#reserve-memory-or-cpus-for-a-service). -```bash +```console $ docker service create \ --name my-nginx \ --replicas 5 \ @@ -699,7 +699,7 @@ If you specify multiple placement constraints, the service only deploys onto nodes where they are all met. 
The following example limits the service to run on all nodes where `region` is set to `east` and `type` is not set to `devel`: -```bash +```console $ docker service create \ --name my-nginx \ --mode global \ @@ -735,7 +735,7 @@ based on the value of the `datacenter` label. If some nodes have `datacenter=us-east` and others have `datacenter=us-west`, the service is deployed as evenly as possible across the two sets of nodes. -```bash +```console $ docker service create \ --replicas 9 \ --name redis_2 \ @@ -758,7 +758,7 @@ order they are encountered. The following example sets up a service with multiple placement preferences. Tasks are spread first over the various datacenters, and then over racks (as indicated by the respective labels): -```bash +```console $ docker service create \ --replicas 9 \ --name redis_2 \ @@ -807,7 +807,7 @@ In the example service below, the scheduler applies updates to a maximum of 2 replicas at a time. When an updated task returns either `RUNNING` or `FAILED`, the scheduler waits 10 seconds before stopping the next task to update: -```bash +```console $ docker service create \ --replicas 10 \ --name my_web \ @@ -840,7 +840,7 @@ to the configuration that was in place before the most recent Other options can be combined with `--rollback`; for example, `--update-delay 0s` to execute the rollback without a delay between tasks: -```bash +```console $ docker service update \ --rollback \ --update-delay 0s @@ -876,7 +876,7 @@ parallel. Tasks are monitored for 20 seconds after rollback to be sure they do not exit, and a maximum failure ratio of 20% is tolerated. Default values are used for `--rollback-delay` and `--rollback-failure-action`. -```bash +```console $ docker service create --name=my_redis \ --replicas=5 \ --rollback-parallelism=2 \ @@ -908,7 +908,7 @@ created automatically according to the volume specification on the service. 
To use existing data volumes with a service use the `--mount` flag: -```bash +```console $ docker service create \ --mount src=,dst= \ --name myservice \ @@ -920,7 +920,7 @@ scheduled to a particular host, then one is created. The default volume driver is `local`. To use a different volume driver with this create-on-demand pattern, specify the driver and its options with the `--mount` flag: -```bash +```console $ docker service create \ --mount type=volume,src=,dst=,volume-driver=,volume-opt==,volume-opt== --name myservice \ @@ -942,7 +942,7 @@ The following examples show bind mount syntax: - To mount a read-write bind: - ```bash + ```console $ docker service create \ --mount type=bind,src=,dst= \ --name myservice \ @@ -951,7 +951,7 @@ The following examples show bind mount syntax: - To mount a read-only bind: - ```bash + ```console $ docker service create \ --mount type=bind,src=,dst=,readonly \ --name myservice \ @@ -1005,7 +1005,7 @@ This example sets the template of the created containers based on the service's name and the ID of the node where the container is running: {% raw %} -```bash +```console $ docker service create --name hosttempl \ --hostname="{{.Node.ID}}-{{.Service.Name}}"\ busybox top @@ -1015,7 +1015,7 @@ $ docker service create --name hosttempl \ To see the result of using the template, use the `docker service ps` and `docker inspect` commands. 
-```bash +```console $ docker service ps va8ew30grofhjoychbr6iot8c ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS @@ -1023,7 +1023,7 @@ wo41w8hg8qan hosttempl.1 busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31 ``` {% raw %} -```bash +```console $ docker inspect --format="{{.Config.Hostname}}" hosttempl.1.wo41w8hg8qanxwjwsg4kxpprj ``` {% endraw %} diff --git a/engine/swarm/stack-deploy.md b/engine/swarm/stack-deploy.md index 7fb208aed6..51379e7a6e 100644 --- a/engine/swarm/stack-deploy.md +++ b/engine/swarm/stack-deploy.md @@ -39,13 +39,13 @@ a throwaway registry, which you can discard afterward. 1. Start the registry as a service on your swarm: - ```bash + ```console $ docker service create --name registry --publish published=5000,target=5000 registry:2 ``` 2. Check its status with `docker service ls`: - ```bash + ```console $ docker service ls ID NAME REPLICAS IMAGE COMMAND @@ -57,7 +57,7 @@ a throwaway registry, which you can discard afterward. 3. Check that it's working with `curl`: - ```bash + ```console $ curl http://localhost:5000/v2/ {} @@ -72,7 +72,7 @@ counter whenever you visit it. 1. Create a directory for the project: - ```bash + ```console $ mkdir stackdemo $ cd stackdemo ``` @@ -163,7 +163,7 @@ counter whenever you visit it. 2. Check that the app is running with `docker-compose ps`: - ```bash + ```console $ docker-compose ps Name Command State Ports @@ -174,7 +174,7 @@ counter whenever you visit it. You can test the app with `curl`: - ```bash + ```console $ curl http://localhost:8000 Hello World! I have been seen 1 times. @@ -187,7 +187,7 @@ counter whenever you visit it. 3. Bring the app down: - ```bash + ```console $ docker-compose down --volumes Stopping stackdemo_web_1 ... done @@ -203,7 +203,7 @@ counter whenever you visit it. To distribute the web app's image across the swarm, it needs to be pushed to the registry you set up earlier. 
With Compose, this is very simple: -```bash +```console $ docker-compose push Pushing web (127.0.0.1:5000/stackdemo:latest)... @@ -223,7 +223,7 @@ The stack is now ready to be deployed. 1. Create the stack with `docker stack deploy`: - ```bash + ```console $ docker stack deploy --compose-file docker-compose.yml stackdemo Ignoring unsupported options: build @@ -238,7 +238,7 @@ The stack is now ready to be deployed. 2. Check that it's running with `docker stack services stackdemo`: - ```bash + ```console $ docker stack services stackdemo ID NAME MODE REPLICAS IMAGE @@ -252,7 +252,7 @@ The stack is now ready to be deployed. As before, you can test the app with `curl`: - ```bash + ```console $ curl http://localhost:8000 Hello World! I have been seen 1 times. @@ -266,14 +266,14 @@ The stack is now ready to be deployed. Thanks to Docker's built-in routing mesh, you can access any node in the swarm on port 8000 and get routed to the app: - ```bash + ```console $ curl http://address-of-other-node:8000 Hello World! I have been seen 4 times. ``` 3. Bring the stack down with `docker stack rm`: - ```bash + ```console $ docker stack rm stackdemo Removing service stackdemo_web @@ -283,14 +283,14 @@ The stack is now ready to be deployed. 4. Bring the registry down with `docker service rm`: - ```bash + ```console $ docker service rm registry ``` 5. If you're just testing things out on a local machine and want to bring your Docker Engine out of swarm mode, use `docker swarm leave`: - ```bash + ```console $ docker swarm leave --force Node left the swarm. diff --git a/engine/swarm/swarm-mode.md b/engine/swarm/swarm-mode.md index 9a582c9382..956fda59e8 100644 --- a/engine/swarm/swarm-mode.md +++ b/engine/swarm/swarm-mode.md @@ -51,7 +51,7 @@ external to the swarm. 
The output for `docker swarm init` provides the connection command to use when you join new worker nodes to the swarm: -```bash +```console $ docker swarm init Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager. @@ -83,20 +83,20 @@ The subnet range comes from the `--default-addr-pool`, (such as `10.10.0.0/16`). The format of the command is: -```bash +```console $ docker swarm init --default-addr-pool [--default-addr-pool --default-addr-pool-mask-length ] ``` To create a default IP address pool with a /16 (class B) for the 10.20.0.0 network looks like this: -```bash +```console $ docker swarm init --default-addr-pool 10.20.0.0/16 ``` To create a default IP address pool with a `/16` (class B) for the `10.20.0.0` and `10.30.0.0` networks, and to create a subnet mask of `/26` for each network looks like this: -```bash +```console $ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool 10.30.0.0/16 --default-addr-pool-mask-length 26 ``` @@ -120,7 +120,7 @@ single IP address. If so, Docker uses the IP address with the listening port correct `--advertise-addr` to enable inter-manager communication and overlay networking: -```bash +```console $ docker swarm init --advertise-addr ``` @@ -146,7 +146,7 @@ swarm. To retrieve the join command including the join token for worker nodes, run: -```bash +```console $ docker swarm join-token worker To add a worker to this swarm, run the following command: @@ -160,7 +160,7 @@ This node joined a swarm as a worker. 
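The `--default-addr-pool` sizing shown earlier can be sanity-checked with shell arithmetic: a `/16` pool split with `--default-addr-pool-mask-length 26` yields 2^(26-16) = 1024 overlay subnets of 2^(32-26) = 64 addresses each.

```shell
# Pure arithmetic; no Docker commands are run here.
pool_bits=16      # from --default-addr-pool 10.20.0.0/16
subnet_bits=26    # from --default-addr-pool-mask-length 26

subnets=$(( 1 << (subnet_bits - pool_bits) ))
addrs_per_subnet=$(( 1 << (32 - subnet_bits) ))

echo "subnets=$subnets addrs_per_subnet=$addrs_per_subnet"
```

With two `/16` pools, as in the second example, the subnet count doubles while the per-subnet size stays at 64 addresses.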
To view the join command and token for manager nodes, run: -```bash +```console $ docker swarm join-token manager To add a worker to this swarm, run the following command: @@ -172,7 +172,7 @@ To add a worker to this swarm, run the following command: Pass the `--quiet` flag to print only the token: -```bash +```console $ docker swarm join-token --quiet worker SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c @@ -200,7 +200,7 @@ Run `swarm join-token --rotate` to invalidate the old token and generate a new token. Specify whether you want to rotate the token for `worker` or `manager` nodes: -```bash +```console $ docker swarm join-token --rotate worker To add a worker to this swarm, run the following command: diff --git a/engine/swarm/swarm-tutorial/add-nodes.md b/engine/swarm/swarm-tutorial/add-nodes.md index 4a67b137d0..ac5555438a 100644 --- a/engine/swarm/swarm-tutorial/add-nodes.md +++ b/engine/swarm/swarm-tutorial/add-nodes.md @@ -15,7 +15,7 @@ to add worker nodes. [Create a swarm](create-swarm.md) tutorial step to create a worker node joined to the existing swarm: - ```bash + ```console $ docker swarm join \ --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \ 192.168.99.100:2377 @@ -26,7 +26,7 @@ to add worker nodes. If you don't have the command available, you can run the following command on a manager node to retrieve the join command for a worker: - ```bash + ```console $ docker swarm join-token worker To add a worker to this swarm, run the following command: @@ -43,7 +43,7 @@ to add worker nodes. [Create a swarm](create-swarm.md) tutorial step to create a second worker node joined to the existing swarm: - ```bash + ```console $ docker swarm join \ --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \ 192.168.99.100:2377 @@ -54,7 +54,8 @@ to add worker nodes. 5. 
Open a terminal and ssh into the machine where the manager node runs and run the `docker node ls` command to see the worker nodes: - ```bash + ```console + $ docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS 03g1y59jwfg7cf99w4lt0f662 worker2 Ready Active 9j68exjopxe7wfl6yuxml7a7j worker1 Ready Active diff --git a/engine/swarm/swarm-tutorial/create-swarm.md b/engine/swarm/swarm-tutorial/create-swarm.md index ceb325825c..6b7c72e20e 100644 --- a/engine/swarm/swarm-tutorial/create-swarm.md +++ b/engine/swarm/swarm-tutorial/create-swarm.md @@ -13,13 +13,13 @@ machines. node. This tutorial uses a machine named `manager1`. If you use Docker Machine, you can connect to it via SSH using the following command: - ```bash + ```console $ docker-machine ssh manager1 ``` 2. Run the following command to create a new swarm: - ```bash + ```console $ docker swarm init --advertise-addr ``` @@ -33,7 +33,7 @@ machines. In the tutorial, the following command creates a swarm on the `manager1` machine: - ```bash + ```console $ docker swarm init --advertise-addr 192.168.99.100 Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager. @@ -56,7 +56,7 @@ machines. 2. Run `docker info` to view the current state of the swarm: - ```bash + ```console $ docker info Containers: 2 @@ -74,7 +74,7 @@ machines. 3. Run the `docker node ls` command to view information about nodes: - ```bash + ```console $ docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS diff --git a/engine/swarm/swarm-tutorial/delete-service.md b/engine/swarm/swarm-tutorial/delete-service.md index 78d502f727..71b608c475 100644 --- a/engine/swarm/swarm-tutorial/delete-service.md +++ b/engine/swarm/swarm-tutorial/delete-service.md @@ -14,7 +14,7 @@ you can delete the service from the swarm. 2. Run `docker service rm helloworld` to remove the `helloworld` service. - ```bash + ```console $ docker service rm helloworld helloworld @@ -24,7 +24,7 @@ you can delete the service from the swarm. 
removed the service. The CLI returns a message that the service is not found: - ```bash + ```console $ docker service inspect helloworld [] Error: no such service: helloworld @@ -34,7 +34,7 @@ you can delete the service from the swarm. seconds to clean up. You can use `docker ps` on the nodes to verify when the tasks have been removed. - ```bash + ```console $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES diff --git a/engine/swarm/swarm-tutorial/deploy-service.md b/engine/swarm/swarm-tutorial/deploy-service.md index 9cc3990c6f..ede843651e 100644 --- a/engine/swarm/swarm-tutorial/deploy-service.md +++ b/engine/swarm/swarm-tutorial/deploy-service.md @@ -14,7 +14,7 @@ is not a requirement to deploy a service. 2. Run the following command: - ```bash + ```console $ docker service create --replicas 1 --name helloworld alpine ping docker.com 9uk4639qpg7npwf3fn2aasksr @@ -28,7 +28,7 @@ is not a requirement to deploy a service. 3. Run `docker service ls` to see the list of running services: - ```bash + ```console $ docker service ls ID NAME SCALE IMAGE COMMAND diff --git a/engine/swarm/swarm-tutorial/drain-node.md b/engine/swarm/swarm-tutorial/drain-node.md index ccda7c0bfc..bf38108d8f 100644 --- a/engine/swarm/swarm-tutorial/drain-node.md +++ b/engine/swarm/swarm-tutorial/drain-node.md @@ -26,7 +26,7 @@ node and launches replica tasks on a node with `ACTIVE` availability. 2. Verify that all your nodes are actively available. - ```bash + ```console $ docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS @@ -38,7 +38,7 @@ node and launches replica tasks on a node with `ACTIVE` availability. 3. If you aren't still running the `redis` service from the [rolling update](rolling-update.md) tutorial, start it now: - ```bash + ```console $ docker service create --replicas 3 --name redis --update-delay 10s redis:3.0.6 c5uo6kdmzpon37mgj9mwglcfw @@ -47,7 +47,7 @@ node and launches replica tasks on a node with `ACTIVE` availability. 4. 
Run `docker service ps redis` to see how the swarm manager assigned the tasks to different nodes: - ```bash + ```console $ docker service ps redis NAME IMAGE NODE DESIRED STATE CURRENT STATE @@ -62,15 +62,15 @@ tasks to different nodes: 5. Run `docker node update --availability drain ` to drain a node that had a task assigned to it: - ```bash - docker node update --availability drain worker1 + ```console + $ docker node update --availability drain worker1 worker1 ``` 6. Inspect the node to check its availability: - ```bash + ```console $ docker node inspect --pretty worker1 ID: 38ciaotwjuritcdtn9npbnkuz @@ -86,7 +86,7 @@ had a task assigned to it: 7. Run `docker service ps redis` to see how the swarm manager updated the task assignments for the `redis` service: - ```bash + ```console $ docker service ps redis NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR @@ -103,7 +103,7 @@ task assignments for the `redis` service: 8. Run `docker node update --availability active ` to return the drained node to an active state: - ```bash + ```console $ docker node update --availability active worker1 worker1 @@ -111,7 +111,7 @@ drained node to an active state: 9. Inspect the node to see the updated state: - ```bash + ```console $ docker node inspect --pretty worker1 ID: 38ciaotwjuritcdtn9npbnkuz diff --git a/engine/swarm/swarm-tutorial/inspect-service.md b/engine/swarm/swarm-tutorial/inspect-service.md index dcba0ab55e..3665fdd483 100644 --- a/engine/swarm/swarm-tutorial/inspect-service.md +++ b/engine/swarm/swarm-tutorial/inspect-service.md @@ -17,7 +17,7 @@ the Docker CLI to see details about the service running in the swarm. To see the details on the `helloworld` service: - ```bash + ```console [manager1]$ docker service inspect --pretty helloworld ID: 9uk4639qpg7npwf3fn2aasksr @@ -37,7 +37,7 @@ the Docker CLI to see details about the service running in the swarm. >**Tip**: To return the service details in json format, run the same command without the `--pretty` flag. 
- ```bash + ```console [manager1]$ docker service inspect helloworld [ { @@ -89,7 +89,7 @@ the Docker CLI to see details about the service running in the swarm. 4. Run `docker service ps ` to see which nodes are running the service: - ```bash + ```console [manager1]$ docker service ps helloworld NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS @@ -110,7 +110,7 @@ the Docker CLI to see details about the service running in the swarm. >**Tip**: If `helloworld` is running on a node other than your manager node, you must ssh to that node. - ```bash + ```console [worker2]$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES diff --git a/engine/swarm/swarm-tutorial/rolling-update.md b/engine/swarm/swarm-tutorial/rolling-update.md index 1e9170772a..b3c28c2f46 100644 --- a/engine/swarm/swarm-tutorial/rolling-update.md +++ b/engine/swarm/swarm-tutorial/rolling-update.md @@ -17,7 +17,7 @@ Redis 3.0.7 container image using rolling updates. 2. Deploy your Redis tag to the swarm and configure the swarm with a 10 second update delay. Note that the following example shows an older Redis tag: - ```bash + ```console $ docker service create \ --replicas 3 \ --name redis \ @@ -47,7 +47,7 @@ Redis 3.0.7 container image using rolling updates. 3. Inspect the `redis` service: - ```bash + ```console $ docker service inspect --pretty redis ID: 0u6a4s31ybk7yw2wyvtikmu50 @@ -68,7 +68,7 @@ Redis 3.0.7 container image using rolling updates. 4. Now you can update the container image for `redis`. The swarm manager applies the update to nodes according to the `UpdateConfig` policy: - ```bash + ```console $ docker service update --image redis:3.0.7 redis redis ``` @@ -86,7 +86,7 @@ Redis 3.0.7 container image using rolling updates. 5. 
Run `docker service inspect --pretty redis` to see the new image in the desired state: - ```bash + ```console $ docker service inspect --pretty redis ID: 0u6a4s31ybk7yw2wyvtikmu50 @@ -106,7 +106,7 @@ Redis 3.0.7 container image using rolling updates. The output of `service inspect` shows if your update paused due to failure: - ```bash + ```console $ docker service inspect --pretty redis ID: 0u6a4s31ybk7yw2wyvtikmu50 @@ -121,8 +121,8 @@ Redis 3.0.7 container image using rolling updates. To restart a paused update run `docker service update `. For example: - ```bash - docker service update redis + ```console + $ docker service update redis ``` To avoid repeating certain update failures, you may need to reconfigure the @@ -130,7 +130,7 @@ Redis 3.0.7 container image using rolling updates. 6. Run `docker service ps ` to watch the rolling update: - ```bash + ```console $ docker service ps redis NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR diff --git a/engine/swarm/swarm-tutorial/scale-service.md b/engine/swarm/swarm-tutorial/scale-service.md index be390a41da..82dc9f9271 100644 --- a/engine/swarm/swarm-tutorial/scale-service.md +++ b/engine/swarm/swarm-tutorial/scale-service.md @@ -16,13 +16,13 @@ the service. Containers running in a service are called "tasks." 2. Run the following command to change the desired state of the service running in the swarm: - ```bash + ```console $ docker service scale = ``` For example: - ```bash + ```console $ docker service scale helloworld=5 helloworld scaled to 5 @@ -30,7 +30,7 @@ the service. Containers running in a service are called "tasks." 3. Run `docker service ps ` to see the updated task list: - ```bash + ```console $ docker service ps helloworld NAME IMAGE NODE DESIRED STATE CURRENT STATE @@ -48,7 +48,7 @@ the service. Containers running in a service are called "tasks." 4. Run `docker ps` to see the containers running on the node where you're connected. 
The following example shows the tasks running on `manager1`: - ```bash + ```console $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES diff --git a/engine/swarm/swarm_manager_locking.md b/engine/swarm/swarm_manager_locking.md index 213509d2fd..5b9c7467a9 100644 --- a/engine/swarm/swarm_manager_locking.md +++ b/engine/swarm/swarm_manager_locking.md @@ -29,7 +29,7 @@ rotate this key encryption key at any time. When you initialize a new swarm, you can use the `--autolock` flag to enable autolocking of swarm manager nodes when Docker restarts. -```bash +```console $ docker swarm init --autolock Swarm initialized: current node (k1q27tfyx9rncpixhk69sa61v) is now a manager. @@ -54,7 +54,7 @@ When Docker restarts, you need to [unlock the swarm](#unlock-a-swarm). A locked swarm causes an error like the following when you try to start or restart a service: -```bash +```console $ sudo service docker restart $ docker service ls @@ -66,7 +66,7 @@ Error response from daemon: Swarm is encrypted and needs to be unlocked before i To enable autolock on an existing swarm, set the `autolock` flag to `true`. -```bash +```console $ docker swarm update --autolock=true Swarm updated. @@ -85,7 +85,7 @@ disk. There is a trade-off between the risk of storing the encryption key unencrypted at rest and the convenience of restarting a swarm without needing to unlock each manager. -```bash +```console $ docker swarm update --autolock=false ``` @@ -96,7 +96,7 @@ a manager goes down while it is still configured to lock using the old key. To unlock a locked swarm, use `docker swarm unlock`. -```bash +```console $ docker swarm unlock Please enter unlock key: @@ -116,7 +116,7 @@ If the key has not been rotated since the node left the swarm, and you have a quorum of functional manager nodes in the swarm, you can view the current unlock key using `docker swarm unlock-key` without any arguments. 
-```bash +```console $ docker swarm unlock-key To unlock a swarm manager after it restarts, run the `docker swarm unlock` @@ -136,7 +136,7 @@ the swarm and join it back to the swarm as a new manager. You should rotate the locked swarm's unlock key on a regular schedule. -```bash +```console $ docker swarm unlock-key --rotate Successfully rotated manager unlock key. diff --git a/engine/tutorials/networkingcontainers.md b/engine/tutorials/networkingcontainers.md index 85cfd3d7b9..d6660e5a3f 100644 --- a/engine/tutorials/networkingcontainers.md +++ b/engine/tutorials/networkingcontainers.md @@ -37,7 +37,7 @@ The network named `bridge` is a special network. Unless you tell it otherwise, D Inspecting the network is an easy way to find out the container's IP address. -```bash +```console $ docker network inspect bridge [ diff --git a/get-started/02_our_app.md b/get-started/02_our_app.md index 5e4551139e..6c32cfb8cd 100644 --- a/get-started/02_our_app.md +++ b/get-started/02_our_app.md @@ -56,8 +56,8 @@ see a few flaws in the Dockerfile below. But, don't worry. We'll go over them. 2. If you haven't already done so, open a terminal and go to the `app` directory with the `Dockerfile`. Now build the container image using the `docker build` command. - ```bash - docker build -t getting-started . + ```console + $ docker build -t getting-started . ``` This command used the Dockerfile to build a new container image. You might @@ -83,8 +83,8 @@ command (remember that from earlier?). 1. Start your container using the `docker run` command and specify the name of the image we just created: - ```bash - docker run -dp 3000:3000 getting-started + ```console + $ docker run -dp 3000:3000 getting-started ``` Remember the `-d` and `-p` flags? 
We're running the new container in "detached" mode (in the background) and mapping the host's port 3000 to the container's port 3000.

diff --git a/get-started/03_updating_app.md b/get-started/03_updating_app.md
index dbddc87c0a..e9c7303b90 100644
--- a/get-started/03_updating_app.md
+++ b/get-started/03_updating_app.md
@@ -24,19 +24,19 @@ Pretty simple, right? Let's make the change.
 2. Let's build our updated version of the image, using the same command we used before.

-    ```bash
-    docker build -t getting-started .
+    ```console
+    $ docker build -t getting-started .
     ```

 3. Let's start a new container using the updated code.

-    ```bash
-    docker run -dp 3000:3000 getting-started
+    ```console
+    $ docker run -dp 3000:3000 getting-started
     ```

**Uh oh!** You probably saw an error like this (the IDs will be different):

-```bash
+```console
docker: Error response from daemon: driver failed programming external connectivity on endpoint laughing_burnell
(bb242b2ca4d67eba76e79474fb36bb5125708ebdabd7f45c8eaf16caaabde9dd): Bind for 0.0.0.0:3000 failed: port is already allocated.
```

@@ -55,21 +55,21 @@ ways that we can remove the old container. Feel free to choose the path that you
 1. Get the ID of the container by using the `docker ps` command.

-    ```bash
-    docker ps
+    ```console
+    $ docker ps
     ```

 2. Use the `docker stop` command to stop the container.

-    ```bash
+    ```console
     # Swap out <the-container-id> with the ID from docker ps
-    docker stop <the-container-id>
+    $ docker stop <the-container-id>
     ```

 3. Once the container has stopped, you can remove it by using the `docker rm` command.

-    ```bash
-    docker rm <the-container-id>
+    ```console
+    $ docker rm <the-container-id>
     ```

>**Note**
@@ -96,8 +96,8 @@ much easier than having to look up the container ID and remove it.
 1. Now, start your updated app.

-    ```bash
-    docker run -dp 3000:3000 getting-started
+    ```console
+    $ docker run -dp 3000:3000 getting-started
     ```

 2. Refresh your browser on [http://localhost:3000](http://localhost:3000) and you should see your updated help text!
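The note above alludes to a shortcut worth spelling out: `docker rm` accepts a force flag that stops and removes a container in one step. A minimal sketch (swap in a real container ID from `docker ps`):

```console
$ docker rm -f <the-container-id>
```

This is equivalent to running `docker stop` followed by `docker rm`, and saves looking up the container ID twice.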
diff --git a/get-started/04_sharing_app.md b/get-started/04_sharing_app.md index 6ef33f5a05..1b35797539 100644 --- a/get-started/04_sharing_app.md +++ b/get-started/04_sharing_app.md @@ -58,16 +58,16 @@ an example command that you will need to run to push to this repo. 3. Use the `docker tag` command to give the `getting-started` image a new name. Be sure to swap out `YOUR-USER-NAME` with your Docker ID. - ```bash - docker tag getting-started YOUR-USER-NAME/getting-started + ```console + $ docker tag getting-started YOUR-USER-NAME/getting-started ``` 4. Now try your push command again. If you're copying the value from Docker Hub, you can drop the `tagname` portion, as we didn't add a tag to the image name. If you don't specify a tag, Docker will use a tag called `latest`. - ```bash - docker push YOUR-USER-NAME/getting-started + ```console + $ docker push YOUR-USER-NAME/getting-started ``` ## Run the image on a new instance @@ -87,8 +87,8 @@ new instance that has never seen this container image! To do this, we will use P 5. In the terminal, start your freshly pushed app. - ```bash - docker run -dp 3000:3000 YOUR-USER-NAME/getting-started + ```console + $ docker run -dp 3000:3000 YOUR-USER-NAME/getting-started ``` You should see the image get pulled down and eventually start up! diff --git a/get-started/05_persisting_data.md b/get-started/05_persisting_data.md index e0dfa0a432..21bd517edc 100644 --- a/get-started/05_persisting_data.md +++ b/get-started/05_persisting_data.md @@ -21,8 +21,8 @@ What you'll see is that the files created in one container aren't available in a 1. Start an `ubuntu` container that will create a file named `/data.txt` with a random number between 1 and 10000. 
-    ```bash
-    docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"
+    ```console
+    $ docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"
     ```

    In case you're curious about the command, we're starting a bash shell and invoking two
@@ -35,15 +35,15 @@ What you'll see is that the files created in one container aren't available in a
    You will see a terminal that is running a shell in the ubuntu container. Run the following command to see the content of
    the `/data.txt` file. Close this terminal again afterwards.

-    ```bash
-    cat /data.txt
+    ```console
+    $ cat /data.txt
     ```

    If you prefer the command line you can use the `docker exec` command to do the same. You need to get the
    container's ID (use `docker ps` to get it) and get the content with the following command.

-    ```bash
-    docker exec <container-id> cat /data.txt
+    ```console
+    $ docker exec <container-id> cat /data.txt
     ```

    You should see a random number!

@@ -51,8 +51,8 @@ What you'll see is that the files created in one container aren't available in a
 3. Now, let's start another `ubuntu` container (the same image) and we'll see we don't have the same file.

-    ```bash
-    docker run -it ubuntu ls /
+    ```console
+    $ docker run -it ubuntu ls /
     ```

    And look! There's no `data.txt` file there! That's because it was written to the scratch space for
@@ -91,8 +91,8 @@ Every time you use the volume, Docker will make sure the correct data is provide
 1. Create a volume by using the `docker volume create` command.

-    ```bash
-    docker volume create todo-db
+    ```console
+    $ docker volume create todo-db
     ```

 2. Stop and remove the todo app container once again in the Dashboard (or with `docker rm -f <id>`), as it is still running without using the persistent volume.

 3. Start the todo app container, but add the `-v` flag to specify a volume mount.
We will use the named volume and mount it to `/etc/todos`, which will capture all files created at the path.

-    ```bash
-    docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started
+    ```console
+    $ docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started
     ```

 4. Once the container starts up, open the app and add a few items to your todo list.
@@ -132,8 +132,8 @@ Hooray! You've now learned how to persist data!

People frequently ask "Where is Docker _actually_ storing my data when I use a named volume?" If you want to know,
you can use the `docker volume inspect` command.

-```bash
-docker volume inspect todo-db
+```console
+$ docker volume inspect todo-db
[
    {
        "CreatedAt": "2019-09-26T02:18:36Z",
diff --git a/get-started/06_bind_mounts.md b/get-started/06_bind_mounts.md
index b6068c08d7..79ea87ddcd 100644
--- a/get-started/06_bind_mounts.md
+++ b/get-started/06_bind_mounts.md
@@ -42,8 +42,8 @@ So, let's do it!
 2. Run the following command. We'll explain what's going on afterwards:

-    ```bash
-    docker run -dp 3000:3000 \
+    ```console
+    $ docker run -dp 3000:3000 \
        -w /app -v "$(pwd):/app" \
        node:12-alpine \
        sh -c "yarn install && yarn run dev"
@@ -68,9 +68,9 @@ So, let's do it!
 3. You can watch the logs using `docker logs -f <container-id>`. You'll know you're ready to go when you see this:

-    ```bash
-    docker logs -f <container-id>
-    $ nodemon src/index.js
+    ```console
+    $ docker logs -f <container-id>
+    nodemon src/index.js
    [nodemon] 1.19.2
    [nodemon] to restart at any time, enter `rs`
    [nodemon] watching dir(s): *.*
diff --git a/get-started/07_multi_container.md b/get-started/07_multi_container.md
index 0814686c3b..52e29a74d3 100644
--- a/get-started/07_multi_container.md
+++ b/get-started/07_multi_container.md
@@ -38,15 +38,15 @@ For now, we will create the network first and attach the MySQL container at star
 1. Create the network.

-    ```bash
-    docker network create todo-app
+    ```console
+    $ docker network create todo-app
     ```

 2. Start a MySQL container and attach it to the network.
We're also going to define a few environment variables that the database will use to initialize itself (see the "Environment Variables" section in the [MySQL Docker Hub listing](https://hub.docker.com/_/mysql/)).

-    ```bash
-    docker run -d \
+    ```console
+    $ docker run -d \
        --network todo-app --network-alias mysql \
        -v todo-mysql-data:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=secret \
@@ -73,8 +73,8 @@ For now, we will create the network first and attach the MySQL container at star
 3. To confirm we have the database up and running, connect to it and verify the connection succeeds.

-    ```bash
-    docker exec -it <mysql-container-id> mysql -u root -p
+    ```console
+    $ docker exec -it <mysql-container-id> mysql -u root -p
     ```

    When the password prompt comes up, type in **secret**. In the MySQL shell, list the databases and verify
@@ -112,15 +112,15 @@ which ships with a _lot_ of tools that are useful for troubleshooting or debuggi
 1. Start a new container using the nicolaka/netshoot image. Make sure to connect it to the same network.

-    ```bash
-    docker run -it --network todo-app nicolaka/netshoot
+    ```console
+    $ docker run -it --network todo-app nicolaka/netshoot
     ```

 2. Inside the container, we're going to use the `dig` command, which is a useful DNS tool. We're going to look up
    the IP address for the hostname `mysql`.

-    ```bash
-    dig mysql
+    ```console
+    $ dig mysql
     ```

    And you'll get an output like this...
@@ -180,8 +180,8 @@ With all of that explained, let's start our dev-ready container!
 1. We'll specify each of the environment variables above, as well as connect the container to our app network.

-    ```bash
-    docker run -dp 3000:3000 \
+    ```console
+    $ docker run -dp 3000:3000 \
        -w /app -v "$(pwd):/app" \
        --network todo-app \
        -e MYSQL_HOST=mysql \
@@ -225,8 +225,8 @@ With all of that explained, let's start our dev-ready container!
 4. Connect to the mysql database and prove that the items are being written to the database. Remember, the password is **secret**.
-    ```bash
-    docker exec -it <mysql-container-id> mysql -p todos
+    ```console
+    $ docker exec -it <mysql-container-id> mysql -p todos
     ```

    And in the mysql shell, run the following:
diff --git a/get-started/08_using_compose.md b/get-started/08_using_compose.md
index 6f528bd638..7e55a1670a 100644
--- a/get-started/08_using_compose.md
+++ b/get-started/08_using_compose.md
@@ -23,8 +23,8 @@ a Linux machine, you will need to [install Docker Compose](../compose/install.md

After installation, you should be able to run the following and see version information.

-```bash
-docker-compose version
+```console
+$ docker-compose version
```

## Create the Compose file

@@ -53,8 +53,8 @@ And now, we'll start migrating a service at a time into the compose file.

As a reminder, this was the command we were using to define our app container.

-```bash
-docker run -dp 3000:3000 \
+```console
+$ docker run -dp 3000:3000 \
    -w /app -v "$(pwd):/app" \
    --network todo-app \
    -e MYSQL_HOST=mysql \
@@ -161,8 +161,8 @@ docker run -dp 3000:3000 `
Now, it's time to define the MySQL service. The command that we used for that container was the following:

-```bash
-docker run -d \
+```console
+$ docker run -d \
    --network todo-app --network-alias mysql \
    -v todo-mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
@@ -276,8 +276,8 @@ Now that we have our `docker-compose.yml` file, we can start it up!
 2. Start up the application stack using the `docker-compose up` command. We'll add the `-d` flag to run everything in the background.
- ```bash - docker-compose up -d + ```console + $ docker-compose up -d ``` When we run this, we should see output like this: diff --git a/get-started/09_image_best.md b/get-started/09_image_best.md index bbb85b7002..22494a3c96 100644 --- a/get-started/09_image_best.md +++ b/get-started/09_image_best.md @@ -11,8 +11,8 @@ Docker has partnered with [Snyk](http://snyk.io){:target="_blank" rel="noopener" For example, to scan the `getting-started` image you created earlier in the tutorial, you can just type -```bash -docker scan getting-started +```console +$ docker scan getting-started ``` The scan uses a constantly updated database of vulnerabilities, so the output you see will vary as new @@ -56,8 +56,8 @@ command, you can see the command that was used to create each layer within an im 1. Use the `docker image history` command to see the layers in the `getting-started` image you created earlier in the tutorial. - ```bash - docker image history getting-started + ```console + $ docker image history getting-started ``` You should get output that looks something like this (dates/IDs may be different). @@ -86,8 +86,8 @@ command, you can see the command that was used to create each layer within an im 2. You'll notice that several of the lines are truncated. If you add the `--no-trunc` flag, you'll get the full output (yes... funny how you use a truncated flag to get untruncated output, huh?) - ```bash - docker image history --no-trunc getting-started + ```console + $ docker image history --no-trunc getting-started ``` ## Layer caching @@ -146,8 +146,8 @@ a change to the `package.json`. Make sense? 3. Build a new image using `docker build`. - ```bash - docker build -t getting-started . + ```console + $ docker build -t getting-started . ``` You should see output like this... 
diff --git a/get-started/kube-deploy.md b/get-started/kube-deploy.md
index d39a58310b..ed61088537 100644
--- a/get-started/kube-deploy.md
+++ b/get-started/kube-deploy.md
@@ -75,8 +75,8 @@ All containers in Kubernetes are scheduled as _pods_, which are groups of co-loc
 1. In a terminal, navigate to where you created `bb.yaml` and deploy your application to Kubernetes:

-    ```shell
-    kubectl apply -f bb.yaml
+    ```console
+    $ kubectl apply -f bb.yaml
     ```

    You should see output that looks like the following, indicating your Kubernetes objects were created successfully:
@@ -88,8 +88,8 @@ All containers in Kubernetes are scheduled as _pods_, which are groups of co-loc
 2. Make sure everything worked by listing your deployments:

-    ```shell
-    kubectl get deployments
+    ```console
+    $ kubectl get deployments
     ```

    If all is well, your deployment should be listed as follows:
@@ -101,8 +101,8 @@ All containers in Kubernetes are scheduled as _pods_, which are groups of co-loc
    This indicates that the one pod you asked for in your YAML is up and running. Do the same check for your services:

-    ```shell
-    kubectl get services
+    ```console
+    $ kubectl get services

    NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    bb-entrypoint   NodePort   10.106.145.116   <none>        8080:30001/TCP   53s
@@ -115,8 +115,8 @@ All containers in Kubernetes are scheduled as _pods_, which are groups of co-loc
 4. Once satisfied, tear down your application:

-    ```shell
-    kubectl delete -f bb.yaml
+    ```console
+    $ kubectl delete -f bb.yaml
     ```

## Conclusion
diff --git a/get-started/orchestration.md b/get-started/orchestration.md
index 41b6cff7a9..9f5089c928 100644
--- a/get-started/orchestration.md
+++ b/get-started/orchestration.md
@@ -51,14 +51,14 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set
 4. In a terminal, navigate to where you created `pod.yaml` and create your pod:

-    ```shell
-    kubectl apply -f pod.yaml
+    ```console
+    $ kubectl apply -f pod.yaml
     ```

 5.
Check that your pod is up and running: - ```shell - kubectl get pods + ```console + $ kubectl get pods ``` You should see something like: @@ -70,8 +70,8 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set 6. Check that you get the logs you'd expect for a ping process: - ```shell - kubectl logs demo + ```console + $ kubectl logs demo ``` You should see the output of a healthy ping process: @@ -86,8 +86,8 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set 7. Finally, tear down your test pod: - ```shell - kubectl delete -f pod.yaml + ```console + $ kubectl delete -f pod.yaml ``` {% endcapture %} @@ -121,14 +121,14 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set 4. In PowerShell, navigate to where you created `pod.yaml` and create your pod: - ```shell - kubectl apply -f pod.yaml + ```console + $ kubectl apply -f pod.yaml ``` 5. Check that your pod is up and running: - ```shell - kubectl get pods + ```console + $ kubectl get pods ``` You should see something like: @@ -140,8 +140,8 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set 6. Check that you get the logs you'd expect for a ping process: - ```shell - kubectl logs demo + ```console + $ kubectl logs demo ``` You should see the output of a healthy ping process: @@ -156,8 +156,8 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set 7. Finally, tear down your test pod: - ```shell - kubectl delete -f pod.yaml + ```console + $ kubectl delete -f pod.yaml ``` {% endcapture %} @@ -182,8 +182,8 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 1. 
Open a terminal, and initialize Docker Swarm mode: - ```shell - docker swarm init + ```console + $ docker swarm init ``` If all goes well, you should see a message similar to the following: @@ -200,14 +200,14 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8: - ```shell - docker service create --name demo alpine:3.5 ping 8.8.8.8 + ```console + $ docker service create --name demo alpine:3.5 ping 8.8.8.8 ``` 3. Check that your service created one running container: - ```shell - docker service ps demo + ```console + $ docker service ps demo ``` You should see something like: @@ -219,8 +219,8 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 4. Check that you get the logs you'd expect for a ping process: - ```shell - docker service logs demo + ```console + $ docker service logs demo ``` You should see the output of a healthy ping process: @@ -235,8 +235,8 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 5. Finally, tear down your test service: - ```shell - docker service rm demo + ```console + $ docker service rm demo ``` {% endcapture %} @@ -250,8 +250,8 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 1. Open a powershell, and initialize Docker Swarm mode: - ```shell - docker swarm init + ```console + $ docker swarm init ``` If all goes well, you should see a message similar to the following: @@ -268,14 +268,14 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8: - ```shell - docker service create --name demo alpine:3.5 ping 8.8.8.8 + ```console + $ docker service create --name demo alpine:3.5 ping 8.8.8.8 ``` 3. 
Check that your service created one running container: - ```shell - docker service ps demo + ```console + $ docker service ps demo ``` You should see something like: @@ -287,8 +287,8 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 4. Check that you get the logs you'd expect for a ping process: - ```shell - docker service logs demo + ```console + $ docker service logs demo ``` You should see the output of a healthy ping process: @@ -303,8 +303,8 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to 5. Finally, tear down your test service: - ```shell - docker service rm demo + ```console + $ docker service rm demo ``` {% endcapture %} diff --git a/get-started/overview.md b/get-started/overview.md index 76764c898d..0880c104ee 100644 --- a/get-started/overview.md +++ b/get-started/overview.md @@ -152,7 +152,7 @@ its state that are not stored in persistent storage disappear. The following command runs an `ubuntu` container, attaches interactively to your local command-line session, and runs `/bin/bash`. -```bash +```console $ docker run -i -t ubuntu /bin/bash ``` diff --git a/get-started/swarm-deploy.md b/get-started/swarm-deploy.md index e580feb179..e4d5f48cca 100644 --- a/get-started/swarm-deploy.md +++ b/get-started/swarm-deploy.md @@ -44,8 +44,8 @@ In this Swarm YAML file, we have just one object: a `service`, describing a scal 1. Deploy your application to Swarm: - ```shell - docker stack deploy -c bb-stack.yaml demo + ```console + $ docker stack deploy -c bb-stack.yaml demo ``` If all goes well, Swarm will report creating all your stack objects with no complaints: @@ -59,8 +59,8 @@ In this Swarm YAML file, we have just one object: a `service`, describing a scal 2. 
Make sure everything worked by listing your service: - ```shell - docker service ls + ```console + $ docker service ls ``` If all has gone well, your service will report with 1/1 of its replicas created: @@ -76,8 +76,8 @@ In this Swarm YAML file, we have just one object: a `service`, describing a scal 4. Once satisfied, tear down your application: - ```shell - docker stack rm demo + ```console + $ docker stack rm demo ``` ## Conclusion diff --git a/language/golang/build-images.md b/language/golang/build-images.md index 98df83fdf8..39a8cdea0e 100644 --- a/language/golang/build-images.md +++ b/language/golang/build-images.md @@ -37,7 +37,7 @@ The source code for the application is in the [olliefr/docker-gs-ping](https://g For our present study, we clone it to our local machine: -```shell +```console $ git clone https://github.com/olliefr/docker-gs-ping ``` @@ -84,7 +84,7 @@ func main() { Let’s start our application and make sure it’s running properly. Open your terminal and navigate to the directory into which you cloned the project's repo. From now on, we'll refer to this directory as the **working directory**. -```shell +```console $ go run main.go ``` @@ -104,7 +104,7 @@ ____________________________________O/_______ Let's run a quick _smoke test_ on the application. **In a new terminal**, run a request using `curl`. Alternatively, you can use your favourite web browser as well. -```shell +```console $ curl http://localhost:8080/ Hello, Docker! <3 ``` @@ -234,7 +234,7 @@ The build command optionally takes a `--tag` flag. This flag is used to label th Let's build our first Docker image! -```shell +```console $ docker build --tag docker-gs-ping . ``` @@ -271,7 +271,7 @@ To see the list of images we have on our local machine, we have two options. One To list images, simply run the `images` command: -```shell +```console $ docker images ``` @@ -291,7 +291,7 @@ An image is made up of a manifest and a list of layers. 
In simple terms, a “ta To create a new tag for our image, run the following command. -```shell +```console $ docker tag docker-gs-ping:latest docker-gs-ping:v1.0 ``` @@ -299,7 +299,7 @@ The Docker `tag` command creates a new tag for the image. It does not create a n Now run the `docker images` command to see the updated list of local images: -```shell +```console $ docker images ``` @@ -314,14 +314,14 @@ You can see that we have two images that start with `docker-gs-ping`. We know th Let’s remove the tag that we had just created. To do this, we’ll use the `rmi` command, which stands for "remove image": -```shell +```console $ docker rmi docker-gs-ping:v1.0 Untagged: docker-gs-ping:v1.0 ``` Notice that the response from Docker tells us that the image has not been removed but only "untagged". Verify this by running the images command: -```shell +```console $ docker images ``` @@ -379,8 +379,8 @@ ENTRYPOINT ["/docker-gs-ping"] Since we have two dockerfiles now, we have to tell Docker that we want to build using our new Dockerfile. We also tag the new image with `multistage` but this word has no special meaning, we only do so that we could compare this new image to the one we've built previously, that is the one we tagged with `latest`: -```shell -docker build -t docker-gs-ping:multistage -f Dockerfile.multistage . +```console +$ docker build -t docker-gs-ping:multistage -f Dockerfile.multistage . ``` Comparing the sizes of `docker-gs-ping:multistage` and `docker-gs-ping:latest` we see an order-of-magnitude difference! diff --git a/language/golang/configure-ci-cd.md b/language/golang/configure-ci-cd.md index 247a8d8551..0f20aa1da6 100644 --- a/language/golang/configure-ci-cd.md +++ b/language/golang/configure-ci-cd.md @@ -211,7 +211,7 @@ This workflow is similar to the CI workflow, with the following changes: Let's save this workflow and check the _Actions_ page for the repository on GitHub. 
Unlike the CI workflow, this new workflow cannot be triggered manually - this is how we set it up. So, in order to test it, we have to tag some commit. Let's tag the `HEAD` of the `main` branch: -```shell +```console $ git tag -a 0.0.1 -m "Test release workflow" $ git push --tags @@ -235,7 +235,7 @@ That's easy to fix. We follow our own guide and add the secrets to the repositor To remove the tag on the `remote`: -```shell +```console $ git push origin :refs/tags/0.0.1 To https://github.com/olliefr/docker-gs-ping.git - [deleted] 0.0.1 @@ -243,7 +243,7 @@ To https://github.com/olliefr/docker-gs-ping.git And to re-apply it locally and push: -```shell +```console $ git tag -fa 0.0.1 -m "Test release workflow" Updated tag '0.0.1' (was d7d3edc) $ git push origin --tags @@ -261,7 +261,7 @@ And the workflow is triggered again. This time it completes without issues: Since the image we've just pushed to Docker Hub is public, now it can be pulled by anyone, from anywhere: -```shell +```console $ docker pull olliefr/docker-gs-ping Using default tag: latest latest: Pulling from olliefr/docker-gs-ping diff --git a/language/golang/develop.md b/language/golang/develop.md index 7f8ab890c3..75a317354b 100644 --- a/language/golang/develop.md +++ b/language/golang/develop.md @@ -32,14 +32,14 @@ The point of a database is to have a persistent store of data. 
[Volumes](../../s

To create a managed volume, run:

-```shell
+```console
$ docker volume create roach
roach
```

We can view the list of all managed volumes in our Docker instance with the following command:

-```shell
+```console
$ docker volume list
DRIVER    VOLUME NAME
local     roach
@@ -51,14 +51,14 @@ The example application and the database engine are going to talk to one another

The following command creates a new bridge network named `mynet`:

-```shell
+```console
$ docker network create -d bridge mynet
51344edd6430b5acd121822cacc99f8bc39be63dd125a3b3cd517b6485ab7709
```

As was the case with the managed volumes, there is a command to list all networks set up in your Docker instance:

-```shell
+```console
$ docker network list
NETWORK ID     NAME     DRIVER    SCOPE
0ac2b1819fa4   bridge   bridge    local
@@ -79,7 +79,7 @@ When choosing a name for a network or a managed volume, it's best to choose a na

Now that the housekeeping chores are done, we can run CockroachDB in a container and attach it to the volume and network we just created. When you run the following command, Docker will pull the image from Docker Hub and run it for you locally:

-```shell
+```console
$ docker run -d \
  --name roach \
  --hostname db \
@@ -105,7 +105,7 @@ Now that the database engine is live, there is some configuration to do before o

We can do that with the help of CockroachDB's built-in SQL shell. To start the SQL shell in the _same_ container where the database engine is running, type:

-```shell
+```console
$ docker exec -it roach ./cockroach sql --insecure
```

@@ -175,7 +175,7 @@ The example application for this module is an extended version of `docker-gs-pin

To check out the example application, run:

-```shell
+```console
$ git clone https://github.com/olliefr/docker-gs-ping-roach.git
# ... output omitted ...
``` @@ -347,7 +347,7 @@ Regardless of whether we had updated the old example application, or checked out We can build the image with the familiar `build` command: -```shell +```console $ docker build --tag docker-gs-ping-roach . ``` @@ -361,7 +361,7 @@ Now, let’s run our container. This time we’ll need to set some environment v > > **Don't run in insecure mode in production, though!** -```shell +```console $ docker run -it --rm -d \ --network mynet \ --name rest-server \ @@ -378,14 +378,14 @@ There are a few points to note about this command. * We map container port `8080` to host port `80` this time. Thus, for `GET` requests we can get away with _literally_ `curl localhost`: - ```shell + ```console $ curl localhost Hello, Docker! (0) ``` Or, if you prefer, a proper URL would work just as well: - ```shell + ```console $ curl http://localhost/ Hello, Docker! (0) ``` @@ -396,7 +396,7 @@ There are a few points to note about this command. * The actual password does not matter, but it must be set to something to avoid confusing the example application. * The container we've just run is named `rest-server`. These names are useful for managing the container lifecycle: - ```shell + ```console # Don't do this just yet, it's only an example: $ docker container rm --force rest-server ``` @@ -405,7 +405,7 @@ There are a few points to note about this command. In the previous section, we have already tested querying our application with `GET` and it returned zero for the stored message counter. 
Now, let's post some messages to it: -```shell +```console $ curl --request POST \ --url http://localhost/send \ --header 'content-type: application/json' \ @@ -420,7 +420,7 @@ The application responds with the contents of the message, which means it had be Let's send another message: -```shell +```console $ curl --request POST \ --url http://localhost/send \ --header 'content-type: application/json' \ @@ -429,13 +429,13 @@ $ curl --request POST \ And again, we get the value of the message back: -```shell +```json {"value":"Hello, Oliver!"} ``` Let's see what the message counter says: -```shell +```console $ curl localhost Hello, Docker! (2) ``` @@ -444,7 +444,7 @@ Hey, that's exactly right! We sent two messages and the database kept them. Or h First, let's stop the containers: -```shell +```console $ docker container stop rest-server roach rest-server roach @@ -452,7 +452,7 @@ roach Then, let's remove them: -```shell +```console $ docker container rm rest-server roach rest-server roach @@ -460,15 +460,15 @@ roach Verify that they are gone: -```shell +```console $ docker container list --all CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` And start them again, database first: -```shell -docker run -d \ +```console +$ docker run -d \ --name roach \ --hostname db \ --network mynet \ @@ -481,8 +481,8 @@ docker run -d \ And the service next: -```shell -docker run -it --rm -d \ +```console +$ docker run -it --rm -d \ --network mynet \ --name rest-server \ -p 80:8080 \ @@ -496,7 +496,7 @@ docker run -it --rm -d \ Lastly, let's query our service: -```shell +```console $ curl localhost Hello, Docker! (2) ``` @@ -509,7 +509,7 @@ Such is the power of managed volumes. Use it wisely. Remember, that we are running CockroachDB in "insecure" mode. Now that we had built and tested our application, it's time to wind everything down before moving on. 
You can list the containers that you are running with the `list` command:

-```shell
+```console
$ docker container list
```

@@ -610,7 +610,7 @@ Other ways of dealing with undefined or empty values exist, as documented in the

Before you apply changes made to a Compose configuration file, there is an opportunity to validate the content of the configuration file with the following command:

-```shell
+```console
$ docker-compose config
```

@@ -620,7 +620,7 @@ When this command is run, Docker Compose would read the file `docker-compose.yml

Let’s start our application and confirm that it is running properly.

-```shell
+```console
$ docker-compose up --build
```

@@ -669,7 +669,7 @@ This is not a big deal. All we have to do is to connect to CockroachDB instance

So we log in to the database engine from another terminal:

-```shell
+```console
$ docker exec -it roach ./cockroach sql --insecure
```

@@ -681,7 +681,7 @@ It would have been possible to connect the volume that we had previously used, b

Now let’s test our API endpoint. In the new terminal, run the following command:

-```shell
+```console
$ curl http://localhost/
```

@@ -701,7 +701,7 @@ You can run containers started by the `docker-compose` command in detached mode,

To start the stack defined by the Compose file in detached mode, run:

-```shell
+```console
$ docker-compose up --build -d
```

diff --git a/language/golang/run-containers.md b/language/golang/run-containers.md
index e4e04844b7..aa65e90d24 100644
--- a/language/golang/run-containers.md
+++ b/language/golang/run-containers.md
@@ -20,7 +20,7 @@ A container is a normal operating system process except that this process is iso

To run an image inside of a container, we use the `docker run` command. It requires one parameter and that is the image name. Let’s start our image and make sure it is running correctly. Execute the following command in your terminal.
-```shell +```console $ docker run docker-gs-ping ``` @@ -40,7 +40,7 @@ When you run this command, you’ll notice that you were not returned to the com Let’s make a GET request to the server using the curl command. -```shell +```console $ curl http://localhost:8080/ curl: (7) Failed to connect to localhost port 8080: Connection refused ``` @@ -53,13 +53,13 @@ To publish a port for our container, we’ll use the `--publish` flag (`-p` for Start the container and expose port `8080` to port `8080` on the host. -```shell +```console $ docker run --publish 8080:8080 docker-gs-ping ``` Now let’s rerun the curl command from above. -```shell +```console $ curl http://localhost:8080/ Hello, Docker! <3 ``` @@ -72,7 +72,7 @@ Press **ctrl-c** to stop the container. This is great so far, but our sample application is a web server and we should not have to have our terminal connected to the container. Docker can run your container in detached mode, that is in the background. To do this, we can use the `--detach` or `-d` for short. Docker will start your container the same as before but this time will “detach” from the container and return you to the terminal prompt. -```shell +```console $ docker run -d -p 8080:8080 docker-gs-ping d75e61fcad1e0c0eca69a3f767be6ba28a66625ce4dc42201a8a323e8313c14e ``` @@ -81,7 +81,7 @@ Docker started our container in the background and printed the container ID on t Again, let’s make sure that our container is running properly. Run the same `curl` command: -```shell +```console $ curl http://localhost:8080/ Hello, Docker! <3 ``` @@ -90,7 +90,7 @@ Hello, Docker! <3 Since we ran our container in the background, how do we know if our container is running or what other containers are running on our machine? Well, we can run the `docker ps` command. Just like on Linux, to see a list of processes on your machine we would run the `ps` command. 
In the same spirit, we can run the `docker ps` command which will show us a list of containers running on our machine. -```shell +```console $ docker ps ``` @@ -103,14 +103,14 @@ The `ps` command tells a bunch of stuff about our running containers. We can see You are probably wondering where the name of our container is coming from. Since we didn’t provide a name for the container when we started it, Docker generated a random name. We’ll fix this in a minute but first we need to stop the container. To stop the container, run the `docker stop` command which does just that, stops the container. You will need to pass the name of the container or you can use the container ID. -```shell +```console $ docker stop inspiring_ishizaka inspiring_ishizaka ``` Now rerun the `docker ps` command to see a list of running containers. -```shell +```console $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` @@ -119,7 +119,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker containers can be started, stopped and restarted. When we stop a container, it is not removed but the status is changed to stopped and the process inside of the container is stopped. When we ran the `docker ps` command, the default output is to only show running containers. If we pass the `--all` or `-a` for short, we will see all containers on our system, that is stopped containers and running containers. -```shell +```console $ docker ps -a ``` @@ -135,13 +135,13 @@ If you’ve been following along, you should see several containers listed. Thes Let’s restart the container that we have just stopped. 
Locate the name of the container and replace the name of the container below in the restart command: -```shell +```console $ docker restart inspiring_ishizaka ``` Now, list all the containers again using the `ps` command: -```shell +```console $ docker ps --all ``` @@ -159,7 +159,7 @@ Let’s stop and remove all of our containers and take a look at fixing the rand Stop the container we just started. Find the name of your running container and replace the name in the command below with the name of the container on your system: -```shell +```console $ docker stop inspiring_ishizaka inspiring_ishizaka ``` @@ -170,7 +170,7 @@ To remove a container, run the `docker rm` command passing the container name. Y Again, make sure you replace the containers names in the below command with the container names from your system: -```shell +```console $ docker rm inspiring_ishizaka wizardly_joliot magical_carson gifted_mestorf ``` @@ -187,12 +187,12 @@ Now let’s address the pesky random name issue. Standard practice is to name yo To name a container, we must pass the `--name` flag to the `run` command: -```shell +```console $ docker run -d -p 8080:8080 --name rest-server docker-gs-ping 3bbc6a3102ea368c8b966e1878a5ea9b1fc61187afaac1276c41db22e4b7f48f ``` -```shell +```console $ docker ps ``` diff --git a/language/golang/run-tests.md b/language/golang/run-tests.md index 5fe3f4c913..244e3e6a79 100644 --- a/language/golang/run-tests.md +++ b/language/golang/run-tests.md @@ -63,11 +63,8 @@ The second test in `main_test.go` has almost identical structure but it tests _a In order to run the tests, we must make sure that our application Docker image is up-to-date. -```shell -docker build -t docker-gs-ping:latest . -``` - -``` +```console +$ docker build -t docker-gs-ping:latest . [+] Building 3.0s (13/13) FINISHED ... 
```
@@ -78,15 +75,15 @@ Note, that the image is tagged with `latest` which is the same label we've chose

Now that the Docker image for our application has been built, we can run the tests that depend on it:

-```shell
-go test ./...
+```console
+$ go test ./...

ok      github.com/olliefr/docker-gs-ping       2.564s
```

That was a bit... underwhelming? Let's ask it to print a bit more detail, just to be sure:

-```shell
-go test -v ./...
+```console
+$ go test -v ./...
```

```
diff --git a/language/java/configure-ci-cd.md b/language/java/configure-ci-cd.md
index 28b4fc217f..46874eabfc 100644
--- a/language/java/configure-ci-cd.md
+++ b/language/java/configure-ci-cd.md
@@ -188,9 +188,9 @@ on:

This ensures that the main CI will only trigger if we tag our commits with `V.n.n.n.` Let’s test this. For example, run the following command:

-```bash
-git tag -a v1.0.2
-git push origin v1.0.2
+```console
+$ git tag -a v1.0.2
+$ git push origin v1.0.2
```

Now, go to GitHub and check your Actions
diff --git a/language/java/run-tests.md b/language/java/run-tests.md
index a95c7ffa33..c22d9e20a1 100644
--- a/language/java/run-tests.md
+++ b/language/java/run-tests.md
@@ -18,7 +18,7 @@ Testing is an essential part of modern software development. Testing can mean a

The **Spring Pet Clinic** source code already has tests defined in the test directory `src/test/java/org/springframework/samples/petclinic`. You just need to update the JaCoCo version to `0.8.6` in your `pom.xml` to ensure your tests work with JDK v15 or higher, so we can use the following Docker command to start the container and run tests:

-```shell
+```console
$ docker run -it --rm --name springboot-test java-docker ./mvnw test
...
[INFO] Results:
@@ -68,7 +68,7 @@ We first add a label to the `FROM openjdk:16-alpine3.13` statement. This allows

Now let’s rebuild our image and run our tests.
We will run the `docker build` command as above, but this time we will add the `--target test` flag so that we specifically run the test build stage.

-```shell
+```console
$ docker build -t java-docker --target test .
[+] Building 0.7s (6/6) FINISHED
...
@@ -78,7 +78,7 @@ $ docker build -t java-docker --target test .

Now that our test image is built, we can run it as a container and see if our tests pass.

-```shell
+```console
$ docker run -it --rm --name springboot-test java-docker
[INFO] Scanning for projects...
[INFO]
@@ -133,7 +133,7 @@ CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/spring-petclin

Now, to run our tests, we just need to run the `docker build` command as above.

-```shell
+```console
$ docker build -t java-docker --target test .
[+] Building 27.6s (11/12)
 => CACHED [base 3/6] COPY .mvn/ .mvn
@@ -160,7 +160,7 @@ Open the `src/test/java/org/springframework/samples/petclinic/model/ValidatorTes

Now, run the `docker build` command from above and observe that the build fails and the failing test information is printed to the console.

-```shell
+```console
$ docker build -t java-docker --target test .
 => [base 6/6] COPY src ./src
 => ERROR [test 1/1] RUN ["./mvnw", "test"]
@@ -198,7 +198,7 @@ services:

Now, let's run the Compose application. You should see that the application behaves as before and you can still debug it.

-```shell
+```console
$ docker-compose -f docker-compose.dev.yml up --build
```
diff --git a/language/nodejs/build-images.md b/language/nodejs/build-images.md
index 9afa895ce6..02712ca526 100644
--- a/language/nodejs/build-images.md
+++ b/language/nodejs/build-images.md
@@ -28,7 +28,7 @@ To complete this tutorial, you need the following:

Let’s create a simple Node.js application that we can use as our example. Create a directory on your local machine named `node-docker` and follow the steps below to create a simple REST API.
-```shell
+```console
$ cd [path to your node-docker directory]
$ npm init -y
$ npm install ronin-server ronin-mocks
@@ -55,13 +55,13 @@ The mocking server is called `Ronin.js` and will listen on port 8000 by default.

Let’s start our application and make sure it’s running properly. Open your terminal and navigate to the working directory you created.

-```shell
+```console
$ node server.js
```

To test that the application is working properly, we’ll first POST some JSON to the API and then make a GET request to see that the data has been saved. Open a new terminal and run the following curl commands:

-```shell
+```console
$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
@@ -199,11 +199,9 @@ The build command optionally takes a `--tag` flag. The tag is used to set the na

Let’s build our first Docker image.

-```shell
+```console
$ docker build --tag node-docker .
-```
-```shell
[+] Building 93.8s (11/11) FINISHED
 => [internal] load build definition from dockerfile                       0.1s
 => => transferring dockerfile: 617B                                       0.0s
@@ -221,7 +219,7 @@ To see a list of images we have on our local machine, we have two options. One i

To list images, simply run the `images` command.

-```shell
+```console
$ docker images
REPOSITORY    TAG       IMAGE ID       CREATED              SIZE
node-docker   latest    3809733582bc   About a minute ago   945MB
@@ -237,7 +235,7 @@ An image is made up of a manifest and a list of layers. In simple terms, a “ta

To create a new tag for the image we built above, run the following command.

-```shell
+```console
$ docker tag node-docker:latest node-docker:v1.0.0
```

@@ -256,14 +254,14 @@ You can see that we have two images that start with `node-docker`. We know they

Let’s remove the tag that we just created. To do this, we’ll use the rmi command. The rmi command stands for “remove image”.
-```shell +```console $ docker rmi node-docker:v1.0.0 Untagged: node-docker:v1.0.0 ``` Notice that the response from Docker tells us that the image has not been removed but only “untagged”. Verify this by running the images command. -```shell +```console $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE node-docker latest 3809733582bc 32 minutes ago 945MB diff --git a/language/nodejs/configure-ci-cd.md b/language/nodejs/configure-ci-cd.md index 8da2eaef0f..9ce738c7cc 100644 --- a/language/nodejs/configure-ci-cd.md +++ b/language/nodejs/configure-ci-cd.md @@ -187,9 +187,9 @@ on: This ensures that the main CI will only trigger if we tag our commits with `V.n.n.n.` Let’s test this. For example, run the following command: -```bash -git tag -a v1.0.2 -git push origin v1.0.2 +```console +$ git tag -a v1.0.2 +$ git push origin v1.0.2 ``` Now, go to GitHub and check your Actions diff --git a/language/nodejs/develop.md b/language/nodejs/develop.md index a66c4b2fe3..22acf497a6 100644 --- a/language/nodejs/develop.md +++ b/language/nodejs/develop.md @@ -26,20 +26,20 @@ Before we run MongoDB in a container, we want to create a couple of volumes that Let’s create our volumes now. We’ll create one for the data and one for configuration of MongoDB. -```shell +```console $ docker volume create mongodb $ docker volume create mongodb_config ``` Now we’ll create a network that our application and database will use to talk with each other. The network is called a user-defined bridge network and gives us a nice DNS lookup service which we can use when creating our connection string. -```shell +```console $ docker network create mongodb ``` Now we can run MongoDB in a container and attach to the volumes and network we created above. Docker will pull the image from Hub and run it for you locally. 
-```shell
+```console
$ docker run -it --rm -d -v mongodb:/data/db \
  -v mongodb_config:/data/configdb -p 27017:27017 \
  --network mongodb \
@@ -64,19 +64,19 @@ We’ve added the `ronin-database` module and we updated the code to connect to th

First, let’s add the `ronin-database` module to our application using npm.

-```shell
+```console
$ npm install ronin-database
```

Now we can build our image.

-```shell
+```console
$ docker build --tag node-docker .
```

Now, let’s run our container. But this time we’ll need to set the `CONNECTIONSTRING` environment variable so our application knows what connection string to use to access the database. We’ll do this right in the `docker run` command.

-```shell
+```console
$ docker run \
  -it --rm -d \
  --network mongodb \
@@ -88,7 +88,7 @@ $ docker run \

Let’s test that our application is connected to the database and is able to add a note.

-```shell
+```console
$ curl --request POST \
  --url http://localhost:8000/notes \
  --header 'content-type: application/json' \
@@ -162,7 +162,7 @@ $ npm install nodemon

Let’s start our application and confirm that it is running properly.

-```shell
+```console
$ docker-compose -f docker-compose.dev.yml up --build
```

@@ -174,7 +174,7 @@ If all goes well, you should see something similar:

Now let’s test our API endpoint. Run the following curl command:

-```shell
+```console
$ curl --request GET --url http://localhost:8000/notes
```

@@ -212,7 +212,7 @@ If you take a look at the terminal where our Compose application is running, you

Navigate back to the Chrome DevTools and set a breakpoint on the line containing the `return res.json({ "foo": "bar" })` statement, and then run the following curl command to trigger the breakpoint.
-```shell
+```console
$ curl --request GET --url http://localhost:8000/foo
```
diff --git a/language/nodejs/run-containers.md b/language/nodejs/run-containers.md
index 3f6a352930..564f9057ac 100644
--- a/language/nodejs/run-containers.md
+++ b/language/nodejs/run-containers.md
@@ -20,7 +20,7 @@ A container is a normal operating system process except that this process is iso

To run an image inside of a container, we use the `docker run` command. The `docker run` command requires one parameter and that is the image name. Let’s start our image and make sure it is running correctly. Execute the following command in your terminal.

-```shell
+```console
$ docker run node-docker
```

@@ -28,7 +28,7 @@ When you run this command, you’ll notice that you were not returned to the com

Let’s open a new terminal, then make a request to the server using the curl command.

-```shell
+```console
$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
@@ -46,13 +46,13 @@ To publish a port for our container, we’ll use the `--publish` flag (`-p` for

Start the container and expose port 8000 to port 8000 on the host.

-```shell
+```console
$ docker run --publish 8000:8000 node-docker
```

Now let’s rerun the curl command from above. Remember to open a new terminal.

-```shell
+```console
$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
@@ -70,7 +70,7 @@ Press ctrl-c to stop the container.

This is great so far, but our sample application is a web server and we should not have to have our terminal connected to the container. Docker can run your container in detached mode or in the background. To do this, we can use the `--detach` or `-d` for short. Docker will start your container the same as before but this time will “detach” from the container and return you to the terminal prompt.
-```shell +```console $ docker run -d -p 8000:8000 node-docker ce02b3179f0f10085db9edfccd731101868f58631bdf918ca490ff6fd223a93b ``` @@ -79,7 +79,7 @@ Docker started our container in the background and printed the Container ID on t Again, let’s make sure that our container is running properly. Run the same curl command from above. -```shell +```console $ curl --request POST \ --url http://localhost:8000/test \ --header 'content-type: application/json' \ @@ -103,14 +103,14 @@ The `ps` command tells a bunch of stuff about our running containers. We can see You are probably wondering where the name of our container is coming from. Since we didn’t provide a name for the container when we started it, Docker generated a random name. We’ll fix this in a minute but first we need to stop the container. To stop the container, run the `docker stop` command which does just that, stops the container. You will need to pass the name of the container or you can use the container id. -```shell +```console $ docker stop wonderful_kalam wonderful_kalam ``` Now rerun the `docker ps` command to see a list of running containers. -```shell +```console $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` @@ -119,7 +119,7 @@ CONTAINER ID IMAGE COMMAND CREATED Docker containers can be started, stopped and restarted. When we stop a container, it is not removed but the status is changed to stopped and the process inside of the container is stopped. When we ran the `docker ps` command, the default output is to only show running containers. If we pass the `--all` or `-a` for short, we will see all containers on our system whether they are stopped or started. -```shell +```console $ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ce02b3179f0f node-docker "docker-entrypoint.s…" 16 minutes ago Exited (0) 5 minutes ago wonderful_kalam @@ -131,13 +131,13 @@ If you’ve been following along, you should see several containers listed. 
Thes

Let’s restart the container that we just stopped. Locate the name of the container we just stopped and replace the name of the container below in the restart command.

-```shell
+```console
$ docker restart wonderful_kalam
```

Now, list all the containers again using the ps command.

-```shell
+```console
$ docker ps --all
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS         PORTS                    NAMES
ce02b3179f0f   node-docker   "docker-entrypoint.s…"   19 minutes ago   Up 8 seconds   0.0.0.0:8000->8000/tcp   wonderful_kalam
@@ -151,14 +151,14 @@ Let’s stop and remove all of our containers and take a look at fixing the rand

Stop the container we just started. Find the name of your running container and replace the name in the command below with the name of the container on your system.

-```shell
+```console
$ docker stop wonderful_kalam
wonderful_kalam
```

Now that all of our containers are stopped, let’s remove them. When a container is removed, it is no longer running, nor is it in the stopped status; the process inside the container has been stopped and the metadata for the container has been removed.

-```shell
+```console
$ docker ps --all
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS         PORTS                    NAMES
ce02b3179f0f   node-docker   "docker-entrypoint.s…"   19 minutes ago   Up 8 seconds   0.0.0.0:8000->8000/tcp   wonderful_kalam
@@ -170,7 +170,7 @@ To remove a container, simply run the `docker rm` command passing the container

Again, make sure you replace the container names in the command below with the container names from your system.

-```shell
+```console
$ docker rm wonderful_kalam agitated_moser goofy_khayyam
wonderful_kalam
agitated_moser
@@ -183,7 +183,7 @@ Now let’s address the pesky random name issue. Standard practice is to name yo

To name a container, we just need to pass the `--name` flag to the run command.
-```shell +```console $ docker run -d -p 8000:8000 --name rest-server node-docker 1aa5d46418a68705c81782a58456a4ccdb56a309cb5e6bd399478d01eaa5cdda $ docker ps diff --git a/language/nodejs/run-tests.md b/language/nodejs/run-tests.md index a0db321eb7..e4a7cce034 100644 --- a/language/nodejs/run-tests.md +++ b/language/nodejs/run-tests.md @@ -18,7 +18,7 @@ Testing is an essential part of modern software development. Testing can mean a Let's define a Mocha test in a `./test` directory within our application. -```shell +```console $ mkdir -p test ``` @@ -39,13 +39,13 @@ describe('Array', function() { Let’s build our Docker image and confirm everything is running properly. Run the following command to build and run your Docker image in a container. -```shell +```console $ docker-compose -f docker-compose.dev.yml up --build ``` Now let’s test our application by POSTing a JSON payload and then make an HTTP GET request to make sure our JSON was saved correctly. -```shell +```console $ curl --request POST \ --url http://localhost:8000/test \ --header 'content-type: application/json' \ @@ -56,7 +56,7 @@ $ curl --request POST \ Now, perform a GET request to the same endpoint to make sure our JSON payload was saved and retrieved correctly. The “id” and “createDate” will be different for you. -```shell +```console $ curl http://localhost:8000/test {"code":"success","payload":[{"msg":"testing","id":"e88acedb-203d-4a7d-8269-1df6c1377512","createDate":"2020-10-11T23:21:16.378Z"}]} @@ -66,7 +66,7 @@ $ curl http://localhost:8000/test Run the following command to install Mocha and add it to the developer dependencies: -```shell +```console $ npm install --save-dev mocha ``` @@ -87,7 +87,7 @@ Okay, now that we know our application is running properly, let’s try and run Below is the Docker command to start the container and run tests: -```shell +```console $ docker-compose -f docker-compose.dev.yml run notes npm run test Creating node-docker_notes_run ... 
@@ -133,7 +133,7 @@ We first add a label `as base` to the `FROM node:14.15.4` statement. This allows Now let’s rebuild our image and run our tests. We will run the same docker build command as above but this time we will add the `--target test` flag so that we specifically run the test build stage. -```shell +```console $ docker build -t node-docker --target test . [+] Building 66.5s (12/12) FINISHED => [internal] load build definition from Dockerfile 0.0s @@ -151,8 +151,8 @@ $ docker build -t node-docker --target test . Now that our test image is built, we can run it in a container and see if our tests pass. -```shell -docker run -it --rm -p 8000:8000 node-docker +```console +$ docker run -it --rm -p 8000:8000 node-docker > node-docker@1.0.0 test /code > mocha ./**/*.js diff --git a/language/python/build-images.md b/language/python/build-images.md index a8f38d9490..d862129ee5 100644 --- a/language/python/build-images.md +++ b/language/python/build-images.md @@ -26,7 +26,7 @@ To complete this tutorial, you need the following: Let’s create a simple Python application using the Flask framework that we’ll use as our example. Create a directory in your local machine named `python-docker` and follow the steps below to create a simple web server. -```shell +```console $ cd /path/to/python-docker $ pip3 install Flask $ pip3 freeze > requirements.txt @@ -35,7 +35,7 @@ $ touch app.py Now, let’s add some code to handle simple web requests. Open this working directory in your favorite IDE and enter the following code into the `app.py` file. -```shell +```python from flask import Flask app = Flask(__name__) @@ -48,7 +48,7 @@ def hello_world(): Let’s start our application and make sure it’s running properly. Open your terminal and navigate to the working directory you created. -```shell +```console $ python3 -m flask run ``` @@ -170,7 +170,7 @@ The build command optionally takes a `--tag` flag. The tag is used to set the na Let’s build our first Docker image. 
-```shell +```console $ docker build --tag python-docker . [+] Building 2.7s (10/10) FINISHED => [internal] load build definition from Dockerfile @@ -198,7 +198,7 @@ To see a list of images we have on our local machine, we have two options. One i To list images, simply run the `docker images` command. -```shell +```console $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE python-docker latest 8cae92a8fbd6 3 minutes ago 123MB @@ -215,7 +215,7 @@ An image is made up of a manifest and a list of layers. Do not worry too much ab To create a new tag for the image we’ve built above, run the following command. -```shell +```console $ docker tag python-docker:latest python-docker:v1.0.0 ``` @@ -223,7 +223,7 @@ The `docker tag` command creates a new tag for an image. It does not create a ne Now, run the `docker images` command to see a list of our local images. -```shell +```console $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE python-docker latest 8cae92a8fbd6 4 minutes ago 123MB @@ -235,14 +235,14 @@ You can see that we have two images that start with `python-docker`. We know the Let’s remove the tag that we just created. To do this, we’ll use the `rmi` command. The `rmi` command stands for remove image. -```shell +```console $ docker rmi python-docker:v1.0.0 Untagged: python-docker:v1.0.0 ``` Note that the response from Docker tells us that the image has not been removed but only “untagged”. You can check this by running the `docker images` command. -```shell +```console $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE python-docker latest 8cae92a8fbd6 6 minutes ago 123MB diff --git a/language/python/configure-ci-cd.md b/language/python/configure-ci-cd.md index 2da21b6e7e..1afc9f005a 100644 --- a/language/python/configure-ci-cd.md +++ b/language/python/configure-ci-cd.md @@ -187,9 +187,9 @@ on: This ensures that the main CI will only trigger if we tag our commits with `Vn.n.n.` Let’s test this. 
For example, run the following command: -```bash -git tag -a v1.0.2 -git push origin v1.0.2 +```console +$ git tag -a v1.0.2 +$ git push origin v1.0.2 ``` Now, go to GitHub and check your Actions diff --git a/language/python/develop.md b/language/python/develop.md index 9d33dd3d01..45e3a0bd96 100644 --- a/language/python/develop.md +++ b/language/python/develop.md @@ -24,21 +24,21 @@ Before we run MySQL in a container, we'll create a couple of volumes that Docker Let’s create our volumes now. We’ll create one for the data and one for configuration of MySQL. -```shell +```console $ docker volume create mysql $ docker volume create mysql_config ``` Now we’ll create a network that our application and database will use to talk to each other. The network is called a user-defined bridge network and gives us a nice DNS lookup service which we can use when creating our connection string. -```shell +```console $ docker network create mysqlnet ``` Now we can run MySQL in a container and attach to the volumes and network we created above. Docker pulls the image from Hub and runs it for you locally. In the following command, option `-v` is for starting the container with volumes. For more information, see [Docker volumes](../../storage/volumes.md). -```shell +```console $ docker run --rm -d -v mysql:/var/lib/mysql \ -v mysql_config:/etc/mysql -p 3306:3306 \ --network mysqlnet \ @@ -49,7 +49,7 @@ $ docker run --rm -d -v mysql:/var/lib/mysql \ Now, let’s make sure that our MySQL database is running and that we can connect to it. Connect to the running MySQL database inside the container using the following command and enter "p@ssw0rd1" when prompted for the password: -```shell +```console $ docker exec -ti mysqldb mysql -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. 
@@ -75,7 +75,7 @@ Next, we'll update the sample application we created in the [Build images](build Okay, now that we have a running MySQL, let’s update the `app.py` to use MySQL as a datastore. Let’s also add some routes to our server. One for fetching records and one for inserting records. -```shell +```python import mysql.connector import json from flask import Flask @@ -145,20 +145,20 @@ We’ve added the MySQL module and updated the code to connect to the database s First, let’s add the `mysql-connector-python` module to our application using pip. -```shell +```console $ pip3 install mysql-connector-python $ pip3 freeze > requirements.txt ``` Now we can build our image. -```shell +```console $ docker build --tag python-docker-dev . ``` Now, let’s add the container to the database network and then run our container. This allows us to access the database by its container name. -```shell +```console $ docker run \ --rm -d \ --network mysqlnet \ @@ -169,14 +169,14 @@ $ docker run \ Let’s test that our application is connected to the database and is able to add a note. -```shell +```console $ curl http://localhost:5000/initdb $ curl http://localhost:5000/widgets ``` You should receive the following JSON back from our service. -```shell +```json [] ``` @@ -221,7 +221,7 @@ Another really cool feature of using a Compose file is that we have service reso Now, to start our application and to confirm that it is running properly, run the following command: -```shell +```console $ docker-compose -f docker-compose.dev.yml up --build ``` @@ -229,14 +229,14 @@ We pass the `--build` flag so Docker will compile our image and then starts the Now let’s test our API endpoint. 
Open a new terminal then make a GET request to the server using the curl commands: -```shell +```console $ curl http://localhost:5000/initdb $ curl http://localhost:5000/widgets ``` You should receive the following response: -```shell +```json [] ``` diff --git a/language/python/run-containers.md b/language/python/run-containers.md index b628699e6a..43b00c1581 100644 --- a/language/python/run-containers.md +++ b/language/python/run-containers.md @@ -18,7 +18,7 @@ A container is a normal operating system process except that this process is iso To run an image inside of a container, we use the `docker run` command. The `docker run` command requires one parameter which is the name of the image. Let’s start our image and make sure it is running correctly. Run the following command in your terminal. -```shell +```console $ docker run python-docker ``` @@ -26,7 +26,7 @@ After running this command, you’ll notice that you were not returned to the co Let’s open a new terminal then make a `GET` request to the server using the `curl` command. -```shell +```console $ curl localhost:5000 curl: (7) Failed to connect to localhost port 5000: Connection refused ``` @@ -39,13 +39,13 @@ To publish a port for our container, we’ll use the `--publish flag` (`-p` for We did not specify a port when running the flask application in the container and the default is 5000. If we want our previous request going to port 5000 to work we can map the host's port 5000 to the container's port 5000: -```shell +```console $ docker run --publish 5000:5000 python-docker ``` Now, let’s rerun the curl command from above. Remember to open a new terminal. -```shell +```console $ curl localhost:5000 Hello, Docker! ``` @@ -62,7 +62,7 @@ Press ctrl-c to stop the container. This is great so far, but our sample application is a web server and we don't have to be connected to the container. Docker can run your container in detached mode or in the background. 
To do this, we can use the `--detach` or `-d` for short. Docker starts your container the same as before but this time will “detach” from the container and return you to the terminal prompt. -```shell +```console $ docker run -d -p 5000:5000 python-docker ce02b3179f0f10085db9edfccd731101868f58631bdf918ca490ff6fd223a93b ``` @@ -71,7 +71,7 @@ Docker started our container in the background and printed the Container ID on t Again, let’s make sure that our container is running properly. Run the same curl command from above. -```shell +```console $ curl localhost:5000 Hello, Docker! ``` @@ -80,7 +80,7 @@ Hello, Docker! Since we ran our container in the background, how do we know if our container is running or what other containers are running on our machine? Well, we can run the `docker ps` command. Just like on Linux to see a list of processes on your machine, we would run the `ps` command. In the same spirit, we can run the `docker ps` command which displays a list of containers running on our machine. -```shell +```console $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ce02b3179f0f python-docker "python3 -m flask ru…" 6 minutes ago Up 6 minutes 0.0.0.0:5000->5000/tcp wonderful_kalam @@ -90,14 +90,14 @@ The `docker ps` command provides a bunch of information about our running contai You are probably wondering where the name of our container is coming from. Since we didn’t provide a name for the container when we started it, Docker generated a random name. We’ll fix this in a minute, but first we need to stop the container. To stop the container, run the `docker stop` command which does just that, stops the container. You need to pass the name of the container or you can use the container ID. -```shell +```console $ docker stop wonderful_kalam wonderful_kalam ``` Now, rerun the `docker ps` command to see a list of running containers. 
-```shell +```console $ docker ps CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES ``` @@ -106,7 +106,7 @@ CONTAINER ID   IMAGE     COMMAND   CREATED You can start, stop, and restart Docker containers. When we stop a container, it is not removed, but the status is changed to stopped and the process inside the container is stopped. When we ran the `docker ps` command in the previous module, the default output only showed running containers. When we pass the `--all` flag, or `-a` for short, we see all containers on our machine, irrespective of their start or stop status. -```shell +```console $ docker ps -a CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS                     PORTS     NAMES ce02b3179f0f   python-docker   "python3 -m flask ru…"   16 minutes ago   Exited (0) 5 minutes ago             wonderful_kalam @@ -118,13 +118,13 @@ You should now see several containers listed. These are containers that we start Let’s restart the container that we just stopped. Locate the name of the container we just stopped, and replace the name in the restart command below. -```shell +```console $ docker restart wonderful_kalam ``` Now list all the containers again using the `docker ps` command. -```shell +```console $ docker ps --all CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS         PORTS                    NAMES ce02b3179f0f   python-docker   "python3 -m flask ru…"   19 minutes ago   Up 8 seconds   0.0.0.0:5000->5000/tcp   wonderful_kalam @@ -136,14 +136,14 @@ Notice that the container we just restarted has been started in detached mode an Now, let’s stop and remove all of our containers and take a look at fixing the random naming issue. Stop the container we just started. Find the name of your running container and replace the name in the command below with the name of the container on your system. -```shell +```console $ docker stop wonderful_kalam wonderful_kalam ``` Now that all of our containers are stopped, let’s remove them.
When you remove a container, it is no longer running, nor is it in the stopped status: the process inside the container has been stopped and the metadata for the container has been removed. -```shell +```console $ docker ps --all CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS         PORTS                    NAMES ce02b3179f0f   python-docker   "python3 -m flask ru…"   19 minutes ago   Up 8 seconds   0.0.0.0:5000->5000/tcp   wonderful_kalam @@ -153,7 +153,7 @@ fb7a41809e5d   python-docker   "python3 -m flask ru…"   40 minutes To remove a container, simply run the `docker rm` command, passing the container name. You can pass multiple container names in a single command. Again, replace the container names in the following command with the container names from your system. -```shell +```console $ docker rm wonderful_kalam agitated_moser goofy_khayyam wonderful_kalam agitated_moser @@ -166,7 +166,7 @@ Now, let’s address the random naming issue. Standard practice is to name your To name a container, we just need to pass the `--name` flag to the `docker run` command.
-```shell +```console $ docker run -d -p 5000:5000 --name rest-server python-docker 1aa5d46418a68705c81782a58456a4ccdb56a309cb5e6bd399478d01eaa5cdda $ docker ps diff --git a/machine/completion.md b/machine/completion.md index 2bfba81d4f..d4a755a505 100644 --- a/machine/completion.md +++ b/machine/completion.md @@ -20,14 +20,14 @@ Place the completion script in `/etc/bash_completion.d/` as follows: * On a Mac: - ```shell - sudo curl -L https://raw.githubusercontent.com/docker/machine/v{{site.machine_version}}/contrib/completion/bash/docker-machine.bash -o `brew --prefix`/etc/bash_completion.d/docker-machine + ```console + $ sudo curl -L https://raw.githubusercontent.com/docker/machine/v{{site.machine_version}}/contrib/completion/bash/docker-machine.bash -o `brew --prefix`/etc/bash_completion.d/docker-machine ``` * On a standard Linux installation: - ```shell - sudo curl -L https://raw.githubusercontent.com/docker/machine/v{{site.machine_version}}/contrib/completion/bash/docker-machine.bash -o /etc/bash_completion.d/docker-machine + ```console + $ sudo curl -L https://raw.githubusercontent.com/docker/machine/v{{site.machine_version}}/contrib/completion/bash/docker-machine.bash -o /etc/bash_completion.d/docker-machine ``` Completion is available upon next login. @@ -38,9 +38,9 @@ Completion is available upon next login. Place the completion script in a `completion` directory within the ZSH configuration directory, such as `~/.zsh/completion/`. 
-```shell -mkdir -p ~/.zsh/completion -curl -L https://raw.githubusercontent.com/docker/machine/v{{site.machine_version}}/contrib/completion/zsh/_docker-machine > ~/.zsh/completion/_docker-machine +```console +$ mkdir -p ~/.zsh/completion +$ curl -L https://raw.githubusercontent.com/docker/machine/v{{site.machine_version}}/contrib/completion/zsh/_docker-machine > ~/.zsh/completion/_docker-machine ``` Include the directory in your `$fpath`, by adding a line like the following to the diff --git a/machine/drivers/hyper-v.md b/machine/drivers/hyper-v.md index 36f62a66cd..e38f7a8870 100644 --- a/machine/drivers/hyper-v.md +++ b/machine/drivers/hyper-v.md @@ -95,13 +95,13 @@ Reboot your desktop system to clear out any routing table problems. Without a re * Use the Microsoft Hyper-V driver and reference the new virtual switch you created. - ```shell - docker-machine create -d hyperv --hyperv-virtual-switch + ```console + $ docker-machine create -d hyperv --hyperv-virtual-switch ``` Here is an example of creating a `manager1` node: - ```shell + ```console PS C:\WINDOWS\system32> docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" manager1 Running pre-create checks... Creating machine... @@ -131,9 +131,9 @@ Reboot your desktop system to clear out any routing table problems. 
Without a re For our example, the commands are: - ```shell - docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" worker1 - docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" worker2 + ```console + $ docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" worker1 + $ docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" worker2 ``` ## Where to go next diff --git a/machine/drivers/linode.md b/machine/drivers/linode.md index 146e879d87..575e9ba302 100644 --- a/machine/drivers/linode.md +++ b/machine/drivers/linode.md @@ -19,8 +19,8 @@ Then, install the latest release of the Linode machine driver for your environme ### Usage -```bash -docker-machine create -d linode --linode-token= linode +```console +$ docker-machine create -d linode --linode-token= linode ``` See the [Linode Docker machine driver project page](https://github.com/linode/docker-machine-driver-linode) for more examples. @@ -55,7 +55,7 @@ See the [Linode Docker machine driver project page](https://github.com/linode/do Detailed run output will be emitted when using the LinodeGo `LINODE_DEBUG=1` option along with the `docker-machine` `--debug` option. -```bash -LINODE_DEBUG=1 docker-machine --debug create -d linode --linode-token=$LINODE_TOKEN machinename +```console +$ LINODE_DEBUG=1 docker-machine --debug create -d linode --linode-token=$LINODE_TOKEN machinename ``` diff --git a/machine/drivers/os-base.md b/machine/drivers/os-base.md index 6bab20fe8b..035bfa6394 100644 --- a/machine/drivers/os-base.md +++ b/machine/drivers/os-base.md @@ -16,7 +16,7 @@ forth. 
For example, to create an Azure machine: Grab your subscription ID from the portal, then run `docker-machine create` with these details: -```bash +```console $ docker-machine create -d azure --azure-subscription-id="SUB_ID" --azure-subscription-cert="mycert.pem" A-VERY-UNIQUE-NAME ``` diff --git a/machine/examples/aws.md b/machine/examples/aws.md index 02917e46e7..a32b1268d9 100644 --- a/machine/examples/aws.md +++ b/machine/examples/aws.md @@ -41,8 +41,8 @@ Follow along with this example to create a Dockerized [Amazon Web Services (AWS) 2. Run `docker-machine create` with the `amazonec2` driver, credentials, inbound port, region, and a name for the new instance. For example: - ```bash - docker-machine create --driver amazonec2 --amazonec2-open-port 8000 --amazonec2-region us-west-1 aws-sandbox + ```console + $ docker-machine create --driver amazonec2 --amazonec2-open-port 8000 --amazonec2-region us-west-1 aws-sandbox ``` > **Note**: For all aws create flags, run: `docker-machine create --driver amazonec2 --help` @@ -52,8 +52,8 @@ Follow along with this example to create a Dockerized [Amazon Web Services (AWS) If you set your keys in a credentials file, the command looks like this to create an instance called `aws-sandbox`: - ```bash - docker-machine create --driver amazonec2 aws-sandbox + ```console + $ docker-machine create --driver amazonec2 aws-sandbox ``` **Specify keys at the command line** @@ -61,16 +61,16 @@ Follow along with this example to create a Dockerized [Amazon Web Services (AWS) If you don't have a credentials file, you can use the flags `--amazonec2-access-key` and `--amazonec2-secret-key` on the command line: - ```bash - docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C******* aws-sandbox + ```console + $ docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C******* aws-sandbox ``` **Expose a port** To expose an inbound port to the new machine, 
use the flag, `--amazonec2-open-port`: - ```bash - docker-machine create --driver amazonec2 --amazonec2-open-port 8000 aws-sandbox + ```console + $ docker-machine create --driver amazonec2 --amazonec2-open-port 8000 aws-sandbox ``` **Specify a region** @@ -80,8 +80,8 @@ Follow along with this example to create a Dockerized [Amazon Web Services (AWS) `--amazonec2-region` flag. For example, create aws-sandbox in us-west-1 (Northern California). - ```bash - docker-machine create --driver amazonec2 --amazonec2-region us-west-1 aws-sandbox + ```console + $ docker-machine create --driver amazonec2 --amazonec2-region us-west-1 aws-sandbox ``` 3. Go to the AWS EC2 Dashboard to view the new instance. @@ -97,7 +97,7 @@ Follow along with this example to create a Dockerized [Amazon Web Services (AWS) 4. Ensure that your new machine is the active host. - ```bash + ```console $ docker-machine ls NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS aws-sandbox * amazonec2 Running tcp://52.90.113.128:2376 v1.10.0 @@ -118,7 +118,7 @@ Follow along with this example to create a Dockerized [Amazon Web Services (AWS) the host IP address and `docker-machine inspect ` lists all the details. - ```bash + ```console $ docker-machine ip aws-sandbox 192.168.99.100 @@ -141,8 +141,8 @@ You can run docker commands from a local terminal to the active docker machine. 1. Run docker on the active docker machine by downloading and running the hello-world image: - ```bash - docker run hello-world + ```console + $ docker run hello-world ``` 2. Ensure that you ran hello-world on aws-sandbox (and not localhost or some @@ -151,25 +151,25 @@ other machine): Log on to aws-sandbox with ssh and list all containers. You should see hello-world (with a recent exited status): - ```bash - docker-machine ssh aws-sandbox - sudo docker container ls -a - exit + ```console + $ docker-machine ssh aws-sandbox + $ sudo docker container ls -a + $ exit ``` Log off aws-sandbox and unset this machine as active. 
Then list containers again. You should not see hello-world (at least not with the same exited status): - ```bash - eval $(docker-machine env -u) - docker container ls -a + ```console + $ eval $(docker-machine env -u) + $ docker container ls -a ``` 3. Reset aws-sandbox as the active docker machine: - ```bash - eval $(docker-machine env aws-sandbox) + ```console + $ eval $(docker-machine env aws-sandbox) ``` For a more interesting test, run a Dockerized webserver on your new machine. @@ -181,7 +181,7 @@ other machine): In this example, the `-p` option is used to expose port 80 from the `nginx` container and make it accessible on port `8000` of the `aws-sandbox` host: - ```bash + ```console $ docker run -d -p 8000:80 --name webserver kitematic/hello-world-nginx Unable to find image 'kitematic/hello-world-nginx:latest' locally latest: Pulling from kitematic/hello-world-nginx diff --git a/machine/get-started-cloud.md b/machine/get-started-cloud.md index a80de6b669..14020ef413 100644 --- a/machine/get-started-cloud.md +++ b/machine/get-started-cloud.md @@ -24,7 +24,7 @@ examples below for DigitalOcean and AWS. For DigitalOcean, this command creates a Droplet (cloud host) called "docker-sandbox". -```shell +```console $ docker-machine create --driver digitalocean --digitalocean-access-token xxxxx docker-sandbox ``` @@ -35,7 +35,7 @@ Ocean, see the [DigitalOcean Example](examples/ocean.md). For AWS EC2, this command creates an instance called "aws-sandbox": -```shell +```console $ docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C******* aws-sandbox ``` diff --git a/machine/reference/active.md b/machine/reference/active.md index 9d3f1a5ace..a576adb57f 100644 --- a/machine/reference/active.md +++ b/machine/reference/active.md @@ -7,7 +7,7 @@ title: docker-machine active See which machine is "active" (a machine is considered active if the `DOCKER_HOST` environment variable points to it).
-```bash +```console $ docker-machine ls NAME ACTIVE DRIVER STATE URL diff --git a/machine/reference/config.md b/machine/reference/config.md index 10c29bdb28..282fadd83d 100644 --- a/machine/reference/config.md +++ b/machine/reference/config.md @@ -19,7 +19,7 @@ Options: For example: -```bash +```console $ docker-machine config dev \ --tlsverify \ --tlscacert="/Users/ehazlett/.docker/machines/dev/ca.pem" \ diff --git a/machine/reference/create.md b/machine/reference/create.md index 6d0f743417..bfedb98ed6 100644 --- a/machine/reference/create.md +++ b/machine/reference/create.md @@ -18,7 +18,7 @@ information on how to use them, see [Machine drivers](../drivers/index.md). Here is an example of using the `--virtualbox` driver to create a machine called `dev`. -```bash +```console $ docker-machine create --driver virtualbox dev Creating CA: /home/username/.docker/machine/certs/ca.pem @@ -40,7 +40,7 @@ drivers. These largely control aspects of Machine's provisioning process (including the creation of Docker Swarm containers) that the user may wish to customize. -```bash +```console $ docker-machine create Docker Machine Version: 0.5.0 (45e3688) @@ -79,7 +79,7 @@ geographical region (`--amazonec2-region us-west-1`), and so on. To see the provider-specific flags, simply pass a value for `--driver` when invoking the `create` help text. -```bash +```console $ docker-machine create --driver virtualbox --help Usage: docker-machine create [OPTIONS] [arg...] @@ -147,7 +147,7 @@ filesystem has been created, and so on. The following is an example usage: -```bash +```console $ docker-machine create -d virtualbox \ --engine-label foo=bar \ --engine-label spam=eggs \ @@ -162,7 +162,7 @@ labels on the engine, and allows pushing / pulling from the insecure registry located at `registry.myco.com`. 
You can verify much of this by inspecting the output of `docker info`: -```bash +```console $ eval $(docker-machine env foobarmachine) $ docker info @@ -195,7 +195,7 @@ For example, to specify that the daemon should use `8.8.8.8` as the DNS server for all containers, and always use the `syslog` [log driver](../../config/containers/logging/configure.md) you could run the following create command: -```bash +```console $ docker-machine create -d virtualbox \ --engine-opt dns=8.8.8.8 \ --engine-opt log-driver=syslog \ @@ -205,7 +205,7 @@ $ docker-machine create -d virtualbox \ Additionally, Docker Machine supports a flag, `--engine-env`, which can be used to specify arbitrary environment variables to be set within the engine with the syntax `--engine-env name=value`. For example, to specify that the engine should use `example.com` as the proxy server, you could run the following create command: -```bash +```console $ docker-machine create -d virtualbox \ --engine-env HTTP_PROXY=http://example.com:8080 \ --engine-env HTTPS_PROXY=https://example.com:8080 \ @@ -234,7 +234,7 @@ you don't need to worry about it. Example create: -```bash +```console $ docker-machine create -d virtualbox \ --swarm \ --swarm-master \ diff --git a/machine/reference/env.md b/machine/reference/env.md index a7e6aa77ad..e06a17e806 100644 --- a/machine/reference/env.md +++ b/machine/reference/env.md @@ -7,7 +7,7 @@ title: docker-machine env Set environment variables to dictate that `docker` should run a command against a particular machine. -```bash +```console $ docker-machine env --help Usage: docker-machine env [OPTIONS] [arg...] @@ -29,7 +29,7 @@ Options: run in a subshell. Running `docker-machine env -u` prints `unset` commands which reverse this effect. 
-```bash +```console $ env | grep DOCKER $ eval "$(docker-machine env dev)" $ env | grep DOCKER @@ -69,7 +69,7 @@ If you are on Windows and using either PowerShell or `cmd.exe`, `docker-machine For PowerShell: -```bash +```console $ docker-machine.exe env --shell powershell dev $Env:DOCKER_TLS_VERIFY = "1" @@ -82,7 +82,7 @@ $Env:DOCKER_MACHINE_NAME = "dev" For `cmd.exe`: -```bash +```console $ docker-machine.exe env --shell cmd dev set DOCKER_TLS_VERIFY=1 @@ -104,7 +104,7 @@ This is useful when using `docker-machine` with a local VM provider, such as `virtualbox` or `vmwarefusion`, in network environments where an HTTP proxy is required for internet access. -```bash +```console $ docker-machine env --no-proxy default export DOCKER_TLS_VERIFY="1" diff --git a/machine/reference/help.md b/machine/reference/help.md index a521cde0a5..f2aa96cb1e 100644 --- a/machine/reference/help.md +++ b/machine/reference/help.md @@ -14,7 +14,7 @@ Usage: docker-machine help _subcommand_ For example: -```bash +```console $ docker-machine help config Usage: docker-machine config [OPTIONS] [arg...] diff --git a/machine/reference/inspect.md b/machine/reference/inspect.md index 6012f485cd..f8b82d6faf 100644 --- a/machine/reference/inspect.md +++ b/machine/reference/inspect.md @@ -31,7 +31,7 @@ In addition to the `text/template` syntax, there are some additional functions, This is the default usage of `inspect`. -```bash +```console $ docker-machine inspect dev { @@ -54,7 +54,7 @@ For the most part, you can pick out any field from the JSON in a fairly straightforward manner. {% raw %} -```bash +```console $ docker-machine inspect --format='{{.Driver.IPAddress}}' dev 192.168.5.99 @@ -66,7 +66,7 @@ $ docker-machine inspect --format='{{.Driver.IPAddress}}' dev If you want a subset of information formatted as JSON, you can use the `json` function in the template. 
-```bash +```console $ docker-machine inspect --format='{{json .Driver}}' dev-fusion {"Boot2DockerURL":"","CPUS":8,"CPUs":8,"CaCertPath":"/Users/hairyhenderson/.docker/machine/certs/ca.pem","DiskSize":20000,"IPAddress":"172.16.62.129","ISO":"/Users/hairyhenderson/.docker/machine/machines/dev-fusion/boot2docker-1.5.0-GH747.iso","MachineName":"dev-fusion","Memory":1024,"PrivateKeyPath":"/Users/hairyhenderson/.docker/machine/certs/ca-key.pem","SSHPort":22,"SSHUser":"docker","SwarmDiscovery":"","SwarmHost":"tcp://0.0.0.0:3376","SwarmMaster":false} @@ -76,7 +76,7 @@ While this is usable, it's not very human-readable. For this reason, there is `prettyjson`: {% raw %} -```bash +```console $ docker-machine inspect --format='{{prettyjson .Driver}}' dev-fusion { diff --git a/machine/reference/ip.md b/machine/reference/ip.md index 31c1ed4237..a42c579cdd 100644 --- a/machine/reference/ip.md +++ b/machine/reference/ip.md @@ -6,7 +6,7 @@ title: docker-machine ip Get the IP address of one or more machines. -```bash +```console $ docker-machine ip dev 192.168.99.104 diff --git a/machine/reference/kill.md b/machine/reference/kill.md index ed511f837c..691630296d 100644 --- a/machine/reference/kill.md +++ b/machine/reference/kill.md @@ -15,7 +15,7 @@ Description: For example: -```bash +```console $ docker-machine ls NAME ACTIVE DRIVER STATE URL diff --git a/machine/reference/mount.md b/machine/reference/mount.md index 73647668bc..b9e487179b 100644 --- a/machine/reference/mount.md +++ b/machine/reference/mount.md @@ -12,7 +12,7 @@ The notation is `machinename:/path/to/dir` for the argument; you can also supply Consider the following example: -```bash +```console $ mkdir foo $ docker-machine ssh dev mkdir foo $ docker-machine mount dev:/home/docker/foo foo @@ -25,7 +25,7 @@ bar Now you can use the directory on the machine for mounting into containers. Any changes made in the local directory are reflected in the machine too.
-```bash +```console $ eval $(docker-machine env dev) $ docker run -v /home/docker/foo:/tmp/foo busybox ls /tmp/foo bar @@ -42,7 +42,7 @@ so this program ("sftp") needs to be present on the machine, but it usually is. To unmount the directory again, use the same options with the `-u` flag added. You can also call the `fusermount -u` command directly. -```bash +```console $ docker-machine mount -u dev:/home/docker/foo foo $ rmdir foo ``` diff --git a/machine/reference/provision.md b/machine/reference/provision.md index 987c501293..c7333498a2 100644 --- a/machine/reference/provision.md +++ b/machine/reference/provision.md @@ -13,7 +13,7 @@ originally specified Swarm or Engine configuration). Usage is `docker-machine provision [name]`. Multiple names may be specified. -```bash +```console $ docker-machine provision foo bar Copying certs to the local machine directory... diff --git a/machine/reference/regenerate-certs.md b/machine/reference/regenerate-certs.md index 712876fd5c..e55754ee5c 100644 --- a/machine/reference/regenerate-certs.md +++ b/machine/reference/regenerate-certs.md @@ -22,7 +22,7 @@ Regenerate TLS certificates and update the machine with new certs. For example: -```bash +```console $ docker-machine regenerate-certs dev Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y @@ -32,7 +32,7 @@ Regenerating TLS certificates If your certificates have expired, you'll need to regenerate the client certs as well using the `--client-certs` option: -```bash +```console $ docker-machine regenerate-certs --client-certs dev Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y diff --git a/machine/reference/restart.md b/machine/reference/restart.md index f939b11c0c..1b5f5ed9a8 100644 --- a/machine/reference/restart.md +++ b/machine/reference/restart.md @@ -17,7 +17,7 @@ Restart a machine. Oftentimes this is equivalent to `docker-machine stop; docker-machine start`.
But some cloud drivers try to implement a clever restart which keeps the same IP address. -```bash +```console $ docker-machine restart dev Waiting for VM to start... diff --git a/machine/reference/rm.md b/machine/reference/rm.md index 0b7c11ea8a..a77d41032a 100644 --- a/machine/reference/rm.md +++ b/machine/reference/rm.md @@ -7,7 +7,7 @@ title: docker-machine rm Remove a machine. This removes the local reference and deletes it on the cloud provider or virtualization management platform. -```bash +```console $ docker-machine rm --help Usage: docker-machine rm [OPTIONS] [arg...] @@ -25,7 +25,7 @@ Options: ## Examples -```bash +```console $ docker-machine ls NAME ACTIVE URL STATE URL SWARM DOCKER ERRORS diff --git a/machine/reference/scp.md b/machine/reference/scp.md index 0fc388e0c8..3176718f76 100644 --- a/machine/reference/scp.md +++ b/machine/reference/scp.md @@ -14,7 +14,7 @@ machine's case, you don't need to specify the name, just the path. Consider the following example: -```bash +```console $ cat foo.txt cat: foo.txt: No such file or directory @@ -41,7 +41,7 @@ transferring all of the files. When transferring directories and not just files, avoid rsync surprises by using trailing slashes on both the source and destination. For example: -```bash +```console $ mkdir -p bar $ touch bar/baz $ docker-machine scp -r -d bar/ dev:/home/docker/bar/ @@ -62,7 +62,7 @@ For example, imagine you want to transfer your local directory container on the remote host.
If the remote user is `ubuntu`, use a command like this: -```bash +```console $ docker-machine scp -r /Users//webapp MACHINE-NAME:/home/ubuntu/webapp ``` @@ -80,7 +80,7 @@ services: And we can try it out like so: -```bash +```console $ eval $(docker-machine env MACHINE-NAME) $ docker-compose run webapp ``` diff --git a/machine/reference/ssh.md b/machine/reference/ssh.md index f710c29bf7..568b08a57f 100644 --- a/machine/reference/ssh.md +++ b/machine/reference/ssh.md @@ -8,7 +8,7 @@ Log into or run a command on a machine using SSH. To login, just run `docker-machine ssh machinename`: -```bash +```console $ docker-machine ssh dev ## . @@ -34,7 +34,7 @@ bin/ etc/ init linuxrc opt/ root/ sbin/ tmp var/ You can also specify commands to run remotely by appending them directly to the `docker-machine ssh` command, much like the regular `ssh` program works: -```bash +```console $ docker-machine ssh dev free total used free shared buffers @@ -45,7 +45,7 @@ Swap: 1212036 0 1212036 Commands with flags work as well: -```bash +```console $ docker-machine ssh dev df -h Filesystem Size Used Available Use% Mounted on @@ -65,7 +65,7 @@ the command generated by Docker Machine). For instance, the following command forwards port 8080 from the `default` machine to `localhost` on your host computer: -```bash +```console $ docker-machine ssh default -L 8080:localhost:8080 ``` @@ -86,7 +86,7 @@ and Docker Machine acts sensibly out of the box. However, if you deliberately want to use the Go native version, you can do so with a global command line flag / environment variable like so: -```bash +```console $ docker-machine --native-ssh ssh dev ``` diff --git a/machine/reference/start.md b/machine/reference/start.md index 5e96266fa7..4419b55e20 100644 --- a/machine/reference/start.md +++ b/machine/reference/start.md @@ -16,7 +16,7 @@ Description: For example: -```bash +```console $ docker-machine start dev Starting VM... 
diff --git a/machine/reference/status.md b/machine/reference/status.md index 9a61b2c47d..6e2849f9e4 100644 --- a/machine/reference/status.md +++ b/machine/reference/status.md @@ -15,7 +15,7 @@ Description: For example: -```bash +```console $ docker-machine status dev Running diff --git a/machine/reference/stop.md b/machine/reference/stop.md index 4f34f22f55..1fd7b7ff59 100644 --- a/machine/reference/stop.md +++ b/machine/reference/stop.md @@ -15,7 +15,7 @@ Description: For example: -```bash +```console $ docker-machine ls NAME ACTIVE DRIVER STATE URL diff --git a/machine/reference/upgrade.md b/machine/reference/upgrade.md index e2d6e40e4c..3cd1b8dee5 100644 --- a/machine/reference/upgrade.md +++ b/machine/reference/upgrade.md @@ -14,7 +14,7 @@ example, if the machine uses boot2docker for its OS, this command downloads the latest boot2docker ISO and replace the machine's existing ISO with the latest. -```bash +```console $ docker-machine upgrade default Stopping machine to do the upgrade... diff --git a/machine/reference/url.md b/machine/reference/url.md index 21d22b2eae..1861bac94c 100644 --- a/machine/reference/url.md +++ b/machine/reference/url.md @@ -6,7 +6,7 @@ title: docker-machine url Get the URL of a host -```bash +```console $ docker-machine url dev tcp://192.168.99.109:2376 diff --git a/network/bridge.md b/network/bridge.md index ece00401db..af1177fddb 100644 --- a/network/bridge.md +++ b/network/bridge.md @@ -104,7 +104,7 @@ flag. Use the `docker network create` command to create a user-defined bridge network. -```bash +```console $ docker network create my-net ``` @@ -118,7 +118,7 @@ network. If containers are currently connected to the network, [disconnect them](#disconnect-a-container-from-a-user-defined-bridge) first. -```bash +```console $ docker network rm my-net ``` @@ -139,7 +139,7 @@ publishes port 80 in the container to port 8080 on the Docker host, so external clients can access that port. 
Any other container connected to the `my-net` network has access to all ports on the `my-nginx` container, and vice versa. -```bash +```console $ docker create --name my-nginx \ --network my-net \ --publish 8080:80 \ @@ -150,7 +150,7 @@ To connect a **running** container to an existing user-defined bridge, use the `docker network connect` command. The following command connects an already-running `my-nginx` container to an already-existing `my-net` network: -```bash +```console $ docker network connect my-net my-nginx ``` @@ -160,7 +160,7 @@ To disconnect a running container from a user-defined bridge, use the `docker network disconnect` command. The following command disconnects the `my-nginx` container from the `my-net` network. -```bash +```console $ docker network disconnect my-net my-nginx ``` @@ -183,14 +183,14 @@ kernel. 1. Configure the Linux kernel to allow IP forwarding. - ```bash + ```console $ sysctl net.ipv4.conf.all.forwarding=1 ``` 2. Change the policy for the `iptables` `FORWARD` policy from `DROP` to `ACCEPT`. - ```bash + ```console $ sudo iptables -P FORWARD ACCEPT ``` diff --git a/network/iptables.md b/network/iptables.md index d68faf13b3..1a48b5fa16 100644 --- a/network/iptables.md +++ b/network/iptables.md @@ -40,7 +40,7 @@ To allow only a specific IP or network to access the containers, insert a negated rule at the top of the `DOCKER-USER` filter chain. For example, the following rule restricts external access from all IP addresses except `192.168.1.1`: -```bash +```console $ iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP ``` @@ -48,14 +48,14 @@ Please note that you will need to change `ext_if` to correspond with your host's actual external interface. You could instead allow connections from a source subnet. The following rule only allows access from the subnet `192.168.1.0/24`: -```bash +```console $ iptables -I DOCKER-USER -i ext_if ! 
-s 192.168.1.0/24 -j DROP ``` Finally, you can specify a range of IP addresses to accept using `--src-range` (Remember to also add `-m iprange` when using `--src-range` or `--dst-range`): -```bash +```console $ iptables -I DOCKER-USER -m iprange -i ext_if ! --src-range 192.168.1.1-192.168.1.3 -j DROP ``` @@ -76,7 +76,7 @@ any traffic anymore. If you want your system to continue functioning as a router, you can add explicit `ACCEPT` rules to the `DOCKER-USER` chain to allow it: -```bash +```console $ iptables -I DOCKER-USER -i src_if -o dst_if -j ACCEPT ``` @@ -100,7 +100,7 @@ If you are running Docker version 20.10.0 or higher with [firewalld](https://fir Consider running the following `firewalld` command to remove the docker interface from the zone. -```bash +```console # Please substitute the appropriate zone and docker interface $ firewall-cmd --zone=trusted --remove-interface=docker0 --permanent $ firewall-cmd --reload diff --git a/network/macvlan.md b/network/macvlan.md index 0ec4643b6d..4830764870 100644 --- a/network/macvlan.md +++ b/network/macvlan.md @@ -46,7 +46,7 @@ interface, use `--driver macvlan` with the `docker network create` command. You also need to specify the `parent`, which is the interface the traffic will physically go through on the Docker host. -```bash +```console $ docker network create -d macvlan \ --subnet=172.16.86.0/24 \ --gateway=172.16.86.1 \ @@ -56,7 +56,7 @@ $ docker network create -d macvlan \ If you need to exclude IP addresses from being used in the `macvlan` network, such as when a given IP address is already in use, use `--aux-addresses`: -```bash +```console $ docker network create -d macvlan \ --subnet=192.168.32.0/24 \ --ip-range=192.168.32.128/25 \ @@ -71,7 +71,7 @@ If you specify a `parent` interface name with a dot included, such as `eth0.50`, Docker interprets that as a sub-interface of `eth0` and creates the sub-interface automatically. 
-```bash +```console $ docker network create -d macvlan \ --subnet=192.168.50.0/24 \ --gateway=192.168.50.1 \ @@ -83,7 +83,7 @@ $ docker network create -d macvlan \ In the above example, you are still using a L3 bridge. You can use `ipvlan` instead, and get an L2 bridge. Specify `-o ipvlan_mode=l2`. -```bash +```console $ docker network create -d ipvlan \ --subnet=192.168.210.0/24 \ --subnet=192.168.212.0/24 \ @@ -97,7 +97,7 @@ $ docker network create -d ipvlan \ If you have [configured the Docker daemon to allow IPv6](../config/daemon/ipv6.md), you can use dual-stack IPv4/IPv6 `macvlan` networks. -```bash +```console $ docker network create -d macvlan \ --subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \ --gateway=192.168.216.1 --gateway=192.168.218.1 \ diff --git a/network/network-tutorial-host.md b/network/network-tutorial-host.md index 7b5441aeea..ac8714ecce 100644 --- a/network/network-tutorial-host.md +++ b/network/network-tutorial-host.md @@ -30,8 +30,8 @@ host. 1. Create and start the container as a detached process. The `--rm` option means to remove the container once it exits/stops. The `-d` flag means to start the container detached (in the background). - ```bash - docker run --rm -d --network host --name my_nginx nginx + ```console + $ docker run --rm -d --network host --name my_nginx nginx ``` 2. Access Nginx by browsing to @@ -41,16 +41,16 @@ host. - Examine all network interfaces and verify that a new one was not created. - ```bash - ip addr show + ```console + $ ip addr show ``` - Verify which process is bound to port 80, using the `netstat` command. You need to use `sudo` because the process is owned by the Docker daemon user and you otherwise won't be able to see its name or PID. - ```bash - sudo netstat -tulpn | grep :80 + ```console + $ sudo netstat -tulpn | grep :80 ``` 4. Stop the container. It will be removed automatically as it was started using the `--rm` option. 
diff --git a/network/network-tutorial-macvlan.md b/network/network-tutorial-macvlan.md index 8ef143f5a2..b1a9d6cf1e 100644 --- a/network/network-tutorial-macvlan.md +++ b/network/network-tutorial-macvlan.md @@ -39,7 +39,7 @@ on your network, your container appears to be physically attached to the network 1. Create a `macvlan` network called `my-macvlan-net`. Modify the `subnet`, `gateway`, and `parent` values to values that make sense in your environment. - ```bash + ```console $ docker network create -d macvlan \ --subnet=172.16.86.0/24 \ --gateway=172.16.86.1 \ @@ -54,7 +54,7 @@ on your network, your container appears to be physically attached to the network `-dit` flags start the container in the background but allow you to attach to it. The `--rm` flag means the container is removed when it is stopped. - ```bash + ```console $ docker run --rm -dit \ --network my-macvlan-net \ --name my-macvlan-alpine \ @@ -94,7 +94,7 @@ on your network, your container appears to be physically attached to the network 4. Check out how the container sees its own network interfaces by running a couple of `docker exec` commands. - ```bash + ```console $ docker exec my-macvlan-alpine ip addr show eth0 9: eth0@tunl0: mtu 1500 qdisc noqueue state UP @@ -103,7 +103,7 @@ on your network, your container appears to be physically attached to the network valid_lft forever preferred_lft forever ``` - ```bash + ```console $ docker exec my-macvlan-alpine ip route default via 172.16.86.1 dev eth0 @@ -113,7 +113,7 @@ on your network, your container appears to be physically attached to the network 5. Stop the container (Docker removes it because of the `--rm` flag), and remove the network. - ```bash + ```console $ docker container stop my-macvlan-alpine $ docker network rm my-macvlan-net @@ -130,7 +130,7 @@ be physically attached to the network. `subnet`, `gateway`, and `parent` values to values that make sense in your environment. 
- ```bash + ```console $ docker network create -d macvlan \ --subnet=172.16.86.0/24 \ --gateway=172.16.86.1 \ @@ -148,7 +148,7 @@ be physically attached to the network. you to attach to it. The `--rm` flag means the container is removed when it is stopped. - ```bash + ```console $ docker run --rm -itd \ --network my-8021q-macvlan-net \ --name my-second-macvlan-alpine \ @@ -188,7 +188,7 @@ be physically attached to the network. 4. Check out how the container sees its own network interfaces by running a couple of `docker exec` commands. - ```bash + ```console $ docker exec my-second-macvlan-alpine ip addr show eth0 11: eth0@if10: mtu 1500 qdisc noqueue state UP @@ -197,7 +197,7 @@ be physically attached to the network. valid_lft forever preferred_lft forever ``` - ```bash + ```console $ docker exec my-second-macvlan-alpine ip route default via 172.16.86.1 dev eth0 @@ -207,7 +207,7 @@ be physically attached to the network. 5. Stop the container (Docker removes it because of the `--rm` flag), and remove the network. - ```bash + ```console $ docker container stop my-second-macvlan-alpine $ docker network rm my-8021q-macvlan-net diff --git a/network/network-tutorial-overlay.md b/network/network-tutorial-overlay.md index 279df40a18..8fb9ae0675 100644 --- a/network/network-tutorial-overlay.md +++ b/network/network-tutorial-overlay.md @@ -74,7 +74,7 @@ and will be connected together using an overlay network called `ingress`. 1. On `manager`, initialize the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional. - ```bash + ```console $ docker swarm init --advertise-addr= ``` @@ -85,7 +85,7 @@ and will be connected together using an overlay network called `ingress`. 2. On `worker-1`, join the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional.
- ```bash + ```console $ docker swarm join --token \ --advertise-addr \ :2377 @@ -94,7 +94,7 @@ and will be connected together using an overlay network called `ingress`. 3. On `worker-2`, join the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional. - ```bash + ```console $ docker swarm join --token \ --advertise-addr \ :2377 @@ -103,7 +103,7 @@ and will be connected together using an overlay network called `ingress`. 4. On `manager`, list all the nodes. This command can only be done from a manager. - ```bash + ```console $ docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS @@ -114,7 +114,7 @@ and will be connected together using an overlay network called `ingress`. You can also use the `--filter` flag to filter by role: - ```bash + ```console $ docker node ls --filter role=manager ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS @@ -132,7 +132,7 @@ and will be connected together using an overlay network called `ingress`. network called `docker_gwbridge`. Only the listing for `manager` is shown here: - ```bash + ```console $ docker network ls NETWORK ID NAME DRIVER SCOPE @@ -155,7 +155,7 @@ connect a service to each of them. 1. On `manager`, create a new overlay network called `nginx-net`: - ```bash + ```console $ docker network create -d overlay nginx-net ``` @@ -169,7 +169,7 @@ connect a service to each of them. > **Note**: Services can only be created on a manager. - ```bash + ```console $ docker service create \ --name my-nginx \ --publish target=80,published=80 \ @@ -204,11 +204,11 @@ connect a service to each of them. 6. Create a new network `nginx-net-2`, then update the service to use this network instead of `nginx-net`: - ```bash + ```console $ docker network create -d overlay nginx-net-2 ``` - ```bash + ```console $ docker service update \ --network-add nginx-net-2 \ --network-rm nginx-net \ @@ -228,7 +228,7 @@ connect a service to each of them. commands. 
The manager will direct the workers to remove the networks automatically. - ```bash + ```console $ docker service rm my-nginx $ docker network rm nginx-net nginx-net-2 ``` @@ -243,14 +243,14 @@ This tutorial assumes the swarm is already set up and you are on a manager. 1. Create the user-defined overlay network. - ```bash + ```console $ docker network create -d overlay my-overlay ``` 2. Start a service using the overlay network and publishing port 80 to port 8080 on the Docker host. - ```bash + ```console $ docker service create \ --name my-nginx \ --network my-overlay \ @@ -264,7 +264,7 @@ This tutorial assumes the swarm is already set up and you are on a manager. 4. Remove the service and the network. - ```bash + ```console $ docker service rm my-nginx $ docker network rm my-overlay @@ -311,7 +311,7 @@ example also uses Linux hosts, but the same commands work on Windows. hosts in the swarm, for instance, the private IP address on AWS): - ```bash + ```console $ docker swarm init Swarm initialized: current node (vz1mm9am11qcmo979tlrlox42) is now a manager. @@ -324,7 +324,7 @@ example also uses Linux hosts, but the same commands work on Windows. b. On `host2`, join the swarm as instructed above: - ```bash + ```console $ docker swarm join --token :2377 This node joined a swarm as a worker. ``` @@ -335,7 +335,7 @@ example also uses Linux hosts, but the same commands work on Windows. 2. On `host1`, create an attachable overlay network called `test-net`: - ```bash + ```console $ docker network create --driver=overlay --attachable test-net uqsof8phj3ak0rq9k86zta6ht ``` @@ -344,14 +344,14 @@ example also uses Linux hosts, but the same commands work on Windows. 3. On `host1`, start an interactive (`-it`) container (`alpine1`) that connects to `test-net`: - ```bash + ```console $ docker run -it --name alpine1 --network test-net alpine / # ``` 4. 
On `host2`, list the available networks -- notice that `test-net` does not yet exist: - ```bash + ```console $ docker network ls NETWORK ID NAME DRIVER SCOPE ec299350b504 bridge bridge local @@ -363,7 +363,7 @@ example also uses Linux hosts, but the same commands work on Windows. 5. On `host2`, start a detached (`-d`) and interactive (`-it`) container (`alpine2`) that connects to `test-net`: - ```bash + ```console $ docker run -dit --name alpine2 --network test-net alpine fb635f5ece59563e7b8b99556f816d24e6949a5f6a5b1fbd92ca244db17a4342 ``` @@ -372,7 +372,7 @@ example also uses Linux hosts, but the same commands work on Windows. 6. On `host2`, verify that `test-net` was created (and has the same NETWORK ID as `test-net` on `host1`): - ```bash + ```console $ docker network ls NETWORK ID NAME DRIVER SCOPE ... @@ -381,7 +381,7 @@ example also uses Linux hosts, but the same commands work on Windows. 7. On `host1`, ping `alpine2` within the interactive terminal of `alpine1`: - ```bash + ```console / # ping -c 2 alpine2 PING alpine2 (10.0.0.5): 56 data bytes 64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.600 ms @@ -405,7 +405,7 @@ example also uses Linux hosts, but the same commands work on Windows. 8. On `host1`, close the `alpine1` session (which also stops the container): - ```bash + ```console / # exit ``` @@ -418,7 +418,7 @@ example also uses Linux hosts, but the same commands work on Windows. a. On `host2`, stop `alpine2`, check that `test-net` was removed, then remove `alpine2`: - ```bash + ```console $ docker container stop alpine2 $ docker network ls $ docker container rm alpine2 @@ -426,7 +426,7 @@ example also uses Linux hosts, but the same commands work on Windows. b. On `host1`, remove `alpine1` and `test-net`: - ```bash + ```console $ docker container rm alpine1 $ docker network rm test-net ``` @@ -442,7 +442,7 @@ need to have Docker installed and running. swarm on this Docker daemon.
You may see different networks, but you should at least see these (the network IDs will be different): - ```bash + ```console $ docker network ls NETWORK ID NAME DRIVER SCOPE @@ -465,7 +465,7 @@ need to have Docker installed and running. container's ID will be printed. Because you have not specified any `--network` flags, the containers connect to the default `bridge` network. - ```bash + ```console $ docker run -dit --name alpine1 alpine ash $ docker run -dit --name alpine2 alpine ash @@ -473,7 +473,7 @@ need to have Docker installed and running. Check that both containers are actually started: - ```bash + ```console $ docker container ls CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -483,7 +483,7 @@ need to have Docker installed and running. 3. Inspect the `bridge` network to see what containers are connected to it. - ```bash + ```console $ docker network inspect bridge [ @@ -544,7 +544,7 @@ need to have Docker installed and running. 4. The containers are running in the background. Use the `docker attach` command to connect to `alpine1`. - ```bash + ```console $ docker attach alpine1 / # @@ -554,7 +554,7 @@ need to have Docker installed and running. the container. Use the `ip addr show` command to show the network interfaces for `alpine1` as they look from within the container: - ```bash + ```console # ip addr show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 @@ -579,7 +579,7 @@ need to have Docker installed and running. pinging `google.com`. The `-c 2` flag limits the command to two `ping` attempts. - ```bash + ```console # ping -c 2 google.com PING google.com (172.217.3.174): 56 data bytes @@ -594,7 +594,7 @@ need to have Docker installed and running. 6. Now try to ping the second container. First, ping it by its IP address, `172.17.0.3`: - ```bash + ```console # ping -c 2 172.17.0.3 PING 172.17.0.3 (172.17.0.3): 56 data bytes @@ -609,7 +609,7 @@ need to have Docker installed and running. This succeeds.
Next, try pinging the `alpine2` container by container name. This will fail. - ```bash + ```console # ping -c 2 alpine2 ping: bad address 'alpine2' @@ -622,7 +622,7 @@ need to have Docker installed and running. 8. Stop and remove both containers. - ```bash + ```console $ docker container stop alpine1 alpine2 $ docker container rm alpine1 alpine2 ``` diff --git a/network/network-tutorial-standalone.md b/network/network-tutorial-standalone.md index 0395db1f30..bc8143fb42 100644 --- a/network/network-tutorial-standalone.md +++ b/network/network-tutorial-standalone.md @@ -37,7 +37,7 @@ need to have Docker installed and running. swarm on this Docker daemon. You may see different networks, but you should at least see these (the network IDs will be different): - ```bash + ```console $ docker network ls NETWORK ID NAME DRIVER SCOPE @@ -60,7 +60,7 @@ need to have Docker installed and running. container's ID will be printed. Because you have not specified any `--network` flags, the containers connect to the default `bridge` network. - ```bash + ```console $ docker run -dit --name alpine1 alpine ash $ docker run -dit --name alpine2 alpine ash @@ -68,7 +68,7 @@ need to have Docker installed and running. Check that both containers are actually started: - ```bash + ```console $ docker container ls CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -78,7 +78,7 @@ need to have Docker installed and running. 3. Inspect the `bridge` network to see what containers are connected to it. - ```bash + ```console $ docker network inspect bridge [ @@ -139,7 +139,7 @@ need to have Docker installed and running. 4. The containers are running in the background. Use the `docker attach` command to connect to `alpine1`. - ```bash + ```console $ docker attach alpine1 / # @@ -149,7 +149,7 @@ need to have Docker installed and running. the container. 
Use the `ip addr show` command to show the network interfaces for `alpine1` as they look from within the container: - ```bash + ```console # ip addr show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 @@ -174,7 +174,7 @@ need to have Docker installed and running. pinging `google.com`. The `-c 2` flag limits the command to two `ping` attempts. - ```bash + ```console # ping -c 2 google.com PING google.com (172.217.3.174): 56 data bytes @@ -189,7 +189,7 @@ need to have Docker installed and running. 6. Now try to ping the second container. First, ping it by its IP address, `172.17.0.3`: - ```bash + ```console # ping -c 2 172.17.0.3 PING 172.17.0.3 (172.17.0.3): 56 data bytes @@ -204,7 +204,7 @@ need to have Docker installed and running. This succeeds. Next, try pinging the `alpine2` container by container name. This will fail. - ```bash + ```console # ping -c 2 alpine2 ping: bad address 'alpine2' @@ -217,7 +217,7 @@ need to have Docker installed and running. 8. Stop and remove both containers. - ```bash + ```console $ docker container stop alpine1 alpine2 $ docker container rm alpine1 alpine2 ``` @@ -238,13 +238,13 @@ connected to both networks. 1. Create the `alpine-net` network. You do not need the `--driver bridge` flag since it's the default, but this example shows how to specify it. - ```bash + ```console $ docker network create --driver bridge alpine-net ``` 2. List Docker's networks: - ```bash + ```console $ docker network ls NETWORK ID NAME DRIVER SCOPE @@ -257,7 +257,7 @@ connected to both networks. Inspect the `alpine-net` network. This shows you its IP address and the fact that no containers are connected to it: - ```bash + ```console $ docker network inspect alpine-net [ @@ -296,7 +296,7 @@ connected to both networks. `docker network connect` afterward to connect `alpine4` to the `bridge` network as well. 
- ```bash + ```console $ docker run -dit --name alpine1 --network alpine-net alpine ash $ docker run -dit --name alpine2 --network alpine-net alpine ash @@ -310,7 +310,7 @@ connected to both networks. Verify that all containers are running: - ```bash + ```console $ docker container ls CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -322,7 +322,7 @@ connected to both networks. 4. Inspect the `bridge` network and the `alpine-net` network again: - ```bash + ```console $ docker network inspect bridge [ @@ -376,7 +376,7 @@ connected to both networks. Containers `alpine3` and `alpine4` are connected to the `bridge` network. - ```bash + ```console $ docker network inspect alpine-net [ @@ -437,7 +437,7 @@ connected to both networks. connect to `alpine1` and test this out. `alpine1` should be able to resolve `alpine2` and `alpine4` (and `alpine1`, itself) to IP addresses. - ```bash + ```console $ docker container attach alpine1 # ping -c 2 alpine2 @@ -474,7 +474,7 @@ connected to both networks. 6. From `alpine1`, you should not be able to connect to `alpine3` at all, since it is not on the `alpine-net` network. - ```bash + ```console # ping -c 2 alpine3 ping: bad address 'alpine3' @@ -485,7 +485,7 @@ connected to both networks. `bridge` network and find `alpine3`'s IP address: `172.17.0.2` Try to ping it. - ```bash + ```console # ping -c 2 172.17.0.2 PING 172.17.0.2 (172.17.0.2): 56 data bytes @@ -502,7 +502,7 @@ connected to both networks. However, you will need to address `alpine3` by its IP address. Attach to it and run the tests. - ```bash + ```console $ docker container attach alpine4 # ping -c 2 alpine1 @@ -556,7 +556,7 @@ connected to both networks. connect to `alpine1` (which is only connected to the `alpine-net` network) and try again. 
- ```bash + ```console # ping -c 2 google.com PING google.com (172.217.3.174): 56 data bytes diff --git a/network/none.md b/network/none.md index 328be64ac0..4434d38d05 100644 --- a/network/none.md +++ b/network/none.md @@ -10,7 +10,7 @@ only the loopback device is created. The following example illustrates this. 1. Create the container. - ```bash + ```console $ docker run --rm -dit \ --network none \ --name no-net-alpine \ @@ -21,7 +21,7 @@ only the loopback device is created. The following example illustrates this. 2. Check the container's network stack, by executing some common networking commands within the container. Notice that no `eth0` was created. - ```bash + ```console $ docker exec no-net-alpine ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 @@ -32,7 +32,7 @@ only the loopback device is created. The following example illustrates this. link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 ``` - ```bash + ```console $ docker exec no-net-alpine ip route ``` @@ -41,7 +41,7 @@ only the loopback device is created. The following example illustrates this. 3. Stop the container. It is removed automatically because it was created with the `--rm` flag. - ```bash + ```console $ docker stop no-net-alpine ``` diff --git a/network/overlay.md b/network/overlay.md index 597307d595..3df27fbc8f 100644 --- a/network/overlay.md +++ b/network/overlay.md @@ -61,7 +61,7 @@ apply to overlay networks used by standalone containers. 
To create an overlay network for use with swarm services, use a command like the following: -```bash +```console $ docker network create -d overlay my-overlay ``` @@ -69,7 +69,7 @@ To create an overlay network which can be used by swarm services **or** standalone containers to communicate with other standalone containers running on other Docker daemons, add the `--attachable` flag: -```bash +```console $ docker network create -d overlay --attachable my-attachable-overlay ``` @@ -105,7 +105,7 @@ automatically rotate the keys every 12 hours. You can combine the `--opt encrypted` and `--attachable` options, and attach unmanaged containers to that network: -```bash +```console $ docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network ``` @@ -133,7 +133,7 @@ services which publish ports, such as a WordPress service which publishes port 2. Remove the existing `ingress` network: - ```bash + ```console $ docker network rm ingress WARNING! Before removing the routing-mesh network, make sure all the nodes @@ -147,7 +147,7 @@ services which publish ports, such as a WordPress service which publishes port custom options you want to set. This example sets the MTU to 1200, sets the subnet to `10.11.0.0/16`, and sets the gateway to `10.11.0.2`. - ```bash + ```console $ docker network create \ --driver overlay \ --ingress \ @@ -177,7 +177,7 @@ from the swarm. 2. Delete the existing `docker_gwbridge` interface. - ```bash + ```console $ sudo ip link set docker_gwbridge down $ sudo ip link del dev docker_gwbridge @@ -190,7 +190,7 @@ from the swarm. This example uses the subnet `10.11.0.0/16`. For a full list of customizable options, see [Bridge driver options](../engine/reference/commandline/network_create.md#bridge-driver-options).
- ```bash + ```console $ docker network create \ --subnet 10.11.0.0/16 \ --opt com.docker.network.bridge.name=docker_gwbridge \ diff --git a/registry/deploying.md b/registry/deploying.md index 8253ed4040..c56277c75e 100644 --- a/registry/deploying.md +++ b/registry/deploying.md @@ -20,7 +20,7 @@ If you have an air-gapped datacenter, see Use a command like the following to start the registry container: -```bash +```console $ docker run -d -p 5000:5000 --restart=always --name registry registry:2 ``` @@ -42,7 +42,7 @@ as `my-ubuntu`, then pushes it to the local registry. Finally, the 1. Pull the `ubuntu:16.04` image from Docker Hub. - ```bash + ```console $ docker pull ubuntu:16.04 ``` @@ -50,13 +50,13 @@ as `my-ubuntu`, then pushes it to the local registry. Finally, the for the existing image. When the first part of the tag is a hostname and port, Docker interprets this as the location of a registry, when pushing. - ```bash + ```console $ docker tag ubuntu:16.04 localhost:5000/my-ubuntu ``` 3. Push the image to the local registry running at `localhost:5000`: - ```bash + ```console $ docker push localhost:5000/my-ubuntu ``` @@ -64,14 +64,14 @@ as `my-ubuntu`, then pushes it to the local registry. Finally, the images, so that you can test pulling the image from your registry. This does not remove the `localhost:5000/my-ubuntu` image from your registry. - ```bash + ```console $ docker image remove ubuntu:16.04 $ docker image remove localhost:5000/my-ubuntu ``` 5. Pull the `localhost:5000/my-ubuntu` image from your local registry. - ```bash + ```console $ docker pull localhost:5000/my-ubuntu ``` @@ -80,13 +80,13 @@ as `my-ubuntu`, then pushes it to the local registry. Finally, the To stop the registry, use the same `docker container stop` command as with any other container. -```bash +```console $ docker container stop registry ``` To remove the container, use `docker container rm`. 
-```bash +```console $ docker container stop registry && docker container rm -v registry ``` @@ -105,7 +105,7 @@ should set it to restart automatically when Docker restarts or if it exits. This example uses the `--restart always` flag to set a restart policy for the registry. -```bash +```console $ docker run -d \ -p 5000:5000 \ --restart=always \ @@ -122,7 +122,7 @@ port settings. This example runs the registry on port 5001 and also names it and the second part is the port within the container. Within the container, the registry listens on port `5000` by default. -```bash +```console $ docker run -d \ -p 5001:5000 \ --name registry-test \ @@ -133,7 +133,7 @@ If you want to change the port the registry listens on within the container, you can use the environment variable `REGISTRY_HTTP_ADDR` to change it. This command causes the registry to listen on port 5001 within the container: -```bash +```console $ docker run -d \ -e REGISTRY_HTTP_ADDR=0.0.0.0:5001 \ -p 5001:5001 \ @@ -154,7 +154,7 @@ is more dependent on the filesystem layout of the Docker host, but more performa in many situations. The following example bind-mounts the host directory `/mnt/registry` into the registry container at `/var/lib/registry/`. -```bash +```console $ docker run -d \ -p 5000:5000 \ --restart=always \ @@ -194,7 +194,7 @@ If you have been issued an _intermediate_ certificate instead, see 1. Create a `certs` directory. - ```bash + ```console $ mkdir -p certs ``` @@ -204,7 +204,7 @@ If you have been issued an _intermediate_ certificate instead, see 2. Stop the registry if it is currently running. - ```bash + ```console $ docker container stop registry ``` @@ -213,7 +213,7 @@ If you have been issued an _intermediate_ certificate instead, see environment variables that tell the container where to find the `domain.crt` and `domain.key` file. The registry runs on port 443, the default HTTPS port. 
- ```bash + ```console $ docker run -d \ --restart=always \ --name registry \ @@ -228,7 +228,7 @@ If you have been issued an _intermediate_ certificate instead, see 4. Docker clients can now pull from and push to your registry using its external address. The following commands demonstrate this: - ```bash + ```console $ docker pull ubuntu:16.04 $ docker tag ubuntu:16.04 myregistry.domain.com/my-ubuntu $ docker push myregistry.domain.com/my-ubuntu @@ -241,7 +241,7 @@ A certificate issuer may supply you with an *intermediate* certificate. In this case, you must concatenate your certificate with the intermediate certificate to form a *certificate bundle*. You can do this using the `cat` command: -```bash -cat domain.crt intermediate-certificates.pem > certs/domain.crt +```console +$ cat domain.crt intermediate-certificates.pem > certs/domain.crt ``` @@ -291,7 +291,7 @@ TLS certificates as in the previous examples. First, save the TLS certificate and key as secrets: -```bash +```console $ docker secret create domain.crt certs/domain.crt $ docker secret create domain.key certs/domain.key @@ -301,7 +301,7 @@ Next, add a label to the node where you want to run the registry. To get the node's name, use `docker node ls`. Substitute your node's name for `node1` below. -```bash +```console $ docker node update --label-add registry=true node1 ``` @@ -315,7 +315,7 @@ running the following `docker service create` command. By default, secrets are mounted into a service at `/run/secrets/`. -```bash +```console $ docker service create \ --name registry \ --secret domain.crt \ @@ -405,7 +405,7 @@ secrets. 1. Create a password file with one entry for the user `testuser`, with password `testpassword`: - ```bash + ```console $ mkdir auth $ docker run \ --entrypoint htpasswd \ @@ -420,13 +420,13 @@ secrets. 2. Stop the registry. - ```bash + ```console $ docker container stop registry ``` 3. Start the registry with basic authentication. - ```bash + ```console $ docker run -d \ -p 5000:5000 \ --restart=always \ @@ -446,7 +446,7 @@ secrets. 5.
Log in to the registry. - ```bash + ```console $ docker login myregistrydomain.com:5000 ``` @@ -505,7 +505,7 @@ directories. Start your registry by issuing the following command in the directory containing the `docker-compose.yml` file: -```bash +```console $ docker-compose up -d ``` diff --git a/registry/insecure.md b/registry/insecure.md index 3446a85f37..767278f6ae 100644 --- a/registry/insecure.md +++ b/registry/insecure.md @@ -63,7 +63,7 @@ This is more secure than the insecure registry solution. 1. Generate your own certificate: - ```bash + ```console $ mkdir -p certs $ openssl req \ @@ -130,21 +130,21 @@ certificate at the OS level. #### Ubuntu -```bash +```console $ cp certs/domain.crt /usr/local/share/ca-certificates/myregistrydomain.com.crt -update-ca-certificates +$ update-ca-certificates ``` #### Red Hat Enterprise Linux -```bash -cp certs/domain.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt +```console +$ cp certs/domain.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt -update-ca-trust +$ update-ca-trust ``` #### Oracle Linux -```bash +```console $ update-ca-trust enable ``` diff --git a/registry/recipes/nginx.md b/registry/recipes/nginx.md index 892e132a45..b4ba138f02 100644 --- a/registry/recipes/nginx.md +++ b/registry/recipes/nginx.md @@ -80,8 +80,8 @@ Review the [requirements](index.md#requirements), then follow these steps. 1. Create the required directories - ```bash - mkdir -p auth data + ```console + $ mkdir -p auth data ``` 2. Create the main nginx configuration. Paste this code block into a new file called `auth/nginx.conf`: @@ -154,7 +154,7 @@ Review the [requirements](index.md#requirements), then follow these steps. 3. Create a password file `auth/nginx.htpasswd` for "testuser" and "testpassword". - ```bash + ```console $ docker run --rm --entrypoint htpasswd registry:2 -Bbn testuser testpassword > auth/nginx.htpasswd ``` @@ -162,7 +162,7 @@ Review the [requirements](index.md#requirements), then follow these steps. 4.
Copy your certificate files to the `auth/` directory. - ```bash + ```console $ cp domain.crt auth $ cp domain.key auth ``` diff --git a/samples/apt-cacher-ng.md b/samples/apt-cacher-ng.md index 9af7e02e70..3716c49126 100644 --- a/samples/apt-cacher-ng.md +++ b/samples/apt-cacher-ng.md @@ -40,20 +40,20 @@ CMD chmod 777 /var/cache/apt-cacher-ng && /etc/init.d/apt-cacher-ng start && To build the image, use: -```bash +```console $ docker build -t eg_apt_cacher_ng . ``` Then run it, mapping the exposed port to one on the host: -```bash +```console $ docker run -d -p 3142:3142 --name test_apt_cacher_ng eg_apt_cacher_ng ``` To see the logfiles that are `tailed` in the default command, you can use: -```bash +```console $ docker logs -f test_apt_cacher_ng ``` @@ -86,7 +86,7 @@ RUN apt-get update && apt-get install -y vim git **Option 2** is good for testing, but breaks other HTTP clients which obey `http_proxy`, such as `curl`, `wget` and others: -```bash +```console $ docker run --rm -t -i -e http_proxy=http://dockerhost:3142/ debian bash ``` @@ -95,13 +95,13 @@ from your `Dockerfile` too.
**Option 4** links Debian-based containers to the proxy server using the following command: -```bash +```console $ docker run -i -t --link test_apt_cacher_ng:apt_proxy -e http_proxy=http://apt_proxy:3142/ debian bash ``` **Option 5** creates a custom network of APT proxy server and Debian-based containers: -```bash +```console $ docker network create mynetwork $ docker run -d -p 3142:3142 --network=mynetwork --name test_apt_cacher_ng eg_apt_cacher_ng $ docker run --rm -it --network=mynetwork -e http_proxy=http://test_apt_cacher_ng:3142/ debian bash @@ -111,7 +111,7 @@ Apt-cacher-ng has some tools that allow you to manage the repository, and they can be used by leveraging the `VOLUME` instruction, and the image we built to run the service: -```bash +```console $ docker run --rm -t -i --volumes-from test_apt_cacher_ng eg_apt_cacher_ng bash root@f38c87f2a42d:/# /usr/lib/apt-cacher-ng/distkill.pl @@ -137,7 +137,7 @@ WARNING: The removal action may wipe out whole directories containing Finally, clean up after your test by stopping and removing the container, and then removing the image. -```bash +```console $ docker container stop test_apt_cacher_ng $ docker container rm test_apt_cacher_ng $ docker image rm eg_apt_cacher_ng diff --git a/samples/aspnet-mssql-compose.md b/samples/aspnet-mssql-compose.md index 93c4d3cdcc..bde6c47888 100644 --- a/samples/aspnet-mssql-compose.md +++ b/samples/aspnet-mssql-compose.md @@ -35,7 +35,7 @@ configure this app to use our SQL Server database, and then create a sample web application within the container under the `/app` directory and into your host machine in the working directory: - ```bash + ```console $ docker run -v ${PWD}:/app --workdir /app microsoft/dotnet:2.1-sdk dotnet new mvc --auth Individual ``` @@ -170,7 +170,7 @@ configure this app to use our SQL Server database, and then create a 1. Ready! You can now run the `docker-compose build` command.
- ```bash + ```console $ docker-compose build ``` @@ -185,7 +185,7 @@ configure this app to use our SQL Server database, and then create a sample website. The application is listening on port 80 by default, but we mapped it to port 8000 in the `docker-compose.yml`. - ```bash + ```console $ docker-compose up ``` diff --git a/samples/couchdb_data_volumes.md b/samples/couchdb_data_volumes.md index 065993214d..0b879881cb 100644 --- a/samples/couchdb_data_volumes.md +++ b/samples/couchdb_data_volumes.md @@ -18,7 +18,7 @@ different versions of CouchDB on the same data, etc. We're marking `/var/lib/couchdb` as a data volume. -```bash +```console $ COUCH1=$(docker run -d -p 5984 -v /var/lib/couchdb shykes/couchdb:2013-05-03) ``` @@ -27,7 +27,7 @@ $ COUCH1=$(docker run -d -p 5984 -v /var/lib/couchdb shykes/couchdb:2013-05-03) We're assuming your Docker host is reachable at `localhost`. If not, replace `localhost` with the public IP of your Docker host. -```bash +```console $ HOST=localhost $ URL="http://$HOST:$(docker port $COUCH1 5984 | grep -o '[1-9][0-9]*$')/_utils/" $ echo "Navigate to $URL in your browser, and use the couch interface to add data" @@ -37,13 +37,13 @@ $ echo "Navigate to $URL in your browser, and use the couch interface to add dat This time, we're requesting shared access to `$COUCH1`'s volumes. -```bash +```console $ COUCH2=$(docker run -d -p 5984 --volumes-from $COUCH1 shykes/couchdb:2013-05-03) ``` ## Browse data on the second database -```bash +```console $ HOST=localhost $ URL="http://$HOST:$(docker port $COUCH2 5984 | grep -o '[1-9][0-9]*$')/_utils/" $ echo "Navigate to $URL in your browser. You should see the same data as in the first database"'!' 
diff --git a/samples/postgresql_service.md b/samples/postgresql_service.md
index ff0d65cdf0..38ca61765a 100644
--- a/samples/postgresql_service.md
+++ b/samples/postgresql_service.md
@@ -68,13 +68,13 @@ CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main"]

Build an image from the Dockerfile and assign it a name.

-```bash
+```console
$ docker build -t eg_postgresql .
```

Run the PostgreSQL server container (in the foreground):

-```bash
+```console
$ docker run --rm -P --name pg_test eg_postgresql
```

@@ -92,7 +92,7 @@ Containers can be linked to another container's ports directly using
`docker run`. This sets a number of environment variables that can then be
used to connect:

-```bash
+```console
$ docker run --rm -t -i --link pg_test:pg eg_postgresql bash

postgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password
@@ -105,7 +105,7 @@ host-mapped port to test as well. You need to use `docker ps` to find out what
local host port the container is mapped to first:

-```bash
+```console
$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@@ -142,7 +142,7 @@ $ docker=# select * from cities;
You can use the defined volumes to inspect the PostgreSQL log files and to
back up your configuration and data:

-```bash
+```console
$ docker run --rm --volumes-from pg_test -t -i busybox sh

/ # ls
diff --git a/samples/running_riak_service.md b/samples/running_riak_service.md
index 5a48d7c9a1..fcf83be464 100644
--- a/samples/running_riak_service.md
+++ b/samples/running_riak_service.md
@@ -13,7 +13,7 @@ Riak pre-installed.

Create an empty file called `Dockerfile`:

-```bash
+```console
$ touch Dockerfile
```

@@ -84,8 +84,8 @@ CMD ["/usr/bin/supervisord"]

Create an empty file called `supervisord.conf`.
Make sure it's at the same directory level as your `Dockerfile`: -```bash -touch supervisord.conf +```console +$ touch supervisord.conf ``` Populate it with the following program definitions: @@ -109,7 +109,7 @@ stderr_logfile=/var/log/supervisor/%(program_name)s.log Now you can build a Docker image for Riak: -```bash +```console $ docker build -t "/riak" . ``` diff --git a/storage/bind-mounts.md b/storage/bind-mounts.md index ca62e9fced..70968dd87c 100644 --- a/storage/bind-mounts.md +++ b/storage/bind-mounts.md @@ -103,7 +103,7 @@ first one.
-```bash +```console $ docker run -d \ -it \ --name devtest \ @@ -114,7 +114,7 @@ $ docker run -d \
-```bash +```console $ docker run -d \ -it \ --name devtest \ @@ -147,7 +147,7 @@ set to `rprivate`. Stop the container: -```bash +```console $ docker container stop devtest $ docker container rm devtest @@ -174,7 +174,7 @@ The `--mount` and `-v` examples have the same end result.
-```bash +```console $ docker run -d \ -it \ --name broken-container \ @@ -188,7 +188,7 @@ starting container process caused "exec: \"nginx\": executable file not found in
-```bash +```console $ docker run -d \ -it \ --name broken-container \ @@ -204,7 +204,7 @@ starting container process caused "exec: \"nginx\": executable file not found in The container is created but does not start. Remove it: -```bash +```console $ docker container rm broken-container ``` @@ -228,7 +228,7 @@ The `--mount` and `-v` examples have the same result.
-```bash +```console $ docker run -d \ -it \ --name devtest \ @@ -239,7 +239,7 @@ $ docker run -d \
-```bash +```console $ docker run -d \ -it \ --name devtest \ @@ -268,7 +268,7 @@ correctly. Look for the `Mounts` section: Stop the container: -```bash +```console $ docker container stop devtest $ docker container rm devtest @@ -316,7 +316,7 @@ The `--mount` and `-v` examples have the same result.
-```bash +```console $ docker run -d \ -it \ --name devtest \ @@ -328,7 +328,7 @@ $ docker run -d \
-```bash +```console $ docker run -d \ -it \ --name devtest \ @@ -367,7 +367,7 @@ the bind mount's contents: It is not possible to modify the selinux label using the `--mount` flag. -```bash +```console $ docker run -d \ -it \ --name devtest \ diff --git a/storage/storagedriver/aufs-driver.md b/storage/storagedriver/aufs-driver.md index f7d681fff4..c49c575cf9 100644 --- a/storage/storagedriver/aufs-driver.md +++ b/storage/storagedriver/aufs-driver.md @@ -37,7 +37,7 @@ storage driver is configured, Docker uses it by default. 1. Use the following command to verify that your kernel supports AUFS. - ```bash + ```console $ grep aufs /proc/filesystems nodev aufs @@ -45,7 +45,7 @@ storage driver is configured, Docker uses it by default. 2. Check which storage driver Docker is using. - ```bash + ```console $ docker info @@ -88,7 +88,7 @@ minimize overhead. The following `docker pull` command shows a Docker host downloading a Docker image comprising five layers. -```bash +```console $ docker pull ubuntu Using default tag: latest diff --git a/storage/storagedriver/btrfs-driver.md b/storage/storagedriver/btrfs-driver.md index 3ddec45559..397c3706e2 100644 --- a/storage/storagedriver/btrfs-driver.md +++ b/storage/storagedriver/btrfs-driver.md @@ -41,7 +41,7 @@ Btrfs Filesystem as Btrfs. - `btrfs` support must exist in your kernel. To check this, run the following command: - ```bash + ```console $ grep btrfs /proc/filesystems btrfs @@ -60,7 +60,7 @@ This procedure is essentially identical on SLES and Ubuntu. 2. Copy the contents of `/var/lib/docker/` to a backup location, then empty the contents of `/var/lib/docker/`: - ```bash + ```console $ sudo cp -au /var/lib/docker /var/lib/docker.bk $ sudo rm -rf /var/lib/docker/* ``` @@ -70,7 +70,7 @@ This procedure is essentially identical on SLES and Ubuntu. `/dev/xvdg`. Double-check the block device names because this is a destructive operation. 
- ```bash + ```console $ sudo mkfs.btrfs -f /dev/xvdf /dev/xvdg ``` @@ -80,7 +80,7 @@ This procedure is essentially identical on SLES and Ubuntu. 4. Mount the new Btrfs filesystem on the `/var/lib/docker/` mount point. You can specify any of the block devices used to create the Btrfs filesystem. - ```bash + ```console $ sudo mount -t btrfs /dev/xvdf /var/lib/docker ``` @@ -89,7 +89,7 @@ This procedure is essentially identical on SLES and Ubuntu. 5. Copy the contents of `/var/lib/docker.bk` to `/var/lib/docker/`. - ```bash + ```console $ sudo cp -au /var/lib/docker.bk/* /var/lib/docker/ ``` @@ -112,7 +112,7 @@ This procedure is essentially identical on SLES and Ubuntu. 7. Start Docker. After it is running, verify that `btrfs` is being used as the storage driver. - ```bash + ```console $ docker info Containers: 0 @@ -140,7 +140,7 @@ roughly 1 GB. To add a block device to a Btrfs volume, use the `btrfs device add` and `btrfs filesystem balance` commands. -```bash +```console $ sudo btrfs device add /dev/svdh /var/lib/docker $ sudo btrfs filesystem balance /var/lib/docker diff --git a/storage/storagedriver/device-mapper-driver.md b/storage/storagedriver/device-mapper-driver.md index b429b50700..790492b644 100644 --- a/storage/storagedriver/device-mapper-driver.md +++ b/storage/storagedriver/device-mapper-driver.md @@ -57,7 +57,7 @@ For production systems, see 1. Stop Docker. - ```bash + ```console $ sudo systemctl stop docker ``` @@ -77,14 +77,14 @@ For production systems, see 3. Start Docker. - ```bash + ```console $ sudo systemctl start docker ``` 4. Verify that the daemon is using the `devicemapper` storage driver. Use the `docker info` command and look for `Storage Driver`. - ```bash + ```console $ docker info Containers: 0 @@ -204,7 +204,7 @@ assumes that the Docker daemon is in the `stopped` state. 2. Stop Docker. - ```bash + ```console $ sudo systemctl stop docker ``` @@ -222,7 +222,7 @@ assumes that the Docker daemon is in the `stopped` state. 
> **Warning**: The next few steps are destructive, so be sure that you have > specified the correct device! - ```bash + ```console $ sudo pvcreate /dev/xvdf Physical volume "/dev/xvdf" successfully created. @@ -231,7 +231,7 @@ assumes that the Docker daemon is in the `stopped` state. 5. Create a `docker` volume group on the same device, using the `vgcreate` command. - ```bash + ```console $ sudo vgcreate docker /dev/xvdf Volume group "docker" successfully created @@ -242,7 +242,7 @@ assumes that the Docker daemon is in the `stopped` state. to allow for automatic expanding of the data or metadata if space runs low, as a temporary stop-gap. These are the recommended values. - ```bash + ```console $ sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG Logical volume "thinpool" created. @@ -255,7 +255,7 @@ assumes that the Docker daemon is in the `stopped` state. 7. Convert the volumes to a thin pool and a storage location for metadata for the thin pool, using the `lvconvert` command. - ```bash + ```console $ sudo lvconvert -y \ --zero n \ -c 512K \ @@ -270,7 +270,7 @@ assumes that the Docker daemon is in the `stopped` state. 8. Configure autoextension of thin pools via an `lvm` profile. - ```bash + ```console $ sudo vi /etc/lvm/profile/docker-thinpool.profile ``` @@ -297,7 +297,7 @@ assumes that the Docker daemon is in the `stopped` state. 10. Apply the LVM profile, using the `lvchange` command. - ```bash + ```console $ sudo lvchange --metadataprofile docker-thinpool docker/thinpool Logical volume docker/thinpool changed. @@ -305,7 +305,7 @@ assumes that the Docker daemon is in the `stopped` state. 11. Ensure monitoring of the logical volume is enabled. - ```bash + ```console $ sudo lvs -o+seg_monitor LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Monitor @@ -317,7 +317,7 @@ assumes that the Docker daemon is in the `stopped` state. 
this step, automatic extension of the logical volume will not occur, regardless of any settings in the applied profile. - ```bash + ```console $ sudo lvchange --monitor y docker/thinpool ``` @@ -329,7 +329,7 @@ assumes that the Docker daemon is in the `stopped` state. exists, move it out of the way so that Docker can use the new LVM pool to store the contents of image and containers. - ```bash + ```console $ sudo su - # mkdir /var/lib/docker.bk # mv /var/lib/docker/* /var/lib/docker.bk @@ -358,19 +358,19 @@ assumes that the Docker daemon is in the `stopped` state. **systemd**: - ```bash + ```console $ sudo systemctl start docker ``` **service**: - ```bash + ```console $ sudo service docker start ``` 15. Verify that Docker is using the new configuration using `docker info`. - ```bash + ```console $ docker info Containers: 0 @@ -407,7 +407,7 @@ assumes that the Docker daemon is in the `stopped` state. 16. After you have verified that the configuration is correct, you can remove the `/var/lib/docker.bk` directory which contains the previous configuration. - ```bash + ```console $ sudo rm -rf /var/lib/docker.bk ``` @@ -422,7 +422,7 @@ tool at the OS level, such as Nagios. To view the LVM logs, you can use `journalctl`: -```bash +```console $ sudo journalctl -fu dm-event.service ``` @@ -464,7 +464,7 @@ If you do not want to use `device_tool`, you can [resize the thin pool manually] 2. Use the tool. The following example resizes the thin pool to 200GB. - ```bash + ```console $ ./device_tool resize 200GB ``` @@ -480,7 +480,7 @@ it has significant performance and stability drawbacks. If you are using `loop-lvm` mode, the output of `docker info` shows file paths for `Data loop file` and `Metadata loop file`: -```bash +```console $ docker info |grep 'loop file' Data loop file: /var/lib/docker/devicemapper/data @@ -492,7 +492,7 @@ thin pool is 100 GB, and is increased to 200 GB. 1. List the sizes of the devices. 
- ```bash + ```console $ sudo ls -lh /var/lib/docker/devicemapper/ total 1175492 @@ -504,13 +504,13 @@ thin pool is 100 GB, and is increased to 200 GB. which is used to increase **or** decrease the size of a file. Note that decreasing the size is a destructive operation. - ```bash + ```console $ sudo truncate -s 200G /var/lib/docker/devicemapper/data ``` 3. Verify the file size changed. - ```bash + ```console $ sudo ls -lh /var/lib/docker/devicemapper/ total 1.2G @@ -522,7 +522,7 @@ thin pool is 100 GB, and is increased to 200 GB. the loopback device in memory, in GB. Reload it, then list the size again. After the reload, the size is 200 GB. - ```bash + ```console $ echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ] 100 @@ -578,7 +578,7 @@ block device and other parameters to suit your situation. Use the `pvdisplay` command to find the physical block devices currently in use by your thin pool, and the volume group's name. - ```bash + ```console $ sudo pvdisplay |grep 'VG Name' PV Name /dev/xvdf @@ -591,7 +591,7 @@ block device and other parameters to suit your situation. 2. Extend the volume group, using the `vgextend` command with the `VG Name` from the previous step, and the name of your **new** block device. - ```bash + ```console $ sudo vgextend docker /dev/xvdg Physical volume "/dev/xvdg" successfully created. @@ -602,7 +602,7 @@ block device and other parameters to suit your situation. volume right away, without auto-extend. To extend the metadata thinpool instead, use `docker/thinpool_tmeta`. - ```bash + ```console $ sudo lvextend -l+100%FREE -n docker/thinpool Size of logical volume docker/thinpool_tdata changed from 95.00 GiB (24319 extents) to 198.00 GiB (50688 extents). @@ -636,8 +636,8 @@ If you reboot the host and find that the `docker` service failed to start, look for the error, "Non existing device". 
You need to re-activate the logical volumes with this command: -```bash -sudo lvchange -ay docker/thinpool +```console +$ sudo lvchange -ay docker/thinpool ``` ## How the `devicemapper` storage driver works @@ -648,7 +648,7 @@ sudo lvchange -ay docker/thinpool Use the `lsblk` command to see the devices and their pools, from the operating system's point of view: -```bash +```console $ sudo lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT @@ -663,7 +663,7 @@ xvdf 202:80 0 100G 0 disk Use the `mount` command to see the mount-point Docker is using: -```bash +```console $ mount |grep devicemapper /dev/xvda1 on /var/lib/docker/devicemapper type xfs (rw,relatime,seclabel,attr2,inode64,noquota) ``` diff --git a/storage/storagedriver/index.md b/storage/storagedriver/index.md index 90df885614..45e75eb3d8 100644 --- a/storage/storagedriver/index.md +++ b/storage/storagedriver/index.md @@ -14,14 +14,22 @@ stores images, and how these images are used by containers. You can use this information to make informed choices about the best way to persist data from your applications and avoid performance problems along the way. -Storage drivers allow you to create data in the writable layer of your container. -The files won't be persisted after the container is deleted, and both read and -write speeds are lower than native file system performance. +## Storage drivers versus Docker volumes - > **Note**: Operations that are known to be problematic include write-intensive database storage, -particularly when pre-existing data exists in the read-only layer. More details are provided in this document. +Docker uses storage drivers to store image layers, and to store data in the +writable layer of a container. The container's writable layer does not persist +after the container is deleted, but is suitable for storing ephemeral data that +is generated at runtime. 
Storage drivers are optimized for space efficiency, but
+(depending on the storage driver) write speeds are lower than native file system
+performance, especially for storage drivers that use a copy-on-write filesystem.
+Write-intensive applications, such as database storage, are impacted by a
+performance overhead, particularly if pre-existing data exists in the read-only
+layer.

-[Learn how to use volumes](../volumes.md) to persist data and improve performance.
+Use Docker volumes for write-intensive data, data that must persist beyond the
+container's lifespan, and data that must be shared between containers. Refer to
+the [volumes section](../volumes.md) to learn how to use volumes to persist data
+and improve performance.

## Images and layers

@@ -32,24 +40,37 @@ read-only. Consider the following Dockerfile:

```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:18.04
+LABEL org.opencontainers.image.authors="org@example.com"
COPY . /app
RUN make /app
+RUN rm -r $HOME/.cache
CMD python /app/app.py
```

-This Dockerfile contains four commands, each of which creates a layer. The
-`FROM` statement starts out by creating a layer from the `ubuntu:18.04` image.
-The `COPY` command adds some files from your Docker client's current directory.
-The `RUN` command builds your application using the `make` command. Finally,
-the last layer specifies what command to run within the container.
+This Dockerfile contains six commands. Commands that modify the filesystem create
+a layer. The `FROM` statement starts out by creating a layer from the `ubuntu:18.04`
+image. The `LABEL` command only modifies the image's metadata, and does not produce
+a new layer. The `COPY` command adds some files from your Docker client's current
+directory. The first `RUN` command builds your application using the `make` command,
+and writes the result to a new layer. The second `RUN` command removes a cache
+directory, and writes the result to a new layer. Finally, the `CMD` instruction
+specifies what command to run within the container, which only modifies the
+image's metadata and does not produce an image layer.

-Each layer is only a set of differences from the layer before it. The layers are
-stacked on top of each other. When you create a new container, you add a new
-writable layer on top of the underlying layers. This layer is often called the
-"container layer". All changes made to the running container, such as writing
-new files, modifying existing files, and deleting files, are written to this thin
-writable container layer. The diagram below shows a container based on the Ubuntu
-15.04 image.
+Each layer is only a set of differences from the layer before it. Note that both
+_adding_ and _removing_ files will result in a new layer. In the example above,
+the `$HOME/.cache` directory is removed, but is still available in the
+previous layer and adds to the image's total size. Refer to the
+[Best practices for writing Dockerfiles](../../develop/develop-images/dockerfile_best-practices.md)
+and [use multi-stage builds](../../develop/develop-images/multistage-build.md)
+sections to learn how to optimize your Dockerfiles for efficient images.
+
+The layers are stacked on top of each other. When you create a new container,
+you add a new writable layer on top of the underlying layers. This layer is often
+called the "container layer". All changes made to the running container, such as
+writing new files, modifying existing files, and deleting files, are written to
+this thin writable container layer. The diagram below shows a container based
+on an `ubuntu:15.04` image.

![Layers of a container based on the Ubuntu image](images/container-layers.jpg)

@@ -71,15 +92,18 @@ multiple containers sharing the same Ubuntu 15.04 image.
![Containers sharing same image](images/sharing-layers.jpg) -> **Note**: If you need multiple images to have shared access to the exact -> same data, store this data in a Docker volume and mount it into your -> containers. - Docker uses storage drivers to manage the contents of the image layers and the writable container layer. Each storage driver handles the implementation differently, but all drivers use stackable image layers and the copy-on-write (CoW) strategy. +> **Note** +> +> Use Docker volumes if you need multiple containers to have shared access to +> the exact same data. Refer to the [volumes section](../volumes.md) to learn +> about volumes. + + ## Container size on disk To view the approximate size of a running container, you can use the `docker ps -s` @@ -87,7 +111,6 @@ command. Two different columns relate to size. - `size`: the amount of data (on disk) that is used for the writable layer of each container. - - `virtual size`: the amount of data used for the read-only image data used by the container plus the container's writable layer `size`. Multiple containers may share some or all read-only @@ -106,9 +129,9 @@ these containers would be SUM (`size` of containers) plus one image size This also does not count the following additional ways a container can take up disk space: -- Disk space used for log files if you use the `json-file` logging driver. This - can be non-trivial if your container generates a large amount of logging data - and log rotation is not configured. +- Disk space used for log files stored by the [logging-driver](../../config/containers/logging/index.md). + This can be non-trivial if your container generates a large amount of logging + data and log rotation is not configured. - Volumes and bind mounts used by the container. - Disk space used for the container's configuration files, which are typically small. 
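The sizing rule above (`SUM` of the containers' writable-layer sizes, plus one copy of the shared image data) can be sketched with shell arithmetic. The numbers are assumed examples for illustration (a 5 MB writable layer per container and a 200 MB shared image), not measurements:

```console
$ containers=100 size_mb=5 image_mb=200
$ echo "$(( containers * size_mb + image_mb )) MB"
700 MB
```

Because the read-only image data is shared, the image size is counted once, while each container adds only its own writable-layer size.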
@@ -133,7 +156,7 @@ pulled down separately, and stored in Docker's local storage area, which is usually `/var/lib/docker/` on Linux hosts. You can see these layers being pulled in this example: -```bash +```console $ docker pull ubuntu:18.04 18.04: Pulling from library/ubuntu f476d66f5408: Pull complete @@ -149,7 +172,7 @@ local storage area. To examine the layers on the filesystem, list the contents of `/var/lib/docker/`. This example uses the `overlay2` storage driver: -```bash +```console $ ls /var/lib/docker/overlay2 16802227a96c24dcbeab5b37821e2b67a9f921749cd9a2e386d5a6d5bc6fc6d3 377d73dbb466e0bc7c9ee23166771b35ebdbe02ef17753d79fd3571d4ce659d7 @@ -158,16 +181,15 @@ ec1ec45792908e90484f7e629330666e7eee599f08729c93890a7205a6ba35f5 l ``` -The directory names do not correspond to the layer IDs (this has been true since -Docker 1.10). +The directory names do not correspond to the layer IDs. Now imagine that you have two different Dockerfiles. You use the first one to create an image called `acme/my-base-image:1.0`. ```dockerfile # syntax=docker/dockerfile:1 -FROM ubuntu:18.04 -COPY . /app +FROM alpine +RUN apk add --no-cache bash ``` The second one is based on `acme/my-base-image:1.0`, but has some additional @@ -176,16 +198,18 @@ layers: ```dockerfile # syntax=docker/dockerfile:1 FROM acme/my-base-image:1.0 +COPY . /app +RUN chmod +x /app/hello.sh CMD /app/hello.sh ``` -The second image contains all the layers from the first image, plus a new layer -with the `CMD` instruction, and a read-write container layer. Docker already -has all the layers from the first image, so it does not need to pull them again. -The two images share any layers they have in common. +The second image contains all the layers from the first image, plus new layers +created by the `COPY` and `RUN` instructions, and a read-write container layer. +Docker already has all the layers from the first image, so it does not need to +pull them again. The two images share any layers they have in common. 
If you build images from the two Dockerfiles, you can use `docker image ls` and -`docker history` commands to verify that the cryptographic IDs of the shared +`docker image history` commands to verify that the cryptographic IDs of the shared layers are the same. 1. Make a new directory `cow-test/` and change into it. @@ -193,16 +217,10 @@ layers are the same. 2. Within `cow-test/`, create a new file called `hello.sh` with the following contents: ```bash - #!/bin/sh + #!/usr/bin/env bash echo "Hello world" ``` - Save the file, and make it executable: - - ```bash - chmod +x hello.sh - ``` - 3. Copy the contents of the first Dockerfile above into a new file called `Dockerfile.base`. @@ -215,14 +233,23 @@ layers are the same. ```console $ docker build -t acme/my-base-image:1.0 -f Dockerfile.base . - Sending build context to Docker daemon 812.4MB - Step 1/2 : FROM ubuntu:18.04 - ---> d131e0fa2585 - Step 2/2 : COPY . /app - ---> Using cache - ---> bd09118bcef6 - Successfully built bd09118bcef6 - Successfully tagged acme/my-base-image:1.0 + [+] Building 6.0s (11/11) FINISHED + => [internal] load build definition from Dockerfile.base 0.4s + => => transferring dockerfile: 116B 0.0s + => [internal] load .dockerignore 0.3s + => => transferring context: 2B 0.0s + => resolve image config for docker.io/docker/dockerfile:1 1.5s + => [auth] docker/dockerfile:pull token for registry-1.docker.io 0.0s + => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9e2c9eca7367393aecc68795c671... 0.0s + => [internal] load .dockerignore 0.0s + => [internal] load build definition from Dockerfile.base 0.0s + => [internal] load metadata for docker.io/library/alpine:latest 0.0s + => CACHED [1/2] FROM docker.io/library/alpine 0.0s + => [2/2] RUN apk add --no-cache bash 3.1s + => exporting to image 0.2s + => => exporting layers 0.2s + => => writing image sha256:da3cf8df55ee9777ddcd5afc40fffc3ead816bda99430bad2257de4459625eaa 0.0s + => => naming to docker.io/acme/my-base-image:1.0 0.0s ``` 6. 
Build the second image. @@ -230,15 +257,25 @@ layers are the same. ```console $ docker build -t acme/my-final-image:1.0 -f Dockerfile . - Sending build context to Docker daemon 4.096kB - Step 1/2 : FROM acme/my-base-image:1.0 - ---> bd09118bcef6 - Step 2/2 : CMD /app/hello.sh - ---> Running in a07b694759ba - ---> dbf995fc07ff - Removing intermediate container a07b694759ba - Successfully built dbf995fc07ff - Successfully tagged acme/my-final-image:1.0 + [+] Building 3.6s (12/12) FINISHED + => [internal] load build definition from Dockerfile 0.1s + => => transferring dockerfile: 156B 0.0s + => [internal] load .dockerignore 0.1s + => => transferring context: 2B 0.0s + => resolve image config for docker.io/docker/dockerfile:1 0.5s + => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9e2c9eca7367393aecc68795c671... 0.0s + => [internal] load .dockerignore 0.0s + => [internal] load build definition from Dockerfile 0.0s + => [internal] load metadata for docker.io/acme/my-base-image:1.0 0.0s + => [internal] load build context 0.2s + => => transferring context: 340B 0.0s + => [1/3] FROM docker.io/acme/my-base-image:1.0 0.2s + => [2/3] COPY . /app 0.1s + => [3/3] RUN chmod +x /app/hello.sh 0.4s + => exporting to image 0.1s + => => exporting layers 0.1s + => => writing image sha256:8bd85c42fa7ff6b33902ada7dcefaaae112bf5673873a089d73583b0074313dd 0.0s + => => naming to docker.io/acme/my-final-image:1.0 0.0s ``` 7. Check out the sizes of the images: @@ -246,47 +283,99 @@ layers are the same. ```console $ docker image ls - REPOSITORY TAG IMAGE ID CREATED SIZE - acme/my-final-image 1.0 dbf995fc07ff 58 seconds ago 103MB - acme/my-base-image 1.0 bd09118bcef6 3 minutes ago 103MB + REPOSITORY TAG IMAGE ID CREATED SIZE + acme/my-final-image 1.0 8bd85c42fa7f About a minute ago 7.75MB + acme/my-base-image 1.0 da3cf8df55ee 2 minutes ago 7.75MB ``` -8. Check out the layers that comprise each image: +8. 
Check out the history of each image:

    ```console
-    $ docker history bd09118bcef6
-    IMAGE CREATED CREATED BY SIZE COMMENT
-    bd09118bcef6 4 minutes ago /bin/sh -c #(nop) COPY dir:35a7eb158c1504e... 100B
-    d131e0fa2585 3 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
-    <missing> 3 months ago /bin/sh -c mkdir -p /run/systemd && echo '... 7B
-    <missing> 3 months ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\... 2.78kB
-    <missing> 3 months ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
-    <missing> 3 months ago /bin/sh -c set -xe && echo '#!/bin/sh' >... 745B
-    <missing> 3 months ago /bin/sh -c #(nop) ADD file:eef57983bd66e3a... 103MB
+    $ docker image history acme/my-base-image:1.0
+
+    IMAGE CREATED CREATED BY SIZE COMMENT
+    da3cf8df55ee 5 minutes ago RUN /bin/sh -c apk add --no-cache bash # bui… 2.15MB buildkit.dockerfile.v0
+    <missing> 7 weeks ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
+    <missing> 7 weeks ago /bin/sh -c #(nop) ADD file:f278386b0cef68136… 5.6MB
    ```

+    Some steps do not have a size (`0B`), and are metadata-only changes, which do
+    not produce an image layer and do not take up any space, other than the metadata
+    itself. The output above shows that this image consists of 2 image layers.
+
    ```console
-    $ docker history dbf995fc07ff
+    $ docker image history acme/my-final-image:1.0

-    IMAGE CREATED CREATED BY SIZE COMMENT
-    dbf995fc07ff 3 minutes ago /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "/a... 0B
-    bd09118bcef6 5 minutes ago /bin/sh -c #(nop) COPY dir:35a7eb158c1504e... 100B
-    d131e0fa2585 3 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
-    <missing> 3 months ago /bin/sh -c mkdir -p /run/systemd && echo '... 7B
-    <missing> 3 months ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\... 2.78kB
-    <missing> 3 months ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
-    <missing> 3 months ago /bin/sh -c set -xe && echo '#!/bin/sh' >... 745B
-    <missing> 3 months ago /bin/sh -c #(nop) ADD file:eef57983bd66e3a... 103MB
+    IMAGE CREATED CREATED BY SIZE COMMENT
+    8bd85c42fa7f 3 minutes ago CMD ["/bin/sh" "-c" "/app/hello.sh"] 0B buildkit.dockerfile.v0
+    <missing> 3 minutes ago RUN /bin/sh -c chmod +x /app/hello.sh # buil… 39B buildkit.dockerfile.v0
+    <missing> 3 minutes ago COPY . /app # buildkit 222B buildkit.dockerfile.v0
+    <missing> 4 minutes ago RUN /bin/sh -c apk add --no-cache bash # bui… 2.15MB buildkit.dockerfile.v0
+    <missing> 7 weeks ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
+    <missing> 7 weeks ago /bin/sh -c #(nop) ADD file:f278386b0cef68136… 5.6MB
    ```

-    Notice that all the layers are identical except the top layer of the second
-    image. All the other layers are shared between the two images, and are only
-    stored once in `/var/lib/docker/`. The new layer actually doesn't take any
-    room at all, because it is not changing any files, but only running a command.
+    Notice that all steps of the first image are also included in the final
+    image. The final image includes the two layers from the first image, and
+    two layers that were added in the second image.

-    > **Note**: The `<missing>` lines in the `docker history` output indicate
-    > that those layers were built on another system and are not available
-    > locally. This can be ignored.
+    > What are the `<missing>` steps?
+    >
+    > The `<missing>` lines in the `docker history` output indicate that those
+    > steps either were built on another system and are part of the `alpine` image
+    > that was pulled from Docker Hub, or were built with BuildKit as the builder.
+    > Before BuildKit, the "classic" builder would produce a new "intermediate"
+    > image for each step for caching purposes, and the `IMAGE` column would show
+    > the ID of that image.
+    > BuildKit uses its own caching mechanism, and no longer requires intermediate
+    > images for caching. Refer to [build images with BuildKit](../../develop/develop-images/build_enhancements.md)
+    > to learn more about other enhancements made in BuildKit.
+
+
+9. 
Check out the layers for each image

+    Use the `docker image inspect` command to view the cryptographic IDs of the
+    layers in each image:
+
+    {% raw %}
+    ```console
+    $ docker image inspect --format "{{json .RootFS.Layers}}" acme/my-base-image:1.0
+    [
+      "sha256:72e830a4dff5f0d5225cdc0a320e85ab1ce06ea5673acfe8d83a7645cbd0e9cf",
+      "sha256:07b4a9068b6af337e8b8f1f1dae3dd14185b2c0003a9a1f0a6fd2587495b204a"
+    ]
+    ```
+    {% endraw %}
+
+    {% raw %}
+    ```console
+    $ docker image inspect --format "{{json .RootFS.Layers}}" acme/my-final-image:1.0
+    [
+      "sha256:72e830a4dff5f0d5225cdc0a320e85ab1ce06ea5673acfe8d83a7645cbd0e9cf",
+      "sha256:07b4a9068b6af337e8b8f1f1dae3dd14185b2c0003a9a1f0a6fd2587495b204a",
+      "sha256:cc644054967e516db4689b5282ee98e4bc4b11ea2255c9630309f559ab96562e",
+      "sha256:e84fb818852626e89a09f5143dbc31fe7f0e0a6a24cd8d2eb68062b904337af4"
+    ]
+    ```
+    {% endraw %}
+
+    Notice that the first two layers are identical in both images. The second
+    image adds two additional layers. Shared image layers are only stored once
+    in `/var/lib/docker/`, and are also shared when pushing and pulling an image
+    to or from an image registry. Shared image layers can therefore reduce network
+    bandwidth and storage.
+
+    > Tip: format output of Docker commands with the `--format` option
+    >
+    > The examples above use the `docker image inspect` command with the `--format`
+    > option to view the layer IDs, formatted as a JSON array. The `--format`
+    > option on Docker commands can be a powerful feature that allows you to
+    > extract and format specific information from the output, without requiring
+    > additional tools such as `awk` or `sed`. To learn more about formatting
+    > the output of Docker commands using the `--format` flag, refer to the
+    > [format command and log output section](../../config/formatting.md).
+    > We also pretty-printed the JSON output using the [`jq` utility](https://stedolan.github.io/jq/){: target="_blank" rel="noopener" class="_" }
+    > for readability.
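The layer comparison described above can also be done mechanically. The sketch below intersects two sorted layer lists with `comm`; the digests are placeholder values for illustration, not real layer IDs:

```console
$ printf 'sha256:aaa\nsha256:bbb\n' > base-layers.txt
$ printf 'sha256:aaa\nsha256:bbb\nsha256:ccc\n' > final-layers.txt
$ comm -12 base-layers.txt final-layers.txt
sha256:aaa
sha256:bbb
```

`comm -12` suppresses the lines unique to either file, leaving only the layers the two images have in common (both inputs must be sorted).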
### Copying makes containers efficient @@ -297,16 +386,14 @@ layer. This means that the writable layer is as small as possible. When an existing file in a container is modified, the storage driver performs a copy-on-write operation. The specifics steps involved depend on the specific -storage driver. For the `aufs`, `overlay`, and `overlay2` drivers, the +storage driver. For the `overlay2`, `overlay`, and `aufs` drivers, the copy-on-write operation follows this rough sequence: * Search through the image layers for the file to update. The process starts at the newest layer and works down to the base layer one layer at a time. When results are found, they are added to a cache to speed future operations. - * Perform a `copy_up` operation on the first copy of the file that is found, to copy the file to the container's writable layer. - * Any modifications are made to this copy of the file, and the container cannot see the read-only copy of the file that exists in the lower layer. @@ -316,13 +403,20 @@ descriptions. Containers that write a lot of data consume more space than containers that do not. This is because most write operations consume new space in the -container's thin writable top layer. +container's thin writable top layer. Note that changing the metadata of files, +for example, changing file permissions or ownership of a file, can also result +in a `copy_up` operation, therefore duplicating the file to the writable layer. -> **Note**: for write-heavy applications, you should not store the data in -> the container. Instead, use Docker volumes, which are independent of the -> running container and are designed to be efficient for I/O. In addition, -> volumes can be shared among containers and do not increase the size of your -> container's writable layer. +> Tip: Use volumes for write-heavy applications +> +> For write-heavy applications, you should not store the data in the container. 
+> Write-intensive applications, such as database storage, are known to be
+> problematic, particularly when pre-existing data exists in the read-only layer.
+>
+> Instead, use Docker volumes, which are independent of the running container,
+> and designed to be efficient for I/O. In addition, volumes can be shared among
+> containers and do not increase the size of your container's writable layer.
+> Refer to the [use volumes](../volumes.md) section to learn about volumes.
 
 A `copy_up` operation can incur a noticeable performance overhead. This
 overhead is different depending on which storage driver is in use. Large files,
@@ -334,72 +428,107 @@ To verify the way that copy-on-write works, the following procedures spins up 5
 containers based on the `acme/my-final-image:1.0` image we built earlier and
 examines how much room they take up.
 
-> **Note**: This procedure doesn't work on Docker Desktop for Mac or Docker Desktop for Windows.
-
 1.  From a terminal on your Docker host, run the following `docker run` commands.
     The strings at the end are the IDs of each container.
- ```bash + ```console $ docker run -dit --name my_container_1 acme/my-final-image:1.0 bash \ && docker run -dit --name my_container_2 acme/my-final-image:1.0 bash \ && docker run -dit --name my_container_3 acme/my-final-image:1.0 bash \ && docker run -dit --name my_container_4 acme/my-final-image:1.0 bash \ && docker run -dit --name my_container_5 acme/my-final-image:1.0 bash - c36785c423ec7e0422b2af7364a7ba4da6146cbba7981a0951fcc3fa0430c409 - dcad7101795e4206e637d9358a818e5c32e13b349e62b00bf05cd5a4343ea513 - 1e7264576d78a3134fbaf7829bc24b1d96017cf2bc046b7cd8b08b5775c33d0c - 38fa94212a419a082e6a6b87a8e2ec4a44dd327d7069b85892a707e3fc818544 - 1a174fc216cccf18ec7d4fe14e008e30130b11ede0f0f94a87982e310cf2e765 + 40ebdd7634162eb42bdb1ba76a395095527e9c0aa40348e6c325bd0aa289423c + a5ff32e2b551168b9498870faf16c9cd0af820edf8a5c157f7b80da59d01a107 + 3ed3c1a10430e09f253704116965b01ca920202d52f3bf381fbb833b8ae356bc + 939b3bf9e7ece24bcffec57d974c939da2bdcc6a5077b5459c897c1e2fa37a39 + cddae31c314fbab3f7eabeb9b26733838187abc9a2ed53f97bd5b04cd7984a5a ``` +2. Run the `docker ps` command with the `--size` option to verify the 5 containers + are running, and to see each container's size. -2. Run the `docker ps` command to verify the 5 containers are running. 
{% raw %}
+    ```console
+    $ docker ps --size --format "table {{.ID}}\t{{.Image}}\t{{.Names}}\t{{.Size}}"
+
-    ```bash
-    CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS               NAMES
-    1a174fc216cc        acme/my-final-image:1.0   "bash"              About a minute ago  Up About a minute                       my_container_5
-    38fa94212a41        acme/my-final-image:1.0   "bash"              About a minute ago  Up About a minute                       my_container_4
-    1e7264576d78        acme/my-final-image:1.0   "bash"              About a minute ago  Up About a minute                       my_container_3
-    dcad7101795e        acme/my-final-image:1.0   "bash"              About a minute ago  Up About a minute                       my_container_2
-    c36785c423ec        acme/my-final-image:1.0   "bash"              About a minute ago  Up About a minute                       my_container_1
+    CONTAINER ID   IMAGE                     NAMES            SIZE
+    cddae31c314f   acme/my-final-image:1.0   my_container_5   0B (virtual 7.75MB)
+    939b3bf9e7ec   acme/my-final-image:1.0   my_container_4   0B (virtual 7.75MB)
+    3ed3c1a10430   acme/my-final-image:1.0   my_container_3   0B (virtual 7.75MB)
+    a5ff32e2b551   acme/my-final-image:1.0   my_container_2   0B (virtual 7.75MB)
+    40ebdd763416   acme/my-final-image:1.0   my_container_1   0B (virtual 7.75MB)
     ```
+    {% endraw %}
+
+    The output above shows that all containers share the image's read-only layers
+    (7.75MB), but no data was written to the containers' filesystems, so no
+    additional storage is used for the containers.
 
-3.  List the contents of the local storage area.
+    > Advanced: metadata and logs storage used for containers
+    >
+    > **Note**: This step requires a Linux machine, and does not work on Docker
+    > Desktop for Mac or Docker Desktop for Windows, as it requires access to
+    > the Docker Daemon's file storage.
+    >
+    > While the output of `docker ps` provides you with information about disk
+    > space consumed by a container's writable layer, it does not include
+    > information about the metadata and log files stored for each container.
+    >
+    > More details can be obtained by exploring the Docker Daemon's storage location
+    > (`/var/lib/docker` by default).
+ > + > ```console + > $ sudo du -sh /var/lib/docker/containers/* + > + > 36K /var/lib/docker/containers/3ed3c1a10430e09f253704116965b01ca920202d52f3bf381fbb833b8ae356bc + > 36K /var/lib/docker/containers/40ebdd7634162eb42bdb1ba76a395095527e9c0aa40348e6c325bd0aa289423c + > 36K /var/lib/docker/containers/939b3bf9e7ece24bcffec57d974c939da2bdcc6a5077b5459c897c1e2fa37a39 + > 36K /var/lib/docker/containers/a5ff32e2b551168b9498870faf16c9cd0af820edf8a5c157f7b80da59d01a107 + > 36K /var/lib/docker/containers/cddae31c314fbab3f7eabeb9b26733838187abc9a2ed53f97bd5b04cd7984a5a + > ``` + > + > Each of these containers only takes up 36k of space on the filesystem. - ```bash - $ sudo ls /var/lib/docker/containers +3. Per-container storage - 1a174fc216cccf18ec7d4fe14e008e30130b11ede0f0f94a87982e310cf2e765 - 1e7264576d78a3134fbaf7829bc24b1d96017cf2bc046b7cd8b08b5775c33d0c - 38fa94212a419a082e6a6b87a8e2ec4a44dd327d7069b85892a707e3fc818544 - c36785c423ec7e0422b2af7364a7ba4da6146cbba7981a0951fcc3fa0430c409 - dcad7101795e4206e637d9358a818e5c32e13b349e62b00bf05cd5a4343ea513 + To demonstrate this, run the following command to write the word 'hello' to + a file on the container's writable layer in containers `my_container_1`, + `my_container_2`, and `my_container_3`: + + ```console + $ for i in {1..3}; do docker exec my_container_$i sh -c 'printf hello > /out.txt'; done ``` + + Running the `docker ps` command again afterward shows that those containers + now consume 5 bytes each. This data is unique to each container, and not + shared. The read-only layers of the containers are not affected, and are still + shared by all containers. -4. 
Now check out their sizes:
+    {% raw %}
+    ```console
+    $ docker ps --size --format "table {{.ID}}\t{{.Image}}\t{{.Names}}\t{{.Size}}"
+
-    ```bash
-    $ sudo du -sh /var/lib/docker/containers/*
-
-    32K /var/lib/docker/containers/1a174fc216cccf18ec7d4fe14e008e30130b11ede0f0f94a87982e310cf2e765
-    32K /var/lib/docker/containers/1e7264576d78a3134fbaf7829bc24b1d96017cf2bc046b7cd8b08b5775c33d0c
-    32K /var/lib/docker/containers/38fa94212a419a082e6a6b87a8e2ec4a44dd327d7069b85892a707e3fc818544
-    32K /var/lib/docker/containers/c36785c423ec7e0422b2af7364a7ba4da6146cbba7981a0951fcc3fa0430c409
-    32K /var/lib/docker/containers/dcad7101795e4206e637d9358a818e5c32e13b349e62b00bf05cd5a4343ea513
+    CONTAINER ID   IMAGE                     NAMES            SIZE
+    cddae31c314f   acme/my-final-image:1.0   my_container_5   0B (virtual 7.75MB)
+    939b3bf9e7ec   acme/my-final-image:1.0   my_container_4   0B (virtual 7.75MB)
+    3ed3c1a10430   acme/my-final-image:1.0   my_container_3   5B (virtual 7.75MB)
+    a5ff32e2b551   acme/my-final-image:1.0   my_container_2   5B (virtual 7.75MB)
+    40ebdd763416   acme/my-final-image:1.0   my_container_1   5B (virtual 7.75MB)
     ```
+    {% endraw %}
 
-    Each of these containers only takes up 32k of space on the filesystem.
-
-Not only does copy-on-write save space, but it also reduces start-up time.
-When you start a container (or multiple containers from the same image), Docker
-only needs to create the thin writable container layer.
+The examples above illustrate how copy-on-write filesystems help make containers
+efficient. Not only does copy-on-write save space, but it also reduces container
+start-up time. When you create a container (or multiple containers from the same
+image), Docker only needs to create the thin writable container layer.
 
 If Docker had to make an entire copy of the underlying image stack each time it
-started a new container, container start times and disk space used would be
+created a new container, container creation times and disk space used would be
 significantly increased.
This would be similar to the way that virtual machines -work, with one or more virtual disks per virtual machine. +work, with one or more virtual disks per virtual machine. The [`vfs` storage](vfs-driver.md) +does not provide a CoW filesystem or other optimizations. When using this storage +driver, a full copy of the image's data is created for each container. ## Related information diff --git a/storage/storagedriver/overlayfs-driver.md b/storage/storagedriver/overlayfs-driver.md index f323d90c64..8c31ae5cd7 100644 --- a/storage/storagedriver/overlayfs-driver.md +++ b/storage/storagedriver/overlayfs-driver.md @@ -70,13 +70,13 @@ need to use the legacy `overlay` driver, specify it instead. 1. Stop Docker. - ```bash + ```console $ sudo systemctl stop docker ``` 2. Copy the contents of `/var/lib/docker` to a temporary location. - ```bash + ```console $ cp -au /var/lib/docker /var/lib/docker.bk ``` @@ -97,7 +97,7 @@ need to use the legacy `overlay` driver, specify it instead. 5. Start Docker. - ```bash + ```console $ sudo systemctl start docker ``` @@ -105,7 +105,7 @@ need to use the legacy `overlay` driver, specify it instead. Use the `docker info` command and look for `Storage Driver` and `Backing filesystem`. - ```bash + ```console $ docker info Containers: 0 @@ -149,7 +149,7 @@ six directories under `/var/lib/docker/overlay2`. > **Warning**: Do not directly manipulate any files or directories within > `/var/lib/docker/`. These files and directories are managed by Docker. -```bash +```console $ ls -l /var/lib/docker/overlay2 total 24 @@ -165,7 +165,7 @@ The new `l` (lowercase `L`) directory contains shortened layer identifiers as symbolic links. These identifiers are used to avoid hitting the page size limitation on arguments to the `mount` command. 
-```bash +```console $ ls -l /var/lib/docker/overlay2/l total 20 @@ -180,7 +180,7 @@ The lowest layer contains a file called `link`, which contains the name of the shortened identifier, and a directory called `diff` which contains the layer's contents. -```bash +```console $ ls /var/lib/docker/overlay2/3a36935c9df35472229c57f4a27105a136f5e4dbef0f87905b2e506e494e348b/ diff link @@ -200,7 +200,7 @@ contents. It also contains a `merged` directory, which contains the unified contents of its parent layer and itself, and a `work` directory which is used internally by OverlayFS. -```bash +```console $ ls /var/lib/docker/overlay2/223c2864175491657d238e2664251df13b63adb8d050924fd1bfcdb278b866f7 diff link lower merged work @@ -217,7 +217,7 @@ etc sbin usr var To view the mounts which exist when you use the `overlay` storage driver with Docker, use the `mount` command. The output below is truncated for readability. -```bash +```console $ mount | grep overlay overlay on /var/lib/docker/overlay2/9186877cdf386d0a3b016149cf30c208f326dca307529e646afce5b3f83f5304/merged @@ -273,7 +273,7 @@ the container is the `upperdir` and is writable. The following `docker pull` command shows a Docker host downloading a Docker image comprising five layers. -```bash +```console $ docker pull ubuntu Using default tag: latest @@ -297,7 +297,7 @@ the directory IDs. > **Warning**: Do not directly manipulate any files or directories within > `/var/lib/docker/`. These files and directories are managed by Docker. -```bash +```console $ ls -l /var/lib/docker/overlay/ total 20 @@ -312,7 +312,7 @@ The image layer directories contain the files unique to that layer as well as hard links to the data that is shared with lower layers. This allows for efficient use of disk space. 
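The hard-link sharing can be demonstrated with plain shell, independent of Docker; the file names below are stand-ins for layer files:

```shell
# Two hard links to the same data share an inode, so the file contents
# are stored only once on disk.
root=$(mktemp -d)
printf 'layer data\n' > "$root/image-layer-file"
ln "$root/image-layer-file" "$root/container-layer-file"   # hard link, not a copy

# Both directory entries report the same inode number,
# confirming that the data is shared rather than duplicated.
ino1=$(ls -i "$root/image-layer-file"     | awk '{print $1}')
ino2=$(ls -i "$root/container-layer-file" | awk '{print $1}')
echo "$ino1 $ino2"
rm -rf "$root"
```

This is the same inode sharing the `ls -i` output below shows for the two `/bin/ls` entries in different layer directories.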
-```bash +```console $ ls -i /var/lib/docker/overlay/38f3ed2eac129654acef11c32670b534670c3a06e483fce313d72e3e0a15baa8/root/bin/ls 19793696 /var/lib/docker/overlay/38f3ed2eac129654acef11c32670b534670c3a06e483fce313d72e3e0a15baa8/root/bin/ls @@ -328,7 +328,7 @@ Containers also exist on-disk in the Docker host's filesystem under `/var/lib/docker/overlay/`. If you list a running container's subdirectory using the `ls -l` command, three directories and one file exist: -```bash +```console $ ls -l /var/lib/docker/overlay/ total 16 @@ -341,7 +341,7 @@ drwx------ 3 root root 4096 Jun 20 16:39 work The `lower-id` file contains the ID of the top layer of the image the container is based on, which is the OverlayFS `lowerdir`. -```bash +```console $ cat /var/lib/docker/overlay/ec444863a55a9f1ca2df72223d459c5d940a721b2288ff86a3f27be28b53be6c/lower-id 55f1e14c361b90570df46371b20ce6d480c434981cbda5fd68c6ff61aa0a5358 @@ -358,7 +358,7 @@ The `work` directory is internal to OverlayFS. To view the mounts which exist when you use the `overlay` storage driver with Docker, use the `mount` command. The output below is truncated for readability. -```bash +```console $ mount | grep overlay overlay on /var/lib/docker/overlay/ec444863a55a.../merged diff --git a/storage/storagedriver/select-storage-driver.md b/storage/storagedriver/select-storage-driver.md index 8b2dc3618a..171037c57a 100644 --- a/storage/storagedriver/select-storage-driver.md +++ b/storage/storagedriver/select-storage-driver.md @@ -13,90 +13,63 @@ use Docker volumes to write data. However, some workloads require you to be able to write to the container's writable layer. This is where storage drivers come in. -Docker supports several different storage drivers, using a pluggable -architecture. The storage driver controls how images and containers are stored -and managed on your Docker host. - -After you have read the [storage driver overview](index.md), the -next step is to choose the best storage driver for your workloads. 
In making -this decision, there are three high-level factors to consider: - -If multiple storage drivers are supported in your kernel, Docker has a prioritized -list of which storage driver to use if no storage driver is explicitly configured, -assuming that the storage driver meets the prerequisites. - -Use the storage driver with the best overall performance and stability in the most -usual scenarios. +Docker supports several storage drivers, using a pluggable architecture. The +storage driver controls how images and containers are stored and managed on your +Docker host. After you have read the [storage driver overview](index.md), the +next step is to choose the best storage driver for your workloads. Use the storage +driver with the best overall performance and stability in the most usual scenarios. -Docker supports the following storage drivers: +The Docker Engine provides the following storage drivers on Linux: -* `overlay2` is the preferred storage driver, for all currently supported - Linux distributions, and requires no extra configuration. -* `aufs` was the preferred storage driver for Docker 18.06 and older, when - running on Ubuntu 14.04 on kernel 3.13 which had no support for `overlay2`. -* `fuse-overlayfs` is preferred only for running Rootless Docker - on a host that does not provide support for rootless `overlay2`. - On Ubuntu and Debian 10, the `fuse-overlayfs` driver does not need to be - used `overlay2` works even in rootless mode. - See [Rootless mode documentation](../../engine/security/rootless.md). -* `devicemapper` is supported, but requires `direct-lvm` for production - environments, because `loopback-lvm`, while zero-configuration, has very - poor performance. `devicemapper` was the recommended storage driver for - CentOS and RHEL, as their kernel version did not support `overlay2`. However, - current versions of CentOS and RHEL now have support for `overlay2`, - which is now the recommended driver. 
-  * The `btrfs` and `zfs` storage drivers are used if they are the backing
-    filesystem (the filesystem of the host on which Docker is installed).
-    These filesystems allow for advanced options, such as creating "snapshots",
-    but require more maintenance and setup. Each of these relies on the backing
-    filesystem being configured correctly.
-  * The `vfs` storage driver is intended for testing purposes, and for situations
-    where no copy-on-write filesystem can be used. Performance of this storage
-    driver is poor, and is not generally recommended for production use.
+| Driver             | Description |
+|--------------------|-------------|
+| `overlay2`         | `overlay2` is the preferred storage driver for all currently supported Linux distributions, and requires no extra configuration. |
+| `fuse-overlayfs`   | `fuse-overlayfs` is preferred only for running Rootless Docker on a host that does not provide support for rootless `overlay2`. On Ubuntu and Debian 10, the `fuse-overlayfs` driver does not need to be used, and `overlay2` works even in rootless mode. Refer to the [rootless mode documentation](../../engine/security/rootless.md) for details. |
+| `btrfs` and `zfs`  | The `btrfs` and `zfs` storage drivers allow for advanced options, such as creating "snapshots", but require more maintenance and setup. Each of these relies on the backing filesystem being configured correctly. |
+| `vfs`              | The `vfs` storage driver is intended for testing purposes, and for situations where no copy-on-write filesystem can be used. Performance of this storage driver is poor, and is not generally recommended for production use. |
+| `aufs`             | The `aufs` storage driver was the preferred storage driver for Docker 18.06 and older, when running on Ubuntu 14.04 on kernel 3.13, which had no support for `overlay2`. However, current versions of Ubuntu and Debian now have support for `overlay2`, which is now the recommended driver. |
+| `devicemapper`     | The `devicemapper` storage driver requires `direct-lvm` for production environments, because `loopback-lvm`, while zero-configuration, has very poor performance. `devicemapper` was the recommended storage driver for CentOS and RHEL, as their kernel version did not support `overlay2`. However, current versions of CentOS and RHEL now have support for `overlay2`, which is now the recommended driver. |
+| `overlay`          | The legacy `overlay` driver was used for kernels that did not support the "multiple-lowerdir" feature required for `overlay2`. All currently supported Linux distributions now provide support for this, and it is therefore deprecated. |

-Docker's source code defines the selection order. You can see the order at
-[the source code for Docker Engine {{ site.docker_ce_version }}](https://github.com/moby/moby/blob/{{ site.docker_ce_version }}/daemon/graphdriver/driver_linux.go#L52-L53)
-
-If you run a different version of Docker, you can use the branch selector at the top of the file viewer to choose a different branch.
+If no storage driver is explicitly configured, the Docker Engine automatically
+selects a compatible storage driver from a prioritized list, provided that the
+driver meets the prerequisites. You can see the order in the
+[source code for Docker Engine {{ site.docker_ce_version }}](https://github.com/moby/moby/blob/{{ site.docker_ce_version }}/daemon/graphdriver/driver_linux.go#L52-L53).
{: id="storage-driver-order" } -Some storage drivers require you to use a specific format for the backing filesystem. If you have external -requirements to use a specific backing filesystem, this may limit your choices. See [Supported backing filesystems](#supported-backing-filesystems). +Some storage drivers require you to use a specific format for the backing filesystem. +If you have external requirements to use a specific backing filesystem, this may +limit your choices. See [Supported backing filesystems](#supported-backing-filesystems). -After you have narrowed down which storage drivers you can choose from, your choice is determined by the -characteristics of your workload and the level of stability you need. See [Other considerations](#other-considerations) -for help in making the final decision. - -> ***NOTE***: Your choice may be limited by your operating system and distribution. -> For instance, `aufs` is only supported on Ubuntu and Debian, and may require extra packages -> to be installed, while `btrfs` is only supported on SLES, which is only supported with Docker -> Enterprise. See [Support storage drivers per Linux distribution](#supported-storage-drivers-per-linux-distribution) -> for more information. +After you have narrowed down which storage drivers you can choose from, your choice +is determined by the characteristics of your workload and the level of stability +you need. See [Other considerations](#other-considerations) for help in making +the final decision. ## Supported storage drivers per Linux distribution -At a high level, the storage drivers you can use is partially determined by -the Docker edition you use. +> Docker Desktop, and Docker in Rootless mode +> +> Modifying the storage-driver is not supported on Docker Desktop for Mac and +> Docker Desktop for Windows, and only the default storage driver can be used. +> The comparison table below is also not applicable for Rootless mode. 
For the +> drivers available in rootless mode, see the [Rootless mode documentation](../../engine/security/rootless.md). -In addition, Docker does not recommend any configuration that requires you to -disable security features of your operating system, such as the need to disable -`selinux` if you use the `overlay` or `overlay2` driver on CentOS. +Your operating system and kernel may not support every storage driver. For +instance, `aufs` is only supported on Ubuntu and Debian, and may require extra +packages to be installed, while `btrfs` is only supported if your system uses +`btrfs` as storage. In general, the following configurations work on recent +versions of the Linux distribution: -### Docker Engine - Community - -For Docker Engine - Community, only some configurations are tested, and your operating -system's kernel may not support every storage driver. In general, the following -configurations work on recent versions of the Linux distribution: - -| Linux distribution | Recommended storage drivers | Alternative drivers | -|:--------------------|:-----------------------------------------------------------------------|:--------------------------------------------------| -| Docker Engine - Community on Ubuntu | `overlay2` or `aufs` (for Ubuntu 14.04 running on kernel 3.13) | `overlay`¹, `devicemapper`², `zfs`, `vfs` | -| Docker Engine - Community on Debian | `overlay2` (Debian Stretch), `aufs` or `devicemapper` (older versions) | `overlay`¹, `vfs` | -| Docker Engine - Community on CentOS | `overlay2` | `overlay`¹, `devicemapper`², `zfs`, `vfs` | -| Docker Engine - Community on Fedora | `overlay2` | `overlay`¹, `devicemapper`², `zfs`, `vfs` | -| Docker Engine - Community on SLES 15 | `overlay2` | `overlay`¹, `devicemapper`², `vfs` | -| Docker Engine - Community on RHEL | `overlay2` | `overlay`¹, `devicemapper`², `vfs` | +| Linux distribution | Recommended storage drivers | Alternative drivers | 
+|:--------------------|:------------------------------|:---------------------------------------------------| +| Ubuntu | `overlay2` | `overlay`¹, `devicemapper`², `aufs`³, `zfs`, `vfs` | +| Debian | `overlay2` | `overlay`¹, `devicemapper`², `aufs`³, `vfs` | +| CentOS | `overlay2` | `overlay`¹, `devicemapper`², `zfs`, `vfs` | +| Fedora | `overlay2` | `overlay`¹, `devicemapper`², `zfs`, `vfs` | +| SLES 15 | `overlay2` | `overlay`¹, `devicemapper`², `vfs` | +| RHEL | `overlay2` | `overlay`¹, `devicemapper`², `vfs` | ¹) The `overlay` storage driver is deprecated, and will be removed in a future release. It is recommended that users of the `overlay` storage driver migrate to `overlay2`. @@ -105,49 +78,33 @@ release. It is recommended that users of the `overlay` storage driver migrate to release. It is recommended that users of the `devicemapper` storage driver migrate to `overlay2`. -> **Note** -> -> The comparison table above is not applicable for Rootless mode. -> For the drivers available in Rootless mode, see [the Rootless mode documentation](../../engine/security/rootless.md). - -When possible, `overlay2` is the recommended storage driver. When installing -Docker for the first time, `overlay2` is used by default. Previously, `aufs` was -used by default when available, but this is no longer the case. If you want to -use `aufs` on new installations going forward, you need to explicitly configure -it, and you may need to install extra packages, such as `linux-image-extra`. -See [aufs](aufs-driver.md). - -On existing installations using `aufs`, it is still used. +³) The `aufs` storage driver is deprecated, and will be removed in a future +release. It is recommended that users of the `aufs` storage driver migrate +to `overlay2`. 
When in doubt, the best all-around configuration is to use a modern Linux
distribution with a kernel that supports the `overlay2` storage driver, and to
use Docker volumes for write-heavy workloads instead of relying on writing data
to the container's writable layer.

-The `vfs` storage driver is usually not the best choice. Before using the `vfs`
-storage driver, be sure to read about
+The `vfs` storage driver is usually not the best choice, and is primarily intended
+for debugging purposes in situations where no other storage driver is supported.
+Before using the `vfs` storage driver, be sure to read about
 [its performance and storage characteristics and limitations](vfs-driver.md).

-> **Expectations for non-recommended storage drivers**: Commercial support is
-> not available for Docker Engine - Community, and you can technically use any storage driver
-> that is available for your platform. For instance, you can use `btrfs` with
-> Docker Engine - Community, even though it is not recommended on any platform for
-> Docker Engine - Community, and you do so at your own risk.
->
-> The recommendations in the table above are based on automated regression
-> testing and the configurations that are known to work for a large number of
-> users. If you use a recommended configuration and find a reproducible issue,
-> it is likely to be fixed very quickly. If the driver that you want to use is
-> not recommended according to this table, you can run it at your own risk. You
-> can and should still report any issues you run into. However, such issues
-> have a lower priority than issues encountered when using a recommended
-> configuration.
+The recommendations in the table above are known to work for a large number of
+users. If you use a recommended configuration and find a reproducible issue,
+it is likely to be fixed very quickly. If the driver that you want to use is
+not recommended according to this table, you can run it at your own risk.
You
+can and should still report any issues you run into. However, such issues
+have a lower priority than issues encountered when using a recommended
+configuration.

-### Docker Desktop for Mac and Docker Desktop for Windows
-
-Docker Desktop for Mac and Docker Desktop for Windows are intended for development, rather
-than production. Modifying the storage driver on these platforms is not
-possible.
+Depending on your Linux distribution, other storage drivers, such as `btrfs`, may
+be available. These storage drivers can have advantages for specific use cases,
+but may require additional setup or maintenance, which makes them not recommended
+for common scenarios. Refer to the documentation for those storage drivers for
+details.

 ## Supported backing filesystems

@@ -205,8 +162,8 @@ specific shared storage system.

 For some users, stability is more important than performance. Though Docker
 considers all of the storage drivers mentioned here to be stable, some are newer
-and are still under active development. In general, `overlay2`, `aufs`, `overlay`,
-and `devicemapper` are the choices with the highest stability.
+and are still under active development. In general, `overlay2`, `aufs`, and
+`devicemapper` are the choices with the highest stability.

 ### Test with your own workloads

@@ -223,7 +180,7 @@ set-up steps to use a given storage driver.

 To see what storage driver Docker is currently using, use `docker info` and look
 for the `Storage Driver` line:

-```bash
+```console
If you revert your changes, you can -> access the old images and containers again, but any that you pulled or -> created using the new driver are then inaccessible. +> **Important** +> +> When you change the storage driver, any existing images and containers become +> inaccessible. This is because their layers cannot be used by the new storage +> driver. If you revert your changes, you can access the old images and containers +> again, but any that you pulled or created using the new driver are then +> inaccessible. {:.important} ## Related information diff --git a/storage/storagedriver/vfs-driver.md b/storage/storagedriver/vfs-driver.md index 470c3137b3..9b3f5160fb 100644 --- a/storage/storagedriver/vfs-driver.md +++ b/storage/storagedriver/vfs-driver.md @@ -17,7 +17,7 @@ mechanism to verify other storage back-ends against, in a testing environment. 1. Stop Docker. - ```bash + ```console $ sudo systemctl stop docker ``` @@ -44,14 +44,14 @@ mechanism to verify other storage back-ends against, in a testing environment. 3. Start Docker. - ```bash + ```console $ sudo systemctl start docker ``` 4. Verify that the daemon is using the `vfs` storage driver. Use the `docker info` command and look for `Storage Driver`. - ```bash + ```console $ docker info Storage Driver: vfs @@ -79,7 +79,7 @@ it is a deep copy of its parent layer. These layers are all located under The following `docker pull` command shows a Docker host downloading a Docker image comprising five layers. -```bash +```console $ docker pull ubuntu Using default tag: latest @@ -99,7 +99,7 @@ image layer IDs shown in the `docker pull` command. To see the size taken up on disk by each layer, you can use the `du -sh` command, which gives the size as a human-readable value. -```bash +```console $ ls -l /var/lib/docker/vfs/dir/ total 0 @@ -111,7 +111,7 @@ drwxr-xr-x. 21 root root 224 Aug 2 18:23 a292ac6341a65bf3a5da7b7c251e19de1294bd drwxr-xr-x. 
21 root root 224 Aug 2 18:23 e92be7a4a4e3ccbb7dd87695bca1a0ea373d4f673f455491b1342b33ed91446b ``` -```bash +```console $ du -sh /var/lib/docker/vfs/dir/* 4.0K /var/lib/docker/vfs/dir/3262dfbe53dac3e1ab7dcc8ad5d8c4d586a11d2ac3c4234892e34bff7f6b821e diff --git a/storage/storagedriver/zfs-driver.md b/storage/storagedriver/zfs-driver.md index f34ee4ee38..9c7fc45b64 100644 --- a/storage/storagedriver/zfs-driver.md +++ b/storage/storagedriver/zfs-driver.md @@ -56,7 +56,7 @@ use unless you have substantial experience with ZFS on Linux. 2. Copy the contents of `/var/lib/docker/` to `/var/lib/docker.bk` and remove the contents of `/var/lib/docker/`. - ```bash + ```console $ sudo cp -au /var/lib/docker /var/lib/docker.bk $ sudo rm -rf /var/lib/docker/* @@ -67,7 +67,7 @@ use unless you have substantial experience with ZFS on Linux. have specified the correct devices, because this is a destructive operation. This example adds two devices to the pool. - ```bash + ```console $ sudo zpool create -f zpool-docker -m /var/lib/docker /dev/xvdf /dev/xvdg ``` @@ -75,7 +75,7 @@ use unless you have substantial experience with ZFS on Linux. display purposes only, and you can use a different name. Check that the pool was created and mounted correctly using `zfs list`. - ```bash + ```console $ sudo zfs list NAME USED AVAIL REFER MOUNTPOINT @@ -96,7 +96,7 @@ use unless you have substantial experience with ZFS on Linux. 4. Start Docker. Use `docker info` to verify that the storage driver is `zfs`. - ```bash + ```console $ sudo docker info Containers: 0 Running: 0 @@ -122,7 +122,7 @@ use unless you have substantial experience with ZFS on Linux. 
To increase the size of the `zpool`, you need to add a dedicated block device to the Docker host, and then add it to the `zpool` using the `zpool add` command: -```bash +```console $ sudo zpool add zpool-docker /dev/xvdh ``` diff --git a/storage/tmpfs.md b/storage/tmpfs.md index 6825f55bcc..c1247b58c4 100644 --- a/storage/tmpfs.md +++ b/storage/tmpfs.md @@ -72,7 +72,7 @@ second uses the `--tmpfs` flag.
-```bash
+```console
 $ docker run -d \
   -it \
   --name tmptest \
@@ -83,7 +83,7 @@ $ docker run -d \
-```bash
+```console
 $ docker run -d \
   -it \
   --name tmptest \
@@ -105,7 +105,7 @@ tmptest` and looking for the `Mounts` section:

 Remove the container:

-```bash
+```console
 $ docker container stop tmptest

 $ docker container rm tmptest
@@ -125,7 +125,7 @@ as the `--tmpfs` flag does not support them. The following example sets the
 `tmpfs-mode` to `1770`, so that it is not world-readable within the container.

-```bash
+```console
 docker run -d \
   -it \
   --name tmptest \

diff --git a/storage/troubleshooting_volume_errors.md b/storage/troubleshooting_volume_errors.md
index 388af6411c..e6b3f4c512 100644
--- a/storage/troubleshooting_volume_errors.md
+++ b/storage/troubleshooting_volume_errors.md
@@ -18,7 +18,7 @@ documentation for `cadvisor` instructs you to run the `cadvisor` container as
 follows:

-```bash
+```console
 $ sudo docker run \
   --volume=/:/rootfs:ro \
   --volume=/var/run:/var/run:rw \
@@ -53,7 +53,7 @@ If you are unsure which process is causing the path mentioned in the error to be
 busy and preventing it from being removed, you can use the `lsof` command to
 find its process. For instance, for the error above:

-```bash
+```console
 $ sudo lsof /var/lib/docker/containers/74bef250361c7817bee19349c93139621b272bc8f654ae112dd4eb9652af9515/shm
 ```

diff --git a/storage/volumes.md b/storage/volumes.md
index a90cbc7b1d..eb214690d0 100644
--- a/storage/volumes.md
+++ b/storage/volumes.md
@@ -110,13 +110,13 @@ container.

 **Create a volume**:

-```bash
+```console
 $ docker volume create my-vol
 ```

 **List volumes**:

-```bash
+```console
 $ docker volume ls

 local               my-vol
@@ -124,7 +124,7 @@ local               my-vol

 **Inspect a volume**:

-```bash
+```console
 $ docker volume inspect my-vol
 [
     {
@@ -140,7 +140,7 @@ $ docker volume inspect my-vol

 **Remove a volume**:

-```bash
+```console
 $ docker volume rm my-vol
 ```

@@ -161,7 +161,7 @@ after running the first one.
-```bash
+```console
 $ docker run -d \
   --name devtest \
   --mount source=myvol2,target=/app \
@@ -171,7 +171,7 @@ $ docker run -d \
-```bash
+```console
 $ docker run -d \
   --name devtest \
   -v myvol2:/app \
@@ -205,7 +205,7 @@ destination, and that the mount is read-write.

 Stop the container and remove the volume. Note volume removal is a separate
 step.

-```bash
+```console
 $ docker container stop devtest

 $ docker container rm devtest
@@ -259,7 +259,7 @@ Docker for Azure both support persistent storage using the Cloudstor plugin.

 The following example starts a `nginx` service with four replicas, each of which
 uses a local volume called `myvol2`.

-```bash
+```console
 $ docker service create -d \
   --replicas=4 \
   --name devtest-service \
@@ -269,7 +269,7 @@ $ docker service create -d \

 Use `docker service ps devtest-service` to verify that the service is running:

-```bash
+```console
 $ docker service ps devtest-service

 ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE       ERROR               PORTS
@@ -278,7 +278,7 @@ ID                  NAME                IMAGE               NODE

 Remove the service, which stops all its tasks:

-```bash
+```console
 $ docker service rm devtest-service
 ```

@@ -313,7 +313,7 @@ The `--mount` and `-v` examples have the same end result.
-```bash
+```console
 $ docker run -d \
   --name=nginxtest \
   --mount source=nginx-vol,destination=/usr/share/nginx/html \
@@ -323,7 +323,7 @@ $ docker run -d \
-```bash
+```console
 $ docker run -d \
   --name=nginxtest \
   -v nginx-vol:/usr/share/nginx/html \
@@ -337,7 +337,7 @@ After running either of these examples, run the following commands to clean up
 the containers and volumes. Note volume removal is a separate step.

-```bash
+```console
 $ docker container stop nginxtest

 $ docker container rm nginxtest
@@ -367,7 +367,7 @@ The `--mount` and `-v` examples have the same result.
-```bash
+```console
 $ docker run -d \
   --name=nginxtest \
   --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly \
@@ -377,7 +377,7 @@ $ docker run -d \
-```bash
+```console
 $ docker run -d \
   --name=nginxtest \
   -v nginx-vol:/usr/share/nginx/html:ro \
@@ -408,7 +408,7 @@ correctly. Look for the `Mounts` section:

 Stop and remove the container, and remove the volume. Volume removal is a
 separate step.

-```bash
+```console
 $ docker container stop nginxtest

 $ docker container rm nginxtest
@@ -448,7 +448,7 @@ host and can connect to the second using SSH.

 On the Docker host, install the `vieux/sshfs` plugin:

-```bash
+```console
 $ docker plugin install --grant-all-permissions vieux/sshfs
 ```

@@ -458,7 +458,7 @@ This example specifies a SSH password, but if the two hosts have shared keys
 configured, you can omit the password. Each volume driver may have zero or more
 configurable options, each of which is specified using an `-o` flag.

-```bash
+```console
 $ docker volume create --driver vieux/sshfs \
   -o sshcmd=test@node2:/home/test \
   -o password=testpassword \
@@ -472,7 +472,7 @@ configured, you can omit the password. Each volume driver may have zero or more
 configurable options. **If the volume driver requires you to pass options, you
 must use the `--mount` flag to mount the volume, rather than `-v`.**

-```bash
+```console
 $ docker run -d \
   --name sshfs-container \
   --volume-driver vieux/sshfs \
@@ -485,7 +485,7 @@ $ docker run -d \

 This example shows how you can create an NFS volume when creating a service.
 This example uses `10.0.0.10` as the NFS server and `/var/docker-nfs` as the
 exported directory on the NFS server. Note that the volume driver specified is
 `local`.
 #### NFSv3

-```bash
+```console
 $ docker service create -d \
   --name nfs-service \
   --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,volume-opt=o=addr=10.0.0.10' \
@@ -493,7 +493,7 @@ $ docker service create -d \
 ```

 #### NFSv4
-```bash
+```console
 docker service create -d \
   --name nfs-service \
   --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,"volume-opt=o=addr=10.0.0.10,rw,nfsvers=4,async"' \
@@ -503,8 +503,8 @@ docker service create -d \
 ```

 ### Create CIFS/Samba volumes

 You can mount a Samba share directly in docker without configuring a mount point on your host.

-```bash
-docker volume create \
+```console
+$ docker volume create \
   --driver local \
   --opt type=cifs \
   --opt device=//uxxxxx.your-server.de/backup \