Merge pull request #13352 from thaJeztah/add_missing_codehints

Add missing code-hints, and minor markdown edits
This commit is contained in:
Usha Mandya 2021-08-16 12:15:19 +01:00 committed by GitHub
commit 00bbe69666
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
57 changed files with 226 additions and 221 deletions


@ -61,6 +61,6 @@ web:
If you forget and use a single dollar sign (`$`), Compose interprets the value
as an environment variable and warns you:
```console
The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string.
```
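Conversely, when a literal dollar sign should reach the container, the documented Compose escape is a double dollar sign. A minimal sketch (service and value illustrative):

```yaml
web:
  build: .
  # "$$" is not interpolated by Compose; the container receives a literal "$VAR".
  command: "echo $$VAR"
```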


@ -14,7 +14,7 @@ variable or by making BuildKit the default setting.
To set the BuildKit environment variable when running the `docker build` command,
run:
```console
$ DOCKER_BUILDKIT=1 docker build .
```
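To make BuildKit the default instead, so the variable isn't needed on every invocation, BuildKit can be enabled in the daemon configuration (`/etc/docker/daemon.json` on Linux) and the daemon restarted:

```json
{
  "features": {
    "buildkit": true
  }
}
```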


@ -522,35 +522,37 @@ use an existing domain name for your application:
1. Use the AWS web console or CLI to get your VPC and subnet IDs. You can retrieve the default VPC ID and attached subnets using these AWS CLI commands:

   ```console
   $ aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId'
   "vpc-123456"
   $ aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-123456 --query 'Subnets[*].SubnetId'
   [
       "subnet-1234abcd",
       "subnet-6789ef00",
   ]
   ```
2. Use the AWS CLI to create your load balancer. The AWS web console can also be used, but it requires adding at least one listener, which we don't need here.

   ```console
   $ aws elbv2 create-load-balancer --name myloadbalancer --type application --subnets "subnet-1234abcd" "subnet-6789ef00"
   {
       "LoadBalancers": [
           {
               "IpAddressType": "ipv4",
               "VpcId": "vpc-123456",
               "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456",
               "DNSName": "myloadbalancer-123456.us-east-1.elb.amazonaws.com",
   <...>
   ```
3. To assign your application an existing domain name, configure your DNS with a CNAME entry pointing to the newly created load balancer's `DNSName`, reported when you created it.
4. Use the load balancer ARN to set `x-aws-loadbalancer` in your Compose file, and deploy your application using the `docker compose up` command.

Note that the Docker ECS integration is not aware of this domain name, so the `docker compose ps` command reports URLs with the load balancer `DNSName`, not your own domain.
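For the last step, a minimal sketch of what the Compose file might look like, using the ARN returned when the load balancer was created (the service details are illustrative, not from the original):

```yaml
x-aws-loadbalancer: "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456"

services:
  web:
    image: nginx
    ports:
      - "80:80"
```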


@ -29,6 +29,7 @@ For components and controls we are using [Bootstrap](https://getbootstrap.com)
<button type="button" class="btn btn-default" data-toggle="tooltip" data-placement="right" title="Tooltip on right">Tooltip on right</button>
```
<hr>
<!-- ### Popover


@ -25,6 +25,7 @@ available.
-L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/bash/docker-compose \
-o /etc/bash_completion.d/docker-compose
```
3. Reload your terminal. You can close and then open a new terminal, or reload your settings by running `source ~/.bashrc` in your current terminal.
#### Mac


@ -579,10 +579,7 @@ format `file://<filename>` or `registry://<value-name>`.
When using `file:`, the referenced file must be present in the `CredentialSpecs`
subdirectory in the Docker data directory, which defaults to `C:\ProgramData\Docker\`
on Windows. The following example loads the credential spec from a file named
`C:\ProgramData\Docker\CredentialSpecs\my-credential-spec.json`.
```yaml
credential_spec:


@ -93,6 +93,7 @@ done using the `--env-file` option:
```console
$ docker-compose --env-file ./config/.env.dev up
```
This file path is relative to the current working directory where the Docker Compose
command is executed.
@ -120,6 +121,7 @@ services:
web:
image: 'webapp:v1.5'
```
Passing the `--env-file ` argument overrides the default file path:
```console
@ -132,7 +134,7 @@ services:
When an invalid file path is being passed as `--env-file` argument, Compose returns an error:
```console
$ docker-compose --env-file ./doesnotexist/.env.dev config
ERROR: Couldn't find env file: /home/user/./doesnotexist/.env.dev
```


@ -56,9 +56,10 @@ services:
count: 1
capabilities: [gpu, utility]
```
Run with Docker Compose:
```console
$ docker-compose up
Creating network "gpu_default" with the default driver
Creating gpu_test_1 ... done
@ -99,7 +100,8 @@ services:
devices:
- capabilities: [gpu]
```
```console
$ docker-compose up
Creating network "gpu_default" with the default driver
Creating gpu_test_1 ... done
@ -114,7 +116,7 @@ gpu_test_1 exited with code 0
On machines hosting multiple GPUs, the `device_ids` field can be set to target specific GPU devices, and `count` can be used to limit the number of GPU devices assigned to a service container. If `count` exceeds the number of available GPUs on the host, the deployment errors out.
```console
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
@ -140,6 +142,7 @@ $ nvidia-smi
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
To enable access only to GPU-0 and GPU-3 devices:
```yaml


@ -159,6 +159,7 @@ $ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
$ docker-compose --version
docker-compose version {{site.compose_version}}, build 1110ad01
```
</div>
<div id="alternatives" class="tab-pane fade" markdown="1">
@ -185,6 +186,7 @@ started.
```console
$ pip install docker-compose
```
If you are not using virtualenv,
```console


@ -175,6 +175,7 @@ networks:
# Use a custom driver
driver: custom-driver-1
```
## Use a pre-existing network
If you want your containers to join a pre-existing network, use the [`external` option](compose-file/compose-file-v2.md#network-configuration-reference):
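Following the version 2 file format documented on the linked page, a minimal sketch might look like this (the network name is illustrative):

```yaml
networks:
  default:
    external:
      name: my-pre-existing-network
```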


@ -90,13 +90,13 @@ add to their predecessors.
For example, consider this command line:
```console
$ docker-compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db
```
The `docker-compose.yml` file might specify a `webapp` service.
```yaml
webapp:
image: examples/web
ports:
@ -109,7 +109,7 @@ If the `docker-compose.admin.yml` also specifies this same service, any matching
fields override the previous file. New values add to the `webapp` service
configuration.
```yaml
webapp:
build: .
environment:
@ -149,7 +149,7 @@ follows: `docker-compose -f ~/sandbox/rails/docker-compose.yml pull db`
Here's the full example:
```console
$ docker-compose -f ~/sandbox/rails/docker-compose.yml pull db
Pulling db (postgres:latest)...
latest: Pulling from library/postgres


@ -191,6 +191,7 @@ most 50% of the CPU every second.
```console
$ docker run -it --cpus=".5" ubuntu /bin/bash
```
Which is the equivalent to manually specifying `--cpu-period` and `--cpu-quota`;
```console


@ -109,8 +109,8 @@ $ dockerd --debug \
You can learn what configuration options are available in the
[dockerd reference docs](../../engine/reference/commandline/dockerd.md), or by running:
```console
$ dockerd --help
```
Many specific configuration options are discussed throughout the Docker


@ -42,8 +42,8 @@ include examples of customizing the output format.
It puts a separator between each element in the list.
{% raw %}
```console
$ docker inspect --format '{{join .Args " , "}}' container
```
{% endraw %}
@ -52,8 +52,8 @@ docker inspect --format '{{join .Args " , "}}' container
`table` specifies which fields you want to see in its output.
{% raw %}
```console
$ docker image list --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}\t{{.Size}}"
```
{% endraw %}
@ -63,8 +63,8 @@ docker image list --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}\t{{.Size}}"
{% raw %}
```console
$ docker inspect --format '{{json .Mounts}}' container
```
{% endraw %}
@ -73,8 +73,8 @@ docker inspect --format '{{json .Mounts}}' container
`lower` transforms a string into its lowercase representation.
{% raw %}
```console
$ docker inspect --format "{{lower .Name}}" container
```
{% endraw %}
@ -83,8 +83,8 @@ docker inspect --format "{{lower .Name}}" container
`split` slices a string into a list of strings separated by a separator.
{% raw %}
```console
$ docker inspect --format '{{split .Image ":"}}'
```
{% endraw %}
@ -93,8 +93,8 @@ docker inspect --format '{{split .Image ":"}}'
`title` capitalizes the first character of a string.
{% raw %}
```console
$ docker inspect --format "{{title .Name}}" container
```
{% endraw %}
@ -103,8 +103,8 @@ docker inspect --format "{{title .Name}}" container
`upper` transforms a string into its uppercase representation.
{% raw %}
```console
$ docker inspect --format "{{upper .Name}}" container
```
{% endraw %}
@ -114,8 +114,8 @@ docker inspect --format "{{upper .Name}}" container
`println` prints each value on a new line.
{% raw %}
```console
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{println .IPAddress}}{{end}}' container
```
{% endraw %}
@ -124,7 +124,7 @@ docker inspect --format='{{range .NetworkSettings.Networks}}{{println .IPAddress
To find out what data can be printed, show all content as json:
{% raw %}
```console
$ docker container ls --format='{{json .}}'
```
{% endraw %}


@ -138,7 +138,7 @@ Your proxy settings, however, will not be propagated into the containers you sta
If you wish to set the proxy settings for your containers, you need to define
environment variables for them, just like you would do on Linux, for example:
```console
$ docker run -e HTTP_PROXY=http://proxy.example.com:3128 alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
@ -251,7 +251,7 @@ $ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.ke
Or, if you prefer to add the certificate to your own local keychain only (rather
than for all users), run this command instead:
```console
$ security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain ca.crt
```


@ -23,7 +23,7 @@ Docker Desktop for {{Arch}} intercepts traffic from the containers and injects i
When you run a container with the `-p` argument, for example:
```console
$ docker run -p 80:80 -d nginx
```
@ -33,7 +33,7 @@ host and container ports are the same. What if you need to specify a different
host port? If, for example, you already have something running on port 80 of
your host machine, you can connect the container to a different port:
```console
$ docker run -p 8000:80 -d nginx
```


@ -41,8 +41,8 @@ Do not move the file directly in Finder as this can cause Docker Desktop to lose
Check whether you have any unnecessary containers and images. If your client and daemon API are running version 1.25 or later (use the `docker version` command on the client to check your client and daemon API versions), you can see the detailed space usage information by running:
```console
$ docker system df -v
```
Alternatively, to list images, run:
@ -72,7 +72,7 @@ It might take a few minutes to reclaim space on the host depending on the format
Space is only freed when images are deleted. Space is not freed automatically when files are deleted inside running containers. To trigger a space reclamation at any point, run the command:
```console
$ docker run --privileged --pid=host docker/desktop-reclaim-space
```
@ -80,8 +80,7 @@ Note that many tools report the maximum file size, not the actual file size.
To query the actual size of the file on the host from a terminal, run:
```console
$ cd ~/Library/Containers/com.docker.docker/Data/vms/0/data
$ ls -klsh Docker.raw
2333548 -rw-r--r--@ 1 username staff 64G Dec 13 17:42 Docker.raw
```
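The gap between the maximum (apparent) size and the actual size is a general property of sparse files, and can be reproduced without Docker; this is a quick local demonstration (file name illustrative):

```bash
# Create a sparse file with a 1 GB apparent size but no allocated data blocks.
truncate -s 1G sparse-demo.img

# ls reports the apparent (maximum) size...
ls -lh sparse-demo.img

# ...while du reports the actual disk usage, which is close to zero.
du -h sparse-demo.img

rm sparse-demo.img
```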


@ -23,7 +23,7 @@ Docker Desktop intercepts traffic from the containers and injects it into
When you run a container with the `-p` argument, for example:
```console
$ docker run -p 80:80 -d nginx
```
@ -33,7 +33,7 @@ host and container ports are the same. What if you need to specify a different
host port? If, for example, you already have something running on port 80 of
your host machine, you can connect the container to a different port:
```console
$ docker run -p 8000:80 -d nginx
```


@ -112,13 +112,13 @@ does not send client certificates to them. Commands like `docker run` that
attempt to pull from the registry produce error messages on the command line,
like this:
```console
Error response from daemon: Get http://192.168.203.139:5858/v2/: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
```
As well as on the registry. For example:
```console
2017/06/20 18:15:30 http: TLS handshake error from 192.168.203.139:52882: tls: client didn't provide a certificate
2017/06/20 18:15:30 http: TLS handshake error from 192.168.203.139:52883: tls: first record does not look like a TLS handshake
```


@ -114,8 +114,8 @@ Starting with Docker Desktop 3.1.0, Docker Desktop supports WSL 2 GPU Paravirtua
To validate that everything works as expected, run the following command to run a short benchmark on your GPU:
```console
$ docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)


@ -579,6 +579,7 @@ build from inadvertently succeeding. For example:
```dockerfile
RUN set -o pipefail && wget -O - https://some.site | wc -l > /number
```
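The exit-status behavior that motivates `set -o pipefail` can be observed directly in any `bash` shell, outside of a Dockerfile:

```bash
# By default, a pipeline's exit status is that of its last command,
# so the failing `false` is masked by `wc -l`:
bash -c 'false | wc -l >/dev/null'
echo "without pipefail: exit $?"

# With pipefail, the pipeline fails if any stage fails:
bash -c 'set -o pipefail; false | wc -l >/dev/null'
echo "with pipefail: exit $?"
```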
> Not all shells support the `-o pipefail` option.
>
> In cases such as the `dash` shell on


@ -227,7 +227,7 @@ Some "testing/helper" scripts are available for testing Linux and Windows Docker
cat ./run_my_application.sh
```
```bash
#!/usr/bin/env bash
docker container run -d \
-p 80:8080 --name tomcat-wildbook \
@ -237,7 +237,7 @@ $1
#### To inspect the Docker image, `gforghetti/tomcat-wildbook:latest`, with a custom startup script and upload the result to Docker Hub (leave out the `-product-id` parameter if you are just testing):
```console
root:[~/] # ./inspectDockerImage --start-script ./run_my_application.sh -product-id=<store-product-id> gforghetti/tomcat-wildbook:latest
```
@ -462,7 +462,7 @@ root:[~/] #
#### To inspect the Docker image, `gforghetti/apache:latest`, with JSON output:
```console
root:[~/] # ./inspectDockerImage --json gforghetti/apache:latest | jq
```
@ -589,7 +589,7 @@ root:[~/] # ./inspectDockerImage --json gforghetti/apache:latest | jq
#### To inspect the Docker image, `gforghetti/apache:latest`, with HTML output:
```console
root:[~/] # ./inspectDockerImage --html gforghetti/apache:latest
```
@ -622,7 +622,7 @@ root:[~/] #
#### To inspect the Docker image, `microsoft/nanoserver:latest`:
```console
PS D:\InspectDockerimage> .\inspectDockerImage microsoft/nanoserver:latest
```


@ -256,9 +256,10 @@ By default, `inspectDockerLoggingPlugin` displays output locally to `stdout` (th
#### To inspect the Docker logging plugin "gforghetti/docker-log-driver-test:latest", and upload the result to Docker Hub (leave out the `-product-id` parameter if you are just testing):
```console
$ ./inspectDockerLoggingPlugin -product-id=<store-product-id> gforghetti/docker-log-driver-test:latest
```
#### Output:
```
@ -360,8 +361,8 @@ gforghetti:~/$
#### To inspect the Docker logging plugin `gforghetti/docker-log-driver-test:latest` with JSON Output:
```console
$ ./inspectDockerLoggingPlugin --json gforghetti/docker-log-driver-test:latest | jq
```
> **Note**: The output was piped to the `jq` command to display it "nicely".
@ -431,8 +432,8 @@ gforghetti:~:$ ./inspectDockerLoggingPlugin --json gforghetti/docker-log-driver-
#### To inspect the Docker logging plugin `gforghetti/docker-log-driver-test:latest` with HTML output:
```console
$ ./inspectDockerLoggingPlugin --html gforghetti/docker-log-driver-test:latest
```
#### Output:
@ -442,7 +443,6 @@ Note: The majority of the stdout message output has been intentionally omitted b
```
The inspection of the Docker logging plugin cpuguy83/docker-logdriver-test:latest has completed.
An HTML report has been generated in the file cpuguy83-docker-logdriver-test-latest_inspection_report.html
```
![HTML Output Image](images/gforghetti-log-driver-latest_inspection_report.html.png)
@ -520,10 +520,11 @@ The **curl** command can be used to test and use the **http_api_endpoint** HTTP
##### Script to run a container to test the Logging Plugin
```console
$ cat test_new_plugin.sh
```
```bash
#!/usr/bin/env bash
#######################################################################################################################################
@ -571,10 +572,11 @@ exit $?
##### Script to retrieve the logging data from the http_api_endpoint HTTP Server
```console
$ cat get_plugin_logs.sh
```
```bash
#!/usr/bin/env sh
#######################################################################################################################################
@ -589,6 +591,6 @@ curl -s -X GET http://127.0.0.1:80
##### To test the Docker logging plugin
```console
$ ./inspectDockerLoggingPlugin --verbose --html --test-script ./test_plugin.sh --get-logs-script ./get_plugin_logs.sh myNamespace/docker-logging-driver:1.0.2
```


@ -7,8 +7,8 @@ prior version of Docker was shipped.
To view the docs offline on your local machine, run:
```console
$ docker run -ti -p 4000:4000 {{ archive.image }}
```
## Accessing unsupported archived documentation
@ -21,6 +21,6 @@ you can still access that documentation in the following ways:
- By running a container of the specific [tag for your documentation version](https://hub.docker.com/r/docs/docker.github.io)
in Docker Hub. For example, run the following to access `v1.9`:
```console
$ docker run -it -p 4000:4000 docs/docker.github.io:v1.9
```


@ -450,6 +450,7 @@ Reticulating spline 3...
Reticulating spline 4...
Reticulating spline 5...
```
</div>
</div><!-- end tab-content -->


@ -40,7 +40,7 @@ A context is a combination of several properties. These include:
The easiest way to see what a context looks like is to view the **default** context.
```console
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current... unix:///var/run/docker.sock swarm
@ -52,7 +52,7 @@ The asterisk in the `NAME` column indicates that this is the active context. Thi
Dig a bit deeper with `docker context inspect`. In this example, we're inspecting the context called `default`.
```console
$ docker context inspect default
[
{
@ -86,7 +86,7 @@ The following example creates a new context called "docker-test" and specifies t
- Default orchestrator = Swarm
- Issue commands to the local Unix socket `/var/run/docker.sock`
```console
$ docker context create docker-test \
--default-stack-orchestrator=swarm \
--docker host=unix:///var/run/docker.sock
@ -102,7 +102,7 @@ You can view the new context with `docker context ls` and `docker context inspec
The following can be used to create a config with Kubernetes as the default orchestrator using the existing kubeconfig stored in `/home/ubuntu/.kube/config`. For this to work, you will need a valid kubeconfig file in `/home/ubuntu/.kube/config`. If your kubeconfig has more than one context, the current context (`kubectl config current-context`) will be used.
```console
$ docker context create k8s-test \
--default-stack-orchestrator=kubernetes \
--kubernetes config-file=/home/ubuntu/.kube/config \
@ -113,7 +113,7 @@ Successfully created context "k8s-test"
You can view all contexts on the system with `docker context ls`.
```console
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current unix:///var/run/docker.sock https://35.226.99.100 (default) swarm
@ -129,7 +129,7 @@ You can use `docker context use` to quickly switch between contexts.
The following command will switch the `docker` CLI to use the "k8s-test" context.
```console
$ docker context use k8s-test
k8s-test
@ -138,7 +138,7 @@ Current context is now "k8s-test"
Verify the operation by listing all contexts and ensuring the asterisk ("\*") is against the "k8s-test" context.
```console
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://35.226.99.100 (default) swarm
@ -154,13 +154,13 @@ Use the appropriate command below to set the context to `docker-test` using an e
Windows PowerShell:
```console
> $Env:DOCKER_CONTEXT=docker-test
```
Linux:
```console
$ export DOCKER_CONTEXT=docker-test
```
@ -168,7 +168,7 @@ Run a `docker context ls` to verify that the "docker-test" context is now the ac
You can also use the global `--context` flag to override the context specified by the `DOCKER_CONTEXT` environment variable. For example, the following will send the command to a context called "production".
```console
$ docker --context production container ls
```
@ -188,21 +188,21 @@ Let's look at exporting and importing a native Docker context.
The following example exports an existing context called "docker-test". It will be written to a file called `docker-test.dockercontext`.
```console
$ docker context export docker-test
Written file "docker-test.dockercontext"
```
Check the contents of the export file.
```console
$ cat docker-test.dockercontext
meta.json0000644000000000000000000000022300000000000011023 0ustar0000000000000000{"Name":"docker-test","Metadata":{"StackOrchestrator":"swarm"},"Endpoints":{"docker":{"Host":"unix:///var/run/docker.sock","SkipTLSVerify":false}}}tls0000700000000000000000000000000000000000000007716 5ustar0000000000000000
```
This file can be imported on another host using `docker context import`. The target host must have the Docker client installed.
```console
$ docker context import docker-test docker-test.dockercontext
docker-test
Successfully imported context "docker-test"
@ -220,14 +220,14 @@ You can export a Kubernetes context only if the context you are exporting has a
These steps will use the `--kubeconfig` flag to export **only** the Kubernetes elements of the existing `k8s-test` context to a file called "k8s-test.kubeconfig". The `cat` command will then show that it's exported as a valid kubeconfig file.
```console
$ docker context export k8s-test --kubeconfig
Written file "k8s-test.kubeconfig"
```
Verify that the exported file contains a valid kubectl config.
```console
$ cat k8s-test.kubeconfig
apiVersion: v1
clusters:
@ -265,7 +265,7 @@ You can use `docker context update` to update fields in an existing context.
The following example updates the "Description" field in the existing `k8s-test` context.
```console
$ docker context update k8s-test --description "Test Kubernetes cluster"
k8s-test
Successfully updated context "k8s-test"


@ -68,7 +68,7 @@ it initializes a new project based on the Compose file.
Use the following command to initialize a new empty project called "hello-world".
```console
$ docker app init hello-world
Created "hello-world.dockerapp"
```
@ -82,7 +82,7 @@ project with `.dockerapp` appended, and the three YAML files are:
Inspect the YAML files with the following commands.
```console
$ cd hello-world.dockerapp/
$ cat docker-compose.yml
@ -116,19 +116,19 @@ This section describes editing the project YAML files so that it runs a simple w
Use your preferred editor to edit the `docker-compose.yml` YAML file and update it with
the following information:
```yaml
version: "3.6"
services:
hello:
image: hashicorp/http-echo
command: ["-text", "${hello.text}"]
ports:
- "${hello.port}:5678"
```
Update the `parameters.yml` file to the following:
```yaml
hello:
port: 8080
text: Hello world!
@ -147,7 +147,7 @@ It is a quick way to check how to configure the application before deployment, w
the `Compose file`. It's important to note that the application is not running at this point, and that
the `inspect` operation inspects the configuration file(s).
```console
$ docker app inspect hello-world.dockerapp
hello-world 0.1.0
@ -171,7 +171,7 @@ The application is ready to be validated and rendered.
Docker App provides the `validate` subcommand to check syntax and other aspects of the configuration.
If the app passes validation, the command returns no arguments.
```console
$ docker app validate hello-world.dockerapp
Validated "hello-world.dockerapp"
```
@ -199,7 +199,7 @@ Use `docker app install` to deploy the application.
Use the following command to deploy (install) the application.
```console
$ docker app install hello-world.dockerapp --name my-app
Creating network my-app_default
Creating service my-app_hello
@ -211,7 +211,7 @@ installation container and as a target context to deploy the application. You ca
using the flag `--target-context` or by using the environment variable `DOCKER_TARGET_CONTEXT`. This flag is also
available for the commands `status`, `upgrade`, and `uninstall`.
```console
$ docker app install hello-world.dockerapp --name my-app --target-context=my-big-production-cluster
Creating network my-app_default
Creating service my-app_hello
@ -223,7 +223,7 @@ valid if they are deployed on different target contexts.
You can check the status of the app with the `docker app status <app-name>` command.
```console
$ docker app status my-app
INSTALLATION
------------
@ -254,7 +254,7 @@ miqdk1v7j3zk my-app_hello replicated 1/1 hashicorp/http-echo:la
The app is deployed using the stack orchestrator. This means you can also inspect it using the regular `docker stack` commands.
```console
$ docker stack ls
NAME SERVICES ORCHESTRATOR
my-app 1 Swarm
@ -265,7 +265,8 @@ port 8080 and see the app. You must ensure traffic to port 8080 is allowed on
the connection from your browser to your Docker host.
Now change the port of the application using `docker app upgrade <app-name>` command.
```console
$ docker app upgrade my-app --set hello.port=8181
Upgrading service my-app_hello
Application "my-app" upgraded on context "default"
@ -286,13 +287,13 @@ Rendering is the process of reading the entire application configuration and out
Use the following command to render the app to a Compose file called `docker-compose.yml` in the current directory.
```console
$ docker app render --output docker-compose.yml hello-world.dockerapp
```
Check the contents of the resulting `docker-compose.yml` file.
```console
$ cat docker-compose.yml
version: "3.6"
services:
@ -315,7 +316,7 @@ section of the project's YAML file. For example, `${hello.text}` has been expand
Try to render the application with a different text:
```console
$ docker app render hello-world.dockerapp --set hello.text="Hello whales!"
version: "3.6"
services:
@ -333,7 +334,7 @@ services:
Use `docker-compose up` to deploy the app.
```console
$ docker-compose up --detach
WARNING: The Docker Engine you're using is running in swarm mode.
<Snip>
@ -354,7 +355,7 @@ Deploying the app as a Docker stack is a two-step process very similar to deploy
Complete the steps in the previous section to render the Docker app project as a Compose file and make sure
you're ready to deploy it as a Docker Stack. Your Docker host must be in Swarm mode.
```console
$ docker stack deploy hello-world-app -c docker-compose.yml
Creating network hello-world-app_default
Creating service hello-world-app_hello
@ -394,7 +395,7 @@ $ docker app push my-app --platform="linux/amd64" --tag <hub-id>/<repo>:0.1.0
Now that the app is pushed to the registry, try an `inspect` and `install` command against it.
The location of your app is different from the one provided in the examples.
```console
$ docker app inspect myuser/hello-world:0.1.0
hello-world 0.1.0
@ -412,7 +413,7 @@ This action was performed directly against the app in the registry.
Now install it as a native Docker App by referencing the app in the registry, with a different port.
```console
$ docker app install myuser/hello-world:0.1.0 --set hello.port=8181
Creating network hello-world_default
Creating service hello-world_hello
@ -424,14 +425,14 @@ Test that the app is working.
The app used in these examples is a simple web server that displays the text "Hello world!" on port 8181,
your app might be different.
```console
$ curl http://localhost:8181
Hello world!
```
Uninstall the app.
```console
$ docker app uninstall hello-world
Removing service hello-world_hello
Removing network hello-world_default


@ -192,6 +192,7 @@ Run the following command to get the current value of the `MountFlags` property
$ sudo systemctl show --property=MountFlags docker.service
MountFlags=
```
Update your configuration if this command prints a non-empty value for `MountFlags`, and restart the docker service.
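One way to update the configuration is a systemd drop-in that clears the property; the path and file name here are illustrative, not taken from the release notes:

```ini
# /etc/systemd/system/docker.service.d/mountflags.conf
[Service]
MountFlags=
```

After adding the drop-in, run `sudo systemctl daemon-reload` followed by `sudo systemctl restart docker`.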
### Security fixes


@ -205,6 +205,7 @@ $ docker scan --json --group-issues docker-scan:e2e
"path": "docker-scan:e2e"
}
```
You can find all the sources of the vulnerability in the `from` section.
### Checking the dependency tree


@ -176,6 +176,7 @@ If `dockerd-rootless-setuptool.sh` is not present, you may need to install the `
```console
$ sudo apt-get install -y docker-ce-rootless-extras
```
</div>
<div id="install-without-packages" class="tab-pane fade in" markdown="1">
If you do not have permission to run package managers like `apt-get` and `dnf`,


@ -120,7 +120,7 @@ the reason each syscall is blocked rather than white-listed.
You can pass `unconfined` to run a container without the default seccomp
profile.
```
```console
$ docker run --rm -it --security-opt seccomp=unconfined debian:jessie \
unshare --map-root-user --user sh -c whoami
```


@ -130,7 +130,7 @@ is automatically added to the local trust store. If you are importing a separate
key, you will need to use the
`$ docker trust key load` command.
```
```console
$ docker trust key generate jeff
Generating key for jeff...
Enter passphrase for new jeff key with ID 9deed25:
@ -140,7 +140,7 @@ Successfully generated and loaded private key. Corresponding public key availabl
Or if you have an existing key:
```
```console
$ docker trust key load key.pem --name jeff
Loading key from "key.pem"...
Enter passphrase for new jeff key with ID 8ae710e:
@ -156,7 +156,7 @@ canonical root key. To understand more about initiating a repository, and the
role of delegations, head to
[delegations for content trust](trust_delegation.md).
```
```console
$ docker trust signer add --key cert.pem jeff registry.example.com/admin/demo
Adding signer "jeff" to registry.example.com/admin/demo...
Enter passphrase for new repository key with ID 10b5e94:
@ -165,7 +165,7 @@ Enter passphrase for new repository key with ID 10b5e94:
Finally, we will use the delegation private key to sign a particular tag and
push it up to the registry.
```
```console
$ docker trust sign registry.example.com/admin/demo:1
Signing and pushing trust data for local image registry.example.com/admin/demo:1, may overwrite remote trust data
The push refers to repository [registry.example.com/admin/demo]
@ -179,7 +179,7 @@ Successfully signed registry.example.com/admin/demo:1
Alternatively, once the keys have been imported, an image can be pushed with the
`$ docker push` command by exporting the DCT environment variable.
```
```console
$ export DOCKER_CONTENT_TRUST=1
$ docker push registry.example.com/admin/demo:1
@ -194,7 +194,7 @@ Successfully signed registry.example.com/admin/demo:1
Remote trust data for a tag or a repository can be viewed by the
`$ docker trust inspect` command:
```
```console
$ docker trust inspect --pretty registry.example.com/admin/demo:1
Signatures for registry.example.com/admin/demo:1
@ -215,7 +215,7 @@ Administrative keys for registry.example.com/admin/demo:1
Remote trust data for a tag can be removed with the `$ docker trust revoke` command:
```
```console
$ docker trust revoke registry.example.com/admin/demo:1
Enter passphrase for signer key with ID 8ae710e:
Successfully deleted signature for registry.example.com/admin/demo:1
@ -241,7 +241,7 @@ For example, with DCT enabled a `docker pull someimage:latest` only
succeeds if `someimage:latest` is signed. However, an operation with an explicit
content hash always succeeds as long as the hash exists:
```
```console
$ docker pull registry.example.com/user/image:1
Error: remote trust data does not exist for registry.example.com/user/image: registry.example.com does not have trust data for registry.example.com/user/image


@ -20,7 +20,7 @@ To automate importing a delegation private key to the local Docker trust store,
need to pass a passphrase for the new key. This passphrase will be required
every time that delegation signs a tag.
```
```console
$ export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="mypassphrase123"
$ docker trust key load delegation.key --name jeff
@ -35,7 +35,7 @@ public key, then you will need to use the local Notary Canonical Root Key's
passphrase to create the repository's trust data. If the repository has already
been initiated, you only need the repository's passphrase.
```
```console
# Export the Local Root Key Passphrase if required.
$ export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE="rootpassphrase123"
@ -56,7 +56,7 @@ Finally when signing an image, we will need to export the passphrase of the
signing key. This was created when the key was loaded into the local Docker
trust store with `$ docker trust key load`.
```
```console
$ export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="mypassphrase123"
$ docker trust sign registry.example.com/admin/demo:1


@ -28,28 +28,28 @@ server URL is the same as the registry URL. However, for self-hosted
environments or 3rd party registries, you will need to specify an alternative
URL for the notary server. This is done with:
```
export DOCKER_CONTENT_TRUST_SERVER=https://<URL>:<PORT>
```console
$ export DOCKER_CONTENT_TRUST_SERVER=https://<URL>:<PORT>
```
If you do not export this variable in self-hosted environments, you may see
errors such as:
```
```console
$ docker trust signer add --key cert.pem jeff registry.example.com/admin/demo
Adding signer "jeff" to registry.example.com/admin/demo...
[...]
<...>
Error: trust data missing for remote repository registry.example.com/admin/demo or remote repository not found: timestamp key trust data unavailable. Has a notary repository been initialized?
$ docker trust inspect registry.example.com/admin/demo --pretty
WARN[0000] Error while downloading remote metadata, using cached timestamp - this might not be the latest version available remotely
[...]
<...>
```
If you have enabled authentication for your notary server, or are using DTR, you will need to log in
before you can push data to the notary server.
```
```console
$ docker login registry.example.com/user/repo
Username: admin
Password:


@ -79,7 +79,7 @@ This loss also requires **manual intervention** from every consumer that pulled
the tagged image prior to the loss. Image consumers would get an error for
content that they already downloaded:
```
```console
Warning: potential malicious behavior - trust data has insufficient signatures for remote repository docker.io/my/image: valid signatures did not meet threshold
```


@ -244,6 +244,7 @@ This example assumes that you have PowerShell installed.
</body>
</html>
```
2. If you have not already done so, initialize or join the swarm.
```powershell


@ -276,6 +276,7 @@ This example assumes that you have PowerShell installed.
</body>
</html>
```
2. If you have not already done so, initialize or join the swarm.
```powershell


@ -132,22 +132,25 @@ The credential spec contained in the specified `config` is used.
The following simple example retrieves the gMSA name and JSON contents from your Active Directory (AD) instance:
```
name="mygmsa"
contents="{...}"
echo $contents > contents.json
```console
$ name="mygmsa"
$ contents="{...}"
$ echo $contents > contents.json
```
Make sure that the nodes to which you are deploying are correctly configured for the gMSA.
To use a Config as a credential spec, first create a Docker Config from a credential spec file named `credspec.json`.
You can specify any name for the `config`.
```console
$ docker config create --label com.docker.gmsa.name=mygmsa credspec credspec.json
```
docker config create --label com.docker.gmsa.name=mygmsa credspec credspec.json
```
Now you can create a service using this credential spec. Specify the `--credential-spec` flag with the config name:
```
docker service create --credential-spec="config://credspec" <your image>
```console
$ docker service create --credential-spec="config://credspec" <your image>
```
Your service uses the gMSA credential spec when it starts, but unlike a typical Docker Config (used by passing the `--config` flag), the credential spec is not mounted into the container.


@ -63,6 +63,7 @@ To add a worker to this swarm, run the following command:
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
### Configuring default address pools
By default Docker Swarm uses a default address pool `10.0.0.0/8` for global scope (overlay) networks. Every


@ -117,7 +117,8 @@ docker run -dp 3000:3000 `
ports:
- 3000:3000
```
4. Next, we'll migrate both the working directory (`-w /app`) and the volume mapping (`-v "$(pwd):/app"`) by using
5. Next, we'll migrate both the working directory (`-w /app`) and the volume mapping (`-v "$(pwd):/app"`) by using
the `working_dir` and `volumes` definitions. Volumes also has a [short](../compose/compose-file/compose-file-v3.md#short-syntax-3) and [long](../compose/compose-file/compose-file-v3.md#long-syntax-3) syntax.
One advantage of Docker Compose volume definitions is we can use relative paths from the current directory.
@ -136,7 +137,7 @@ docker run -dp 3000:3000 `
- ./:/app
```
5. Finally, we need to migrate the environment variable definitions using the `environment` key.
6. Finally, we need to migrate the environment variable definitions using the `environment` key.
```yaml
version: "3.7"


@ -107,8 +107,9 @@ You'll notice a few flags being used. Here's some more info on them:
>
> You can combine single character flags to shorten the full command.
> As an example, the command above could be written as:
> ```
> docker run -dp 80:80 docker/getting-started
>
> ```console
> $ docker run -dp 80:80 docker/getting-started
> ```
## The Docker Dashboard


@ -236,9 +236,7 @@ Let's build our first Docker image!
```console
$ docker build --tag docker-gs-ping .
```
```
[+] Building 3.6s (12/12) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 38B 0.0s
@ -273,9 +271,7 @@ To list images, simply run the `images` command:
```console
$ docker images
```
```
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-gs-ping latest 336a3f164d0f 39 minutes ago 540MB
postgres 13.2 c5ec7353d87d 7 weeks ago 314MB
@ -301,9 +297,7 @@ Now run the `docker images` command to see the updated list of local images:
```console
$ docker images
```
```
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-gs-ping latest 336a3f164d0f 43 minutes ago 540MB
docker-gs-ping v1.0 336a3f164d0f 43 minutes ago 540MB
@ -323,9 +317,7 @@ Notice that the response from Docker tells us that the image has not been remove
```console
$ docker images
```
```
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-gs-ping latest 336a3f164d0f 45 minutes ago 540MB
postgres 13.2 c5ec7353d87d 7 weeks ago 314MB
@ -385,7 +377,9 @@ $ docker build -t docker-gs-ping:multistage -f Dockerfile.multistage .
Comparing the sizes of `docker-gs-ping:multistage` and `docker-gs-ping:latest` we see an order-of-magnitude difference!
```
```console
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-gs-ping multistage e3fdde09f172 About a minute ago 27.1MB
docker-gs-ping latest 336a3f164d0f About an hour ago 540MB


@ -132,8 +132,8 @@ $ docker exec -it roach ./cockroach sql --insecure
An example of interaction with the SQL shell is presented below.
{% raw %}
```
oliver@hki:~$ sudo docker exec -it roach ./cockroach sql --insecure
```console
$ sudo docker exec -it roach ./cockroach sql --insecure
#
# Welcome to the CockroachDB SQL shell.
# All statements must be terminated by a semicolon.
@ -580,7 +580,7 @@ This Docker Compose configuration is super convenient as we do not have to type
Docker Compose will automatically read environment variables from a `.env` file if it is available. Since our Compose file requires `PGPASSWORD` to be set, we add the following content to the `.env` file:
```
```bash
PGPASSWORD=whatever
```
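For example, a service might consume the interpolated variable like this — a sketch, with the service and image names assumed:

```yaml
services:
  db:
    image: postgres
    environment:
      # Interpolated from PGPASSWORD in the shell or the .env file.
      POSTGRES_PASSWORD: ${PGPASSWORD}
```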


@ -92,9 +92,7 @@ Since we ran our container in the background, how do we know if our container is
```console
$ docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d75e61fcad1e docker-gs-ping "/docker-gs-ping" 41 seconds ago Up 40 seconds 0.0.0.0:8080->8080/tcp inspiring_ishizaka
```
@ -112,6 +110,7 @@ Now rerun the `docker ps` command to see a list of running containers.
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
@ -120,10 +119,8 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker containers can be started, stopped, and restarted. When we stop a container, it is not removed; its status changes to stopped and the process inside the container is stopped. By default, the `docker ps` command only shows running containers. If we pass `--all` (or `-a` for short), we see all containers on our system, both stopped and running.
```console
$ docker ps -a
```
$ docker ps --all
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d75e61fcad1e docker-gs-ping "/docker-gs-ping" About a minute ago Exited (2) 23 seconds ago inspiring_ishizaka
f65dbbb9a548 docker-gs-ping "/docker-gs-ping" 3 minutes ago Exited (2) 2 minutes ago wizardly_joliot
@ -142,10 +139,8 @@ $ docker restart inspiring_ishizaka
Now, list all the containers again using the `ps` command:
```console
$ docker ps --all
```
$ docker ps -a
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d75e61fcad1e docker-gs-ping "/docker-gs-ping" 2 minutes ago Up 5 seconds 0.0.0.0:8080->8080/tcp inspiring_ishizaka
f65dbbb9a548 docker-gs-ping "/docker-gs-ping" 4 minutes ago Exited (2) 2 minutes ago wizardly_joliot
@ -172,9 +167,7 @@ Again, make sure you replace the containers names in the below command with the
```console
$ docker rm inspiring_ishizaka wizardly_joliot magical_carson gifted_mestorf
```
```
inspiring_ishizaka
wizardly_joliot
magical_carson
@ -194,9 +187,7 @@ $ docker run -d -p 8080:8080 --name rest-server docker-gs-ping
```console
$ docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bbc6a3102ea docker-gs-ping "/docker-gs-ping" 25 seconds ago Up 24 seconds 0.0.0.0:8080->8080/tcp rest-server
```


@ -84,9 +84,6 @@ That was a bit... underwhelming? Let's ask it to print a bit more detail, just t
```console
$ go test -v ./...
```
```
=== RUN TestRespondsWithLove
main_test.go:47: container not ready, waiting...
--- PASS: TestRespondsWithLove (5.24s)


@ -255,7 +255,7 @@ The Docker tag command creates a new tag for an image. It does not create a new
Now run the `docker images` command to see a list of our local images.
```
```console
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
node-docker latest 3809733582bc 24 minutes ago 945MB


@ -93,7 +93,7 @@ $ curl --request POST \
Since we ran our container in the background, how do we know if it is running, or what other containers are running on our machine? Just as we would run the `ps` command on Linux to see a list of processes, we can run the `docker ps` command to see a list of containers running on our machine.
```
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 0.0.0.0:8000->8000/tcp wonderful_kalam


@ -93,7 +93,7 @@ Reboot your desktop system to clear out any routing table problems. Without a re
which asks you to create [three networked host machines](../../engine/swarm/swarm-tutorial/index.md#three-networked-host-machines),
you can create these swarm nodes: `manager1`, `worker1`, `worker2`.
* Use the Microsoft Hyper-V driver and reference the new virtual switch you created.
* Use the Microsoft Hyper-V driver and reference the new virtual switch you created.
```console
$ docker-machine create -d hyperv --hyperv-virtual-switch <NameOfVirtualSwitch> <nameOfNode>
@ -127,7 +127,8 @@ Reboot your desktop system to clear out any routing table problems. Without a re
ker\Docker\Resources\bin\docker-machine.exe env manager1
PS C:\WINDOWS\system32>
```
* Use the same process, driver, and network switch to create the other nodes.
* Use the same process, driver, and network switch to create the other nodes.
For our example, the commands are:


@ -46,5 +46,6 @@ You can also call `fuserunmount` (or `fusermount -u`) commands directly.
$ docker-machine mount -u dev:/home/docker/foo foo
$ rmdir foo
```
**Files are actually being stored on the machine, *not* on the host.**
So make sure to copy any files you want to keep before removing the machine!


@ -16,7 +16,7 @@ address.
> take effect, and the `-p`, `--publish`, `-P`, and `--publish-all` option are
> ignored, producing a warning instead:
>
> ```
> ```console
> WARNING: Published ports are discarded when using host network mode
> ```


@ -100,7 +100,7 @@ gives a clear indication of items eligible for deletion.
The config.yml file should be in the following format:
```
```yaml
version: 0.1
storage:
filesystem:


@ -159,7 +159,7 @@ request coming to an endpoint.
An example of a full event may look as follows:
```
```http
GET /callback HTTP/1.1
Host: application/vnd.docker.distribution.events.v1+json
Authorization: Bearer <your token, if needed>


@ -84,6 +84,7 @@ The following AWS policy is required by the registry for push and pull. Make sur
]
}
```
See [the S3 policy documentation](http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html) for more details.
# CloudFront as Middleware with S3 backend
@ -135,7 +136,7 @@ are still directly written to S3.
The following example shows a minimum configuration:
```
```yaml
...
storage:
s3:


@ -127,7 +127,7 @@ configure this app to use our SQL Server database, and then create a
> variable below to the one you defined in the `docker-compose.yml` file.
```csharp
[...]
<...>
public void ConfigureServices(IServiceCollection services)
{
// Database connection string.
@ -149,7 +149,7 @@ configure this app to use our SQL Server database, and then create a
services.AddTransient<IEmailSender, AuthMessageSender>();
services.AddTransient<ISmsSender, AuthMessageSender>();
}
[...]
<...>
```
1. Go to `app.csproj`. You see a line like:


@ -73,6 +73,7 @@ ENTRYPOINT ["dotnet", "aspnetapp.dll"]
bin/
obj/
```
### Method 2 (build app outside Docker container):
1. Create a `Dockerfile` in your project folder.


@ -50,6 +50,7 @@ Create an empty `Gemfile.lock` file to build our `Dockerfile`.
```console
$ touch Gemfile.lock
```
Next, provide an entrypoint script to fix a Rails-specific issue that
prevents the server from restarting when a certain `server.pid` file pre-exists.
This script will be executed every time the container gets started.
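The script can be as small as removing the stale pid file and handing off to the container's main process — a sketch, with the app path (`/myapp`) assumed to match your image's `WORKDIR`:

```shell
#!/bin/sh
set -e

# Remove a potentially pre-existing server.pid for Rails.
# The /myapp path is an assumption; adjust it to your WORKDIR.
rm -f /myapp/tmp/pids/server.pid

# Then exec the container's main process (whatever CMD is set to).
exec "$@"
```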
@ -184,14 +185,12 @@ test:
database: myapp_test
```
You can now boot the app with [docker-compose up](../compose/reference/up.md):
You can now boot the app with [docker-compose up](../compose/reference/up.md).
If all is well, you should see some PostgreSQL output:
```console
$ docker-compose up
```
If all's well, you should see some PostgreSQL output.
```bash
rails_db_1 is up-to-date
Creating rails_web_1 ... done
Attaching to rails_db_1, rails_web_1
@ -208,12 +207,6 @@ Finally, you need to create the database. In another terminal, run:
```console
$ docker-compose run web rake db:create
```
Here is an example of the output from that command:
```console
$ docker-compose run web rake db:create
Starting rails_db_1 ... done
Created database 'myapp_development'
Created database 'myapp_test'


@ -511,6 +511,7 @@ $ docker volume create \
--opt o=addr=uxxxxx.your-server.de,username=uxxxxxxx,password=*****,file_mode=0777,dir_mode=0777 \
--name cif-volume
```
Notice that the `addr` option is required if you use a hostname instead of an IP, so Docker can perform the hostname lookup.
## Backup, restore, or migrate data volumes
@ -522,7 +523,7 @@ Volumes are useful for backups, restores, and migrations. Use the
For example, create a new container named `dbstore`:
```
```console
$ docker run -v /dbdata --name dbstore ubuntu /bin/bash
```
@ -532,7 +533,7 @@ Then in the next command, we:
- Mount a local host directory as `/backup`
- Pass a command that tars the contents of the `dbdata` volume to a `backup.tar` file inside our `/backup` directory.
```
```console
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
```
@ -546,13 +547,13 @@ another that you made elsewhere.
For example, create a new container named `dbstore2`:
```
```console
$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
```
Then un-tar the backup file in the new container's data volume:
```
```console
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
```
@ -573,7 +574,7 @@ To automatically remove anonymous volumes, use the `--rm` option. For example,
this command creates an anonymous `/foo` volume. When the container is removed,
the Docker Engine removes the `/foo` volume but not the `awesome` volume.
```
```console
$ docker run --rm -v /foo -v awesome:/bar busybox top
```
@ -581,7 +582,7 @@ $ docker run --rm -v /foo -v awesome:/bar busybox top
To remove all unused volumes and free up space:
```
```console
$ docker volume prune
```