Update docker cli reference for 19.03

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn 2019-05-10 18:24:37 -07:00
parent e537dabb46
commit d6dd777bb0
GPG Key ID: 76698F39D527CE8C
55 changed files with 1395 additions and 444 deletions

View File

@ -7,6 +7,7 @@ cname:
- docker commit
- docker config
- docker container
- docker context
- docker cp
- docker create
- docker deploy
@ -66,6 +67,7 @@ clink:
- docker_commit.yaml
- docker_config.yaml
- docker_container.yaml
- docker_context.yaml
- docker_cp.yaml
- docker_create.yaml
- docker_deploy.yaml

View File

@ -16,8 +16,8 @@ long: |-
To stop a container, use `CTRL-c`. This key sequence sends `SIGKILL` to the
container. If `--sig-proxy` is true (the default), `CTRL-c` sends a `SIGINT` to
the container. You can detach from a container and leave it running using the
`CTRL-p CTRL-q` key sequence.
the container. If the container was run with `-i` and `-t`, you can detach from
a container and leave it running using the `CTRL-p CTRL-q` key sequence.
> **Note:**
> A process running as PID 1 inside a container is treated specially by
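A minimal sketch of the attach/detach flow described above; the container name `my-container` and the `ubuntu` image are placeholders:

```bash
# Start a container with stdin kept open and a TTY allocated (-i -t),
# so the CTRL-p CTRL-q detach sequence is available
$ docker run -dit --name my-container ubuntu

# Attach to the running container
$ docker attach my-container

# Press CTRL-p CTRL-q to detach; the container keeps running
$ docker ps --filter name=my-container
```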

View File

@ -284,6 +284,17 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: output
shorthand: o
value_type: stringArray
default_value: '[]'
description: 'Output destination (format: type=local,dest=path)'
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: platform
value_type: string
description: Set platform if server is multi-platform capable
@ -551,13 +562,13 @@ examples: "### Build with PATH\n\n```bash\n$ docker build .\n\nUploading context
is preserved with this method.\n\nThe `--squash` option is an experimental feature,
and should not be considered\nstable.\n\n\nSquashing layers can be beneficial if
your Dockerfile produces multiple layers\nmodifying the same files, for example,
file that are created in one step, and\nremoved in another step. For other use-cases,
files that are created in one step, and\nremoved in another step. For other use-cases,
squashing images may actually have\na negative impact on performance; when pulling
an image consisting of multiple\nlayers, layers can be pulled in parallel, and allows
sharing layers between\nimages (saving space).\n\nFor most use cases, multi-stage
are a better alternative, as they give more\nfine-grained control over your build,
and can take advantage of future\noptimizations in the builder. Refer to the [use
multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/)\nsection
builds are a better alternative, as they give more\nfine-grained control over your
build, and can take advantage of future\noptimizations in the builder. Refer to
the [use multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/)\nsection
in the userguide for more information.\n\n\n#### Known limitations\n\nThe `--squash`
option has a number of known limitations:\n\n- When squashing layers, the resulting
image cannot take advantage of layer\n sharing with other images, and may use significantly
@ -568,7 +579,7 @@ examples: "### Build with PATH\n\n```bash\n$ docker build .\n\nUploading context
\ impact on performance, as a single layer takes longer to extract, and\n downloading
a single layer cannot be parallelized.\n- When attempting to squash an image that
does not make changes to the\n filesystem (for example, the Dockerfile only contains
`ENV` instructions),\n the squash step will fail (see [issue #33823](https://github.com/moby/moby/issues/33823)\n\n####
`ENV` instructions),\n the squash step will fail (see [issue #33823](https://github.com/moby/moby/issues/33823)).\n\n####
Prerequisites\n\nThe example on this page is using experimental mode in Docker 1.13.\n\nExperimental
mode can be enabled by using the `--experimental` flag when starting the Docker
daemon or setting `experimental: true` in the `daemon.json` configuration file.\n\nBy
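As a hedged illustration of the prerequisite described above, experimental mode can be enabled in `daemon.json` (the `/etc/docker/daemon.json` path assumes the default Linux location) before building with `--squash`; the image name `test` is a placeholder:

```bash
# Enable experimental features on the daemon, then restart the daemon
$ cat /etc/docker/daemon.json
{
  "experimental": true
}

# Squash the newly built layers into a single new layer
$ docker build --squash -t test .
```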

View File

@ -5,10 +5,13 @@ usage: docker builder
pname: docker
plink: docker.yaml
cname:
- docker builder build
- docker builder prune
clink:
- docker_builder_build.yaml
- docker_builder_prune.yaml
deprecated: false
min_api_version: "1.31"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -0,0 +1,335 @@
command: docker builder build
short: Build an image from a Dockerfile
long: Build an image from a Dockerfile
usage: docker builder build [OPTIONS] PATH | URL | -
pname: docker builder
plink: docker_builder.yaml
options:
- option: add-host
value_type: list
description: Add a custom host-to-IP mapping (host:ip)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: build-arg
value_type: list
description: Set build-time variables
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cache-from
value_type: stringSlice
default_value: '[]'
description: Images to consider as cache sources
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cgroup-parent
value_type: string
description: Optional parent cgroup for the container
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: compress
value_type: bool
default_value: "false"
description: Compress the build context using gzip
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpu-period
value_type: int64
default_value: "0"
description: Limit the CPU CFS (Completely Fair Scheduler) period
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpu-quota
value_type: int64
default_value: "0"
description: Limit the CPU CFS (Completely Fair Scheduler) quota
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpu-shares
shorthand: c
value_type: int64
default_value: "0"
description: CPU shares (relative weight)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpuset-cpus
value_type: string
description: CPUs in which to allow execution (0-3, 0,1)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpuset-mems
value_type: string
description: MEMs in which to allow execution (0-3, 0,1)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: disable-content-trust
value_type: bool
default_value: "true"
description: Skip image verification
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: file
shorthand: f
value_type: string
description: Name of the Dockerfile (Default is 'PATH/Dockerfile')
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: force-rm
value_type: bool
default_value: "false"
description: Always remove intermediate containers
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: iidfile
value_type: string
description: Write the image ID to the file
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: isolation
value_type: string
description: Container isolation technology
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: label
value_type: list
description: Set metadata for an image
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: memory
shorthand: m
value_type: bytes
default_value: "0"
description: Memory limit
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: memory-swap
value_type: bytes
default_value: "0"
description: |
Swap limit equal to memory plus swap: '-1' to enable unlimited swap
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
description: |
Set the networking mode for the RUN instructions during build
deprecated: false
min_api_version: "1.25"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: no-cache
value_type: bool
default_value: "false"
description: Do not use cache when building the image
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: output
shorthand: o
value_type: stringArray
default_value: '[]'
description: 'Output destination (format: type=local,dest=path)'
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: platform
value_type: string
description: Set platform if server is multi-platform capable
deprecated: false
min_api_version: "1.32"
experimental: true
experimentalcli: false
kubernetes: false
swarm: false
- option: progress
value_type: string
default_value: auto
description: |
Set type of progress output (auto, plain, tty). Use plain to show container output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: pull
value_type: bool
default_value: "false"
description: Always attempt to pull a newer version of the image
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Suppress the build output and print image ID on success
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: rm
value_type: bool
default_value: "true"
description: Remove intermediate containers after a successful build
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: secret
value_type: stringArray
default_value: '[]'
description: |
Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret
deprecated: false
min_api_version: "1.39"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: security-opt
value_type: stringSlice
default_value: '[]'
description: Security options
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: shm-size
value_type: bytes
default_value: "0"
description: Size of /dev/shm
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: squash
value_type: bool
default_value: "false"
description: Squash newly built layers into a single new layer
deprecated: false
min_api_version: "1.25"
experimental: true
experimentalcli: false
kubernetes: false
swarm: false
- option: ssh
value_type: stringArray
default_value: '[]'
description: |
SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]])
deprecated: false
min_api_version: "1.39"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: stream
value_type: bool
default_value: "false"
description: Stream attaches to server to negotiate build context
deprecated: false
min_api_version: "1.31"
experimental: true
experimentalcli: false
kubernetes: false
swarm: false
- option: tag
shorthand: t
value_type: list
description: Name and optionally a tag in the 'name:tag' format
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: target
value_type: string
description: Set the target build stage to build.
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: ulimit
value_type: ulimit
default_value: '[]'
description: Ulimit options
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
min_api_version: "1.31"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
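For illustration only, a sketch combining several of the options listed above; the image names, Dockerfile path, and output directory are placeholders:

```bash
# Build from the current directory and tag the result
$ docker builder build -t myapp:latest .

# Use an alternative Dockerfile and pass a build-time variable
$ docker builder build -f Dockerfile.prod --build-arg VERSION=1.0 -t myapp:1.0 .

# Export the build result to a local directory instead of the image store
# (requires BuildKit; API version 1.40 or later)
$ docker builder build -o type=local,dest=./out .
```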

View File

@ -18,6 +18,7 @@ options:
value_type: string
description: Template driver
deprecated: false
min_api_version: "1.37"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -259,6 +259,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -292,6 +300,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -560,8 +577,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -577,8 +593,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false

View File

@ -277,6 +277,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -310,6 +318,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -578,8 +595,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -595,8 +611,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false

View File

@ -126,6 +126,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: pids-limit
value_type: int64
default_value: "0"
description: Tune container pids limit (set -1 for unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: restart
value_type: string
description: Restart policy to apply when a container exits

View File

@ -0,0 +1,30 @@
command: docker context
short: Manage contexts
long: Manage contexts
usage: docker context
pname: docker
plink: docker.yaml
cname:
- docker context create
- docker context export
- docker context import
- docker context inspect
- docker context ls
- docker context rm
- docker context update
- docker context use
clink:
- docker_context_create.yaml
- docker_context_export.yaml
- docker_context_import.yaml
- docker_context_inspect.yaml
- docker_context_ls.yaml
- docker_context_rm.yaml
- docker_context_update.yaml
- docker_context_use.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,116 @@
command: docker context create
short: Create a context
long: |-
Creates a new `context`. This allows you to quickly switch the CLI
configuration to connect to different clusters or single nodes.
To create a context from scratch, provide the docker and, if required,
kubernetes options. The example below creates the context `my-context`
with a docker endpoint of `/var/run/docker.sock` and a kubernetes configuration
sourced from the file `/home/me/my-kube-config`:
```bash
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes config-file=/home/me/my-kube-config
```
Use the `--from=<context-name>` option to create a new context from
an existing context. The example below creates a new context named `my-context`
from the existing context `existing-context`:
```bash
$ docker context create my-context --from existing-context
```
If the `--from` option is not set, the `context` is created from the current context:
```bash
$ docker context create my-context
```
This can be used to create a context out of an existing `DOCKER_HOST` based script:
```bash
$ source my-setup-script.sh
$ docker context create my-context
```
To source only the `docker` endpoint configuration from an existing context
use the `--docker from=<context-name>` option. The example below creates a
new context named `my-context` using the docker endpoint configuration from
the existing context `existing-context` and a kubernetes configuration sourced
from the file `/home/me/my-kube-config`:
```bash
$ docker context create my-context \
--docker from=existing-context \
--kubernetes config-file=/home/me/my-kube-config
```
To source only the `kubernetes` configuration from an existing context use the
`--kubernetes from=<context-name>` option. The example below creates a new
context named `my-context` using the kubernetes configuration from the existing
context `existing-context` and a docker endpoint of `/var/run/docker.sock`:
```bash
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes from=existing-context
```
The Docker and Kubernetes endpoint configurations, as well as the default stack
orchestrator and description, can be modified with `docker context update`.
usage: docker context create [OPTIONS] CONTEXT
pname: docker context
plink: docker_context.yaml
options:
- option: default-stack-orchestrator
value_type: string
description: |
Default orchestrator for stack operations to use with this context (swarm|kubernetes|all)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: description
value_type: string
description: Description of the context
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: docker
value_type: stringToString
default_value: '[]'
description: set the docker endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: from
value_type: string
description: create context from a named context
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: kubernetes
value_type: stringToString
default_value: '[]'
description: set the kubernetes endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
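A further hedged sketch combining the `--description` and `--default-stack-orchestrator` options listed above; the context name and description are placeholders:

```bash
$ docker context create staging \
    --description "Staging cluster" \
    --default-stack-orchestrator swarm \
    --docker host=/var/run/docker.sock
```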

View File

@ -0,0 +1,25 @@
command: docker context export
short: Export a context to a tar or kubeconfig file
long: |-
Exports a context to a file that can then be used with `docker context import` (or with `kubectl` if `--kubeconfig` is set).
The default output filename is `<CONTEXT>.dockercontext`, or `<CONTEXT>.kubeconfig` if `--kubeconfig` is set.
To export to `STDOUT`, you can run `docker context export my-context -`.
usage: docker context export [OPTIONS] CONTEXT [FILE|-]
pname: docker context
plink: docker_context.yaml
options:
- option: kubeconfig
value_type: bool
default_value: "false"
description: Export as a kubeconfig file
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
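For example (the context name `my-context` is a placeholder), the behaviours described above look like:

```bash
# Export to the default file name, my-context.dockercontext
$ docker context export my-context

# Export only the Kubernetes configuration, as my-context.kubeconfig
$ docker context export --kubeconfig my-context

# Write the export to STDOUT instead of a file
$ docker context export my-context -
```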

View File

@ -0,0 +1,13 @@
command: docker context import
short: Import a context from a tar file
long: Imports a context previously exported with `docker context export`. To import
from stdin, use a hyphen (`-`) as the filename.
usage: docker context import CONTEXT FILE|-
pname: docker context
plink: docker_context.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
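A short example of the usage described above; the context name and file name are placeholders:

```bash
# Import from a file produced by `docker context export`
$ docker context import my-context my-context.dockercontext

# Import from STDIN
$ cat my-context.dockercontext | docker context import my-context -
```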

View File

@ -0,0 +1,60 @@
command: docker context inspect
short: Display detailed information on one or more contexts
long: Inspects one or more contexts.
usage: docker context inspect [OPTIONS] [CONTEXT] [CONTEXT...]
pname: docker context
plink: docker_context.yaml
options:
- option: format
shorthand: f
value_type: string
description: Format the output using the given Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
examples: |-
### Inspect a context by name
```bash
$ docker context inspect "local+aks"
[
{
"Name": "local+aks",
"Metadata": {
"Description": "Local Docker Engine + Azure AKS endpoint",
"StackOrchestrator": "kubernetes"
},
"Endpoints": {
"docker": {
"Host": "npipe:////./pipe/docker_engine",
"SkipTLSVerify": false
},
"kubernetes": {
"Host": "https://simon-aks-***.hcp.uksouth.azmk8s.io:443",
"SkipTLSVerify": false,
"DefaultNamespace": "default"
}
},
"TLSMaterial": {
"kubernetes": [
"ca.pem",
"cert.pem",
"key.pem"
]
},
"Storage": {
"MetadataPath": "C:\\Users\\simon\\.docker\\contexts\\meta\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f509141daff05f620fc54ddee",
"TLSPath": "C:\\Users\\simon\\.docker\\contexts\\tls\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f509141daff05f620fc54ddee"
}
}
]
```
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,32 @@
command: docker context ls
aliases: list
short: List contexts
long: List contexts
usage: docker context ls [OPTIONS]
pname: docker context
plink: docker_context.yaml
options:
- option: format
value_type: string
description: Pretty-print contexts using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Only show context names
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
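A brief sketch of the options listed above; the Go template fields `.Name` and `.Description` are assumed placeholders rather than a definitive list:

```bash
# Show only context names
$ docker context ls -q

# Pretty-print selected fields with a Go template
$ docker context ls --format '{{.Name}}: {{.Description}}'
```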

View File

@ -0,0 +1,24 @@
command: docker context rm
aliases: remove
short: Remove one or more contexts
long: Remove one or more contexts
usage: docker context rm CONTEXT [CONTEXT...]
pname: docker context
plink: docker_context.yaml
options:
- option: force
shorthand: f
value_type: bool
default_value: "false"
description: Force the removal of a context in use
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
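For example (context names are placeholders):

```bash
# Remove two contexts
$ docker context rm my-context other-context

# Force the removal of a context that is in use
$ docker context rm -f my-context
```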

View File

@ -0,0 +1,50 @@
command: docker context update
short: Update a context
long: |-
Updates an existing `context`.
See [context create](context_create.md).
usage: docker context update [OPTIONS] CONTEXT
pname: docker context
plink: docker_context.yaml
options:
- option: default-stack-orchestrator
value_type: string
description: |
Default orchestrator for stack operations to use with this context (swarm|kubernetes|all)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: description
value_type: string
description: Description of the context
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: docker
value_type: stringToString
default_value: '[]'
description: set the docker endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: kubernetes
value_type: stringToString
default_value: '[]'
description: set the kubernetes endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
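A hedged sketch using the options listed above; the context name, description, and endpoint are placeholders:

```bash
# Update the description of an existing context
$ docker context update --description "Production cluster" my-context

# Re-point the docker endpoint of the context
$ docker context update --docker host=/var/run/docker.sock my-context
```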

View File

@ -0,0 +1,14 @@
command: docker context use
short: Set the current docker context
long: |-
Set the default context to use when the `DOCKER_HOST` and `DOCKER_CONTEXT` environment variables and the `--host` and `--context` global options are not set.
To disable usage of contexts, you can use the special `default` context.
usage: docker context use CONTEXT
pname: docker context
plink: docker_context.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
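For example (`my-context` is a placeholder name):

```bash
# Make my-context the current context for subsequent docker commands
$ docker context use my-context

# Switch back to the special default context
$ docker context use default
```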

View File

@ -270,6 +270,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -303,6 +311,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -571,8 +588,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -588,8 +604,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false

View File

@ -182,6 +182,17 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: output
shorthand: o
value_type: stringArray
default_value: '[]'
description: 'Output destination (format: type=local,dest=path)'
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: platform
value_type: string
description: Set platform if server is multi-platform capable

View File

@ -33,6 +33,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Suppress verbose output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false

View File

@ -307,6 +307,16 @@ examples: |-
busybox glibc 21c16b6787c6 5 weeks ago 4.19 MB
```
Filtering with multiple `reference` filters returns images that match either reference:
```bash
$ docker images --filter=reference='busy*:uclibc' --filter=reference='busy*:glibc'
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox uclibc e02e811dd08f 5 weeks ago 1.09 MB
busybox glibc 21c16b6787c6 5 weeks ago 4.19 MB
```
### Format the output
The formatting option (`--format`) will pretty print image output

View File

@ -31,203 +31,66 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
examples: |-
### Show output
The example below shows the output for a daemon running on Red Hat Enterprise Linux,
using the `devicemapper` storage driver. As can be seen in the output, additional
information about the `devicemapper` storage driver is shown:
```bash
$ docker info
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-202:2-25583803-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.68 GB
Data Space Total: 107.4 GB
Data Space Available: 7.548 GB
Metadata Space Used: 2.322 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 991.7 MiB
Name: ip-172-30-0-91.ec2.internal
ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Username: gordontheturtle
Registry: https://index.docker.io/v1/
Insecure registries:
myinsecurehost:5000
127.0.0.0/8
```
### Show debugging output
Here is a sample output for a daemon running on Ubuntu, using the overlay2
storage driver and a node that is part of a 2-node swarm:
```bash
$ docker -D info
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.13.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: rdjq45w1op418waxlairloqbm
Is Manager: true
ClusterID: te8kdyw33n36fqiz74bfjeixd
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Root Rotation In Progress: false
Node Address: 172.16.66.128 172.16.66.129
Manager Addresses:
172.16.66.128:2477
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531
runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2
init version: N/A (expected: v0.13.0)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-31-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.937 GiB
Name: ubuntu
ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326
Docker Root Dir: /var/lib/docker
Debug Mode (client): true
Debug Mode (server): true
File Descriptors: 30
Goroutines: 123
System Time: 2016-11-12T17:24:37.955404361-08:00
EventsListeners: 0
Http Proxy: http://test:test@proxy.example.com:8080
Https Proxy: https://test:test@proxy.example.com:8080
No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
storage=ssd
staging=true
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false
```
The global `-D` option causes all `docker` commands to output debug information.
### Format the output
You can also specify the output format:
```bash
$ docker info --format '{{json .}}'
{"ID":"I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S","Containers":14, ...}
```
### Run `docker info` on Windows
Here is a sample output for a daemon running on Windows Server 2016:
```none
E:\docker>docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 17
Server Version: 1.13.0
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: nat null overlay
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 14393 (14393.206.amd64fre.rs1_release.160912-1937)
Operating System: Windows Server 2016 Datacenter
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 3.999 GiB
Name: WIN-V0V70C0LU5P
ID: NYMS:B5VK:UMSL:FVDZ:EWB5:FKVK:LPFL:FJMQ:H6FT:BZJ6:L2TD:XH62
Docker Root Dir: C:\control
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false
```
examples: "### Show output\n\nThe example below shows the output for a daemon running
on Red Hat Enterprise Linux,\nusing the `devicemapper` storage driver. As can be
seen in the output, additional\ninformation about the `devicemapper` storage driver
is shown:\n\n```bash\n$ docker info\nClient:\n Debug Mode: false\n\nServer:\n Containers:
14\n Running: 3\n Paused: 1\n Stopped: 10\n Images: 52\n Server Version: 1.10.3\n
Storage Driver: devicemapper\n Pool Name: docker-202:2-25583803-pool\n Pool Blocksize:
65.54 kB\n Base Device Size: 10.74 GB\n Backing Filesystem: xfs\n Data file:
/dev/loop0\n Metadata file: /dev/loop1\n Data Space Used: 1.68 GB\n Data Space
Total: 107.4 GB\n Data Space Available: 7.548 GB\n Metadata Space Used: 2.322
MB\n Metadata Space Total: 2.147 GB\n Metadata Space Available: 2.145 GB\n Udev
Sync Supported: true\n Deferred Removal Enabled: false\n Deferred Deletion Enabled:
false\n Deferred Deleted Device Count: 0\n Data loop file: /var/lib/docker/devicemapper/devicemapper/data\n
\ Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata\n Library
Version: 1.02.107-RHEL7 (2015-12-01)\n Execution Driver: native-0.2\n Logging Driver:
json-file\n Plugins:\n Volume: local\n Network: null host bridge\n Kernel Version:
3.10.0-327.el7.x86_64\n Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)\n
OSType: linux\n Architecture: x86_64\n CPUs: 1\n Total Memory: 991.7 MiB\n Name:
ip-172-30-0-91.ec2.internal\n ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S\n
Docker Root Dir: /var/lib/docker\n Debug Mode: false\n Username: gordontheturtle\n
Registry: https://index.docker.io/v1/\n Insecure registries:\n myinsecurehost:5000\n
\ 127.0.0.0/8\n```\n \n### Show debugging output\n\nHere is a sample output for
a daemon running on Ubuntu, using the overlay2\nstorage driver and a node that is
part of a 2-node swarm:\n\n```bash\n$ docker -D info\nClient:\n Debug Mode: true\n\nServer:\n
Containers: 14\n Running: 3\n Paused: 1\n Stopped: 10\n Images: 52\n Server Version:
1.13.0\n Storage Driver: overlay2\n Backing Filesystem: extfs\n Supports d_type:
true\n Native Overlay Diff: false\n Logging Driver: json-file\n Cgroup Driver:
cgroupfs\n Plugins:\n Volume: local\n Network: bridge host macvlan null overlay\n
Swarm: active\n NodeID: rdjq45w1op418waxlairloqbm\n Is Manager: true\n ClusterID:
te8kdyw33n36fqiz74bfjeixd\n Managers: 1\n Nodes: 2\n Orchestration:\n Task
History Retention Limit: 5\n Raft:\n Snapshot Interval: 10000\n Number of Old
Snapshots to Retain: 0\n Heartbeat Tick: 1\n Election Tick: 3\n Dispatcher:\n
\ Heartbeat Period: 5 seconds\n CA Configuration:\n Expiry Duration: 3 months\n
\ Root Rotation In Progress: false\n Node Address: 172.16.66.128 172.16.66.129\n
\ Manager Addresses:\n 172.16.66.128:2477\n Runtimes: runc\n Default Runtime:
runc\n Init Binary: docker-init\n containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531\n
runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2\n init version: N/A (expected:
v0.13.0)\n Security Options:\n apparmor\n seccomp\n Profile: default\n Kernel
Version: 4.4.0-31-generic\n Operating System: Ubuntu 16.04.1 LTS\n OSType: linux\n
Architecture: x86_64\n CPUs: 2\n Total Memory: 1.937 GiB\n Name: ubuntu\n ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326\n
Docker Root Dir: /var/lib/docker\n Debug Mode: true\n File Descriptors: 30\n Goroutines:
123\n System Time: 2016-11-12T17:24:37.955404361-08:00\n EventsListeners: 0\n
Http Proxy: http://test:test@proxy.example.com:8080\n Https Proxy: https://test:test@proxy.example.com:8080\n
No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com\n Registry: https://index.docker.io/v1/\n
WARNING: No swap limit support\n Labels:\n storage=ssd\n staging=true\n Experimental:
false\n Insecure Registries:\n 127.0.0.0/8\n Registry Mirrors:\n http://192.168.1.2/\n
\ http://registry-mirror.example.com:5000/\n Live Restore Enabled: false\n```\n\nThe
global `-D` option causes all `docker` commands to output debug information.\n\n###
Format the output\n\nYou can also specify the output format:\n\n```bash\n$ docker
info --format '{{json .}}'\n\n{\"ID\":\"I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S\",\"Containers\":14,
...}\n```\n\n### Run `docker info` on Windows\n\nHere is a sample output for a daemon
running on Windows Server 2016:\n\n```none\nE:\\docker>docker info\nClient:\n Debug
Mode: false\n\nServer:\n Containers: 1\n Running: 0\n Paused: 0\n Stopped: 1\n
Images: 17\n Server Version: 1.13.0\n Storage Driver: windowsfilter\n Windows:\n
Logging Driver: json-file\n Plugins:\n Volume: local\n Network: nat null overlay\n
Swarm: inactive\n Default Isolation: process\n Kernel Version: 10.0 14393 (14393.206.amd64fre.rs1_release.160912-1937)\n
Operating System: Windows Server 2016 Datacenter\n OSType: windows\n Architecture:
x86_64\n CPUs: 8\n Total Memory: 3.999 GiB\n Name: WIN-V0V70C0LU5P\n ID: NYMS:B5VK:UMSL:FVDZ:EWB5:FKVK:LPFL:FJMQ:H6FT:BZJ6:L2TD:XH62\n
Docker Root Dir: C:\\control\n Debug Mode: false\n Registry: https://index.docker.io/v1/\n
Insecure Registries:\n 127.0.0.0/8\n Registry Mirrors:\n http://192.168.1.2/\n
\ http://registry-mirror.example.com:5000/\n Live Restore Enabled: false\n```"
deprecated: false
experimental: false
experimentalcli: false

View File

@ -83,7 +83,7 @@ examples: "### Inspect an image's manifest object\n \n```bash\n$ docker manifest
IP and port.\nThis is similar to tagging an image and pushing it to a foreign registry.\n\nAfter
you have created your local copy of the manifest list, you may optionally\n`annotate`
it. Annotations allowed are the architecture and operating system (overriding the
image's current values),\nos features, and an archictecure variant. \n\nFinally,
image's current values),\nos features, and an architecture variant. \n\nFinally,
you need to `push` your manifest list to the desired registry. Below are descriptions
of these three commands,\nand an example putting them all together.\n\n```bash\n$
docker manifest create 45.55.81.106:5000/coolapp:v1 \\\n 45.55.81.106:5000/coolapp-ppc64le-linux:v1
@ -122,7 +122,7 @@ examples: "### Inspect an image's manifest object\n \n```bash\n$ docker manifest
docker manifest push --insecure myprivateregistry.mycompany.com/repo/image:tag\n```\n\nNote
that the `--insecure` flag is not required to annotate a manifest list, since annotations
are to a locally-stored copy of a manifest list. You may also skip the `--insecure`
flag if you are performaing a `docker manifest inspect` on a locally-stored manifest
flag if you are performing a `docker manifest inspect` on a locally-stored manifest
list. Be sure to keep in mind that locally-stored manifest lists are never used
by the engine on a `docker pull`."
deprecated: false

View File

@ -23,6 +23,7 @@ clink:
- docker_network_prune.yaml
- docker_network_rm.yaml
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -17,6 +17,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: driver-opt
value_type: stringSlice
default_value: '[]'
description: driver options for the network
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: ip
value_type: string
description: IPv4 address (e.g., 172.30.100.104)
@ -119,6 +128,7 @@ examples: |-
You can connect a container to one or more networks. The networks need not be the same type. For example, you can connect a single container to both a bridge network and an overlay network.
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -336,6 +336,7 @@ examples: |-
my-ingress-network
```
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -22,6 +22,7 @@ examples: |-
$ docker network disconnect multi-host-network container1
```
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -27,6 +27,7 @@ options:
kubernetes: false
swarm: false
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -125,6 +125,7 @@ examples: "### List all networks\n\n```bash\n$ sudo docker network ls\nNETWORK I
by a colon for all networks:\n\n```bash\n$ docker network ls --format \"{{.ID}}:
{{.Driver}}\"\nafaaab448eb2: bridge\nd1584f8dc718: host\n391df270dc66: null\n```"
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -31,6 +31,7 @@ examples: |-
list and tries to delete that. The command reports success or failure for each
deletion.
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -1,8 +1,6 @@
command: docker node promote
short: Promote one or more nodes to manager in the swarm
long: |-
Promotes a node to manager. This command targets a docker engine that is a
manager in the swarm.
long: Promotes a node to manager. This command can only be executed on a manager node.
usage: docker node promote NODE [NODE...]
pname: docker node
plink: docker_node.yaml

View File

@ -133,6 +133,7 @@ examples: |-
Placeholder | Description
----------------|------------------------------------------------------------------------------------------
`.ID` | Task ID
`.Name` | Task name
`.Image` | Task image
`.Node` | Node ID

View File

@ -57,6 +57,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Suppress verbose output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
examples: |-
### Pull an image from Docker Hub

View File

@ -1,6 +1,14 @@
command: docker rmi
short: Remove one or more images
long: Remove one or more images
long: |-
Removes (and un-tags) one or more images from the host node. If an image has
multiple tags, using this command with the tag as a parameter only removes the
tag. If the tag is the only one for the image, both the image and the tag are
removed.
This does not remove images from a registry. You cannot remove an image of a
running container unless you use the `-f` option. To see all images on a host,
use the [`docker image ls`](images.md) command.
usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
pname: docker
plink: docker.yaml
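A minimal sketch of the behaviour described above; the repository and tag names are placeholders:

```bash
# Remove a single tag; the image itself is kept while other tags still reference it
$ docker rmi fedora/httpd:version1.0

# Force removal, even if a running container is using the image
$ docker rmi -f fedora/httpd:latest
```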

View File

@ -288,6 +288,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -321,6 +329,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -589,8 +606,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -606,8 +622,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -1306,6 +1321,28 @@ examples: |-
> that may be removed should not be added to untrusted containers with
> `--device`.
For Windows, the format of the string passed to the `--device` option is in
the form of `--device=<IdType>/<Id>`. Beginning with Windows Server 2019
and Windows 10 October 2018 Update, Windows only supports an IdType of
`class` and the Id as a [device interface class
GUID](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/overview-of-device-interface-classes).
Refer to the table defined in the [Windows container
docs](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/hardware-devices-in-containers)
for a list of container-supported device interface class GUIDs.
If this option is specified for a process-isolated Windows container, _all_
devices that implement the requested device interface class GUID are made
available in the container. For example, the command below makes all COM
ports on the host visible in the container.
```powershell
PS C:\> docker run --device=class/86E0D1E0-8089-11D0-9CE4-08003E301F73 mcr.microsoft.com/windows/servercore:ltsc2019
```
> **Note**: the `--device` option is only supported on process-isolated
> Windows containers. This option fails if the container isolation is `hyperv`
> or when running Linux Containers on Windows (LCOW).
### Restart policies (--restart)
Use Docker's `--restart` to specify a container's *restart policy*. A restart
@ -1441,15 +1478,15 @@ examples: |-
On Windows, `--isolation` can take one of these values:
| Value | Description |
|:----------|:-------------------------------------------------------------------------------------------|
| `default` | Use the value specified by the Docker daemon's `--exec-opt` or system default (see below). |
| `process` | Shared-kernel namespace isolation (not supported on Windows client operating systems). |
| `hyperv` | Hyper-V hypervisor partition-based isolation. |
| Value | Description |
|:----------|:------------------------------------------------------------------------------------------------------------------|
| `default` | Use the value specified by the Docker daemon's `--exec-opt` or system default (see below). |
| `process` | Shared-kernel namespace isolation (not supported on Windows client operating systems older than Windows 10 1809). |
| `hyperv` | Hyper-V hypervisor partition-based isolation. |
The default isolation on Windows server operating systems is `process`. The default (and only supported)
The default isolation on Windows server operating systems is `process`. The default
isolation on Windows client operating systems is `hyperv`. An attempt to start a container on a client
operating system with `--isolation process` will fail.
operating system older than Windows 10 1809 with `--isolation process` will fail.
On Windows server, assuming the default configuration, these commands are equivalent
and result in `process` isolation:

View File

@ -38,6 +38,14 @@ examples: |-
$ docker save -o fedora-latest.tar fedora:latest
```
### Save an image to a tar.gz file using gzip
You can use gzip to save the image file and make the backup smaller.
```bash
docker save myimage:latest | gzip > myimage_latest.tar.gz
```
### Cherry-pick particular tags
You can even cherry-pick particular tags of an image repository.

View File

@ -13,7 +13,7 @@ options:
value_type: string
description: Secret driver
deprecated: false
min_api_version: "1.37"
min_api_version: "1.31"
experimental: false
experimentalcli: false
kubernetes: false
@ -31,6 +31,7 @@ options:
value_type: string
description: Template driver
deprecated: false
min_api_version: "1.37"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -358,6 +358,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: replicas-max-per-node
value_type: uint64
default_value: "0"
description: Maximum number of tasks per node (default 0 = unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: reserve-cpu
value_type: decimal
description: Reserve CPUs
@ -497,6 +507,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: sysctl
value_type: list
description: Sysctl options
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: tty
shorthand: t
value_type: bool
@ -636,8 +655,8 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
\ --update-parallelism 2 \\\n redis:3.0.6\n```\n\nWhen you run a [service update](service_update.md),
the scheduler updates a\nmaximum of 2 tasks at a time, with `10s` between updates.
For more information,\nrefer to the [rolling updates\ntutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/).\n\n###
Set environment variables (-e, --env)\n\nThis sets an environmental variable for
all tasks in a service. For example:\n\n```bash\n$ docker service create \\\n --name
Set environment variables (-e, --env)\n\nThis sets an environment variable for all
tasks in a service. For example:\n\n```bash\n$ docker service create \\\n --name
redis_2 \\\n --replicas 5 \\\n --env MYVAR=foo \\\n redis:3.0.6\n```\n\nTo specify
multiple environment variables, specify multiple `--env` flags, each\nwith a separate
key-value pair.\n\n```bash\n$ docker service create \\\n --name redis_2 \\\n --replicas
@ -652,46 +671,49 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
Add bind mounts, volumes or memory filesystems\n\nDocker supports three different
kinds of mounts, which allow containers to read\nfrom or write to files or directories,
either on the host operating system, or\non memory filesystems. These types are
_data volumes_ (often referred to simply\nas volumes), _bind mounts_, and _tmpfs_.\n\nA
**bind mount** makes a file or directory on the host available to the\ncontainer
it is mounted within. A bind mount may be either read-only or\nread-write. For example,
a container might share its host's DNS information by\nmeans of a bind mount of
the host's `/etc/resolv.conf` or a container might\nwrite logs to its host's `/var/log/myContainerLogs`
directory. If you use\nbind mounts and your host and containers have different notions
of permissions,\naccess controls, or other such details, you will run into portability
issues.\n\nA **named volume** is a mechanism for decoupling persistent data needed
by your\ncontainer from the image used to create the container and from the host
machine.\nNamed volumes are created and managed by Docker, and a named volume persists\neven
when no container is currently using it. Data in named volumes can be\nshared between
a container and the host machine, as well as between multiple\ncontainers. Docker
uses a _volume driver_ to create, manage, and mount volumes.\nYou can back up or
restore volumes using Docker commands.\n\nA **tmpfs** mounts a tmpfs inside a container
for volatile data.\n\nConsider a situation where your image starts a lightweight
web server. You could\nuse that image as a base image, copy in your website's HTML
files, and package\nthat into another image. Each time your website changed, you'd
need to update\nthe new image and redeploy all of the containers serving your website.
A better\nsolution is to store the website in a named volume which is attached to
each of\nyour web server containers when they start. To update the website, you
just\nupdate the named volume.\n\nFor more information about named volumes, see\n[Data
Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/).\n\nThe following
table describes options which apply to both bind mounts and named\nvolumes in a
service:\n\n<table>\n <tr>\n <th>Option</th>\n <th>Required</th>\n <th>Description</th>\n
\ </tr>\n <tr>\n <td><b>types</b></td>\n <td></td>\n <td>\n <p>The
type of mount, can be either <tt>volume</tt>, <tt>bind</tt>, or <tt>tmpfs</tt>.
Defaults to <tt>volume</tt> if no type is specified.\n <ul>\n <li><tt>volume</tt>:
mounts a <a href=\"https://docs.docker.com/engine/reference/commandline/volume_create/\">managed
_data volumes_ (often referred to simply\nas volumes), _bind mounts_, _tmpfs_, and
_named pipes_.\n\nA **bind mount** makes a file or directory on the host available
to the\ncontainer it is mounted within. A bind mount may be either read-only or\nread-write.
For example, a container might share its host's DNS information by\nmeans of a bind
mount of the host's `/etc/resolv.conf` or a container might\nwrite logs to its host's
`/var/log/myContainerLogs` directory. If you use\nbind mounts and your host and
containers have different notions of permissions,\naccess controls, or other such
details, you will run into portability issues.\n\nA **named volume** is a mechanism
for decoupling persistent data needed by your\ncontainer from the image used to
create the container and from the host machine.\nNamed volumes are created and managed
by Docker, and a named volume persists\neven when no container is currently using
it. Data in named volumes can be\nshared between a container and the host machine,
as well as between multiple\ncontainers. Docker uses a _volume driver_ to create,
manage, and mount volumes.\nYou can back up or restore volumes using Docker commands.\n\nA
**tmpfs** mounts a tmpfs inside a container for volatile data.\n\nA **npipe** mounts
a named pipe from the host into the container.\n\nConsider a situation where your
image starts a lightweight web server. You could\nuse that image as a base image,
copy in your website's HTML files, and package\nthat into another image. Each time
your website changed, you'd need to update\nthe new image and redeploy all of the
containers serving your website. A better\nsolution is to store the website in a
named volume which is attached to each of\nyour web server containers when they
start. To update the website, you just\nupdate the named volume.\n\nFor more information
about named volumes, see\n[Data Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/).\n\nThe
following table describes options which apply to both bind mounts and named\nvolumes
in a service:\n\n<table>\n <tr>\n <th>Option</th>\n <th>Required</th>\n <th>Description</th>\n
\ </tr>\n <tr>\n <td><b>type</b></td>\n <td></td>\n <td>\n <p>The
type of mount, can be either <tt>volume</tt>, <tt>bind</tt>, <tt>tmpfs</tt>, or
<tt>npipe</tt>. Defaults to <tt>volume</tt> if no type is specified.\n <ul>\n
\ <li><tt>volume</tt>: mounts a <a href=\"https://docs.docker.com/engine/reference/commandline/volume_create/\">managed
volume</a>\n into the container.</li> <li><tt>bind</tt>:\n bind-mounts
a directory or file from the host into the container.</li>\n <li><tt>tmpfs</tt>:
mount a tmpfs in the container</li>\n </ul></p>\n </td>\n </tr>\n <tr>\n
\ <td><b>src</b> or <b>source</b></td>\n <td>for <tt>type=bind</tt> only></td>\n
\ <td>\n <ul>\n <li>\n <tt>type=volume</tt>: <tt>src</tt>
is an optional way to specify the name of the volume (for example, <tt>src=my-volume</tt>).\n
\ If the named volume does not exist, it is automatically created. If no
<tt>src</tt> is specified, the volume is\n assigned a random name which
is guaranteed to be unique on the host, but may not be unique cluster-wide.\n A
randomly-named volume has the same lifecycle as its container and is destroyed when
the <i>container</i>\n is destroyed (which is upon <tt>service update</tt>,
or when scaling or re-balancing the service)\n </li>\n <li>\n <tt>type=bind</tt>:
mount a tmpfs in the container</li>\n <li><tt>npipe</tt>: mounts named pipe
from the host into the container (Windows containers only).</li>\n </ul></p>\n
\ </td>\n </tr>\n <tr>\n <td><b>src</b> or <b>source</b></td>\n <td>for
<tt>type=bind</tt> and <tt>type=npipe</tt></td>\n <td>\n <ul>\n <li>\n
\ <tt>type=volume</tt>: <tt>src</tt> is an optional way to specify the name
of the volume (for example, <tt>src=my-volume</tt>).\n If the named volume
does not exist, it is automatically created. If no <tt>src</tt> is specified, the
volume is\n assigned a random name which is guaranteed to be unique on
the host, but may not be unique cluster-wide.\n A randomly-named volume
has the same lifecycle as its container and is destroyed when the <i>container</i>\n
\ is destroyed (which is upon <tt>service update</tt>, or when scaling or
re-balancing the service)\n </li>\n <li>\n <tt>type=bind</tt>:
<tt>src</tt> is required, and specifies an absolute path to the file or directory
to bind-mount\n (for example, <tt>src=/path/on/host/</tt>). An error is
produced if the file or directory does not exist.\n </li>\n <li>\n
@ -703,10 +725,16 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
mounting the volume or bind mount.</p>\n </td>\n </tr>\n <tr>\n <td><p><b>readonly</b>
or <b>ro</b></p></td>\n <td></td>\n <td>\n <p>The Engine mounts binds
and volumes <tt>read-write</tt> unless <tt>readonly</tt> option\n is given
when mounting the bind or volume.\n <ul>\n <li><tt>true</tt> or <tt>1</tt>
when mounting the bind or volume. Note that setting <tt>readonly</tt> for a\n bind-mount
does not make its submounts <tt>readonly</tt> on the current Linux implementation.
See also <tt>bind-nonrecursive</tt>.\n <ul>\n <li><tt>true</tt> or <tt>1</tt>
or no value: Mounts the bind or volume read-only.</li>\n <li><tt>false</tt>
or <tt>0</tt>: Mounts the bind or volume read-write.</li>\n </ul></p>\n </td>\n
\ </tr>\n <tr>\n <td><b>consistency</b></td>\n <td></td>\n <td>\n <p>The
\ </tr>\n</table>\n\n#### Options for Bind Mounts\n\nThe following options can only
be used for bind mounts (`type=bind`):\n\n\n<table>\n <tr>\n <th>Option</th>\n
\ <th>Description</th>\n </tr>\n <tr>\n <td><b>bind-propagation</b></td>\n
\ <td>\n <p>See the <a href=\"#bind-propagation\">bind propagation section</a>.</p>\n
\ </td>\n </tr>\n <tr>\n <td><b>consistency</b></td>\n <td>\n <p>The
consistency requirements for the mount; one of\n <ul>\n <li><tt>default</tt>:
Equivalent to <tt>consistent</tt>.</li>\n <li><tt>consistent</tt>: Full
consistency. The container runtime and the host maintain an identical view of the
@ -715,33 +743,41 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
visible within a container.</li>\n <li><tt>delegated</tt>: The container
runtime's view of the mount is authoritative. There may be delays before updates
made in a container are visible on the host.</li>\n </ul>\n </p>\n </td>\n
\ </tr>\n</table>\n\n#### Bind Propagation\n\nBind propagation refers to whether
or not mounts created within a given\nbind mount or named volume can be propagated
to replicas of that mount. Consider\na mount point `/mnt`, which is also mounted
on `/tmp`. The propagation settings\ncontrol whether a mount on `/tmp/a` would also
be available on `/mnt/a`. Each\npropagation setting has a recursive counterpart.
In the case of recursion,\nconsider that `/tmp/a` is also mounted as `/foo`. The
propagation settings\ncontrol whether `/mnt/a` and/or `/tmp/a` would exist.\n\nThe
`bind-propagation` option defaults to `rprivate` for both bind mounts and\nvolume
mounts, and is only configurable for bind mounts. In other words, named\nvolumes
do not support bind propagation.\n\n- **`shared`**: Sub-mounts of the original mount
are exposed to replica mounts,\n and sub-mounts of replica mounts
are also propagated to the\n original mount.\n- **`slave`**: similar
to a shared mount, but only in one direction. If the\n original mount
exposes a sub-mount, the replica mount can see it.\n However, if the
replica mount exposes a sub-mount, the original\n mount cannot see
it.\n- **`private`**: The mount is private. Sub-mounts within it are not exposed
to\n replica mounts, and sub-mounts of replica mounts are not\n
\ exposed to the original mount.\n- **`rshared`**: The same as shared,
but the propagation also extends to and from\n mount points nested
within any of the original or replica mount\n points.\n- **`rslave`**:
The same as `slave`, but the propagation also extends to and from\n mount
\ </tr>\n <tr>\n <td><b>bind-nonrecursive</b></td>\n <td>\n By default,
submounts are recursively bind-mounted as well. However, this behavior can be confusing
when a\n    bind mount is configured with the <tt>readonly</tt> option, because submounts
are not mounted as read-only.\n    Set <tt>bind-nonrecursive</tt> to disable recursive
bind-mounting.<br />\n    <br />\n    A value is optional:<br />\n    <br />\n
\ <ul>\n <li><tt>true</tt> or <tt>1</tt>: Disables recursive bind-mount.</li>\n
\ <li><tt>false</tt> or <tt>0</tt>: Default if you do not provide a value.
Enables recursive bind-mount.</li>\n </ul>\n </td>\n </tr>\n</table>\n\n#####
Bind propagation\n\nBind propagation refers to whether or not mounts created within
a given\nbind mount or named volume can be propagated to replicas of that mount.
Consider\na mount point `/mnt`, which is also mounted on `/tmp`. The propagation settings\ncontrol
whether a mount on `/tmp/a` would also be available on `/mnt/a`. Each\npropagation
setting has a recursive counterpart. In the case of recursion,\nconsider that `/tmp/a`
is also mounted as `/foo`. The propagation settings\ncontrol whether `/mnt/a` and/or
`/tmp/a` would exist.\n\nThe `bind-propagation` option defaults to `rprivate` for
both bind mounts and\nvolume mounts, and is only configurable for bind mounts. In
other words, named\nvolumes do not support bind propagation.\n\n- **`shared`**:
Sub-mounts of the original mount are exposed to replica mounts,\n and
sub-mounts of replica mounts are also propagated to the\n original
mount.\n- **`slave`**: similar to a shared mount, but only in one direction. If
the\n original mount exposes a sub-mount, the replica mount can see
it.\n However, if the replica mount exposes a sub-mount, the original\n
\ mount cannot see it.\n- **`private`**: The mount is private. Sub-mounts
within it are not exposed to\n replica mounts, and sub-mounts of
replica mounts are not\n exposed to the original mount.\n- **`rshared`**:
The same as `shared`, but the propagation also extends to and from\n              mount
points nested within any of the original or replica mount\n points.\n-
**`rprivate`**: The default. The same as `private`, meaning that no mount points\n
\ anywhere within the original or replica mount points propagate\n
\ in either direction.\n\nFor more information about bind propagation,
see the\n[Linux kernel documentation for shared subtree](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).\n\n####
Options for Named Volumes\n\nThe following options can only be used for named volumes
**`rslave`**: The same as `slave`, but the propagation also extends to and from\n
\ mount points nested within any of the original or replica mount\n
\ points.\n- **`rprivate`**: The default. The same as `private`,
meaning that no mount points\n anywhere within the original or
replica mount points propagate\n in either direction.\n\nFor more
information about bind propagation, see the\n[Linux kernel documentation for shared
subtree](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).\n\n####
Options for named volumes\n\nThe following options can only be used for named volumes
(`type=volume`):\n\n\n<table>\n <tr>\n <th>Option</th>\n <th>Description</th>\n
\ </tr>\n <tr>\n <td><b>volume-driver</b></td>\n <td>\n <p>Name of the
volume-driver plugin to use for the volume. Defaults to\n <tt>\"local\"</tt>,
@ -756,8 +792,8 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
\ the Engine copies those files and directories into the volume, allowing\n
\ the host to access them. Set <tt>volume-nocopy</tt> to disable copying files\n
\ from the container's filesystem to the volume and mount the empty volume.<br
/>\n\n A value is optional:\n\n <ul>\n <li><tt>true</tt> or <tt>1</tt>:
Default if you do not provide a value. Disables copying.</li>\n <li><tt>false</tt>
/>\n <br />\n A value is optional:<br />\n <br />\n <ul>\n <li><tt>true</tt>
or <tt>1</tt>: Default if you do not provide a value. Disables copying.</li>\n <li><tt>false</tt>
or <tt>0</tt>: Enables copying.</li>\n </ul>\n </td>\n </tr>\n <tr>\n
\ <td><b>volume-opt</b></td>\n <td>\n Options specific to a given volume
driver, which will be passed to the\n driver when creating the volume. Options
@ -868,14 +904,23 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
\ --placement-pref 'spread=node.labels.rack' \\\n redis:3.0.6\n```\n\nWhen updating
a service with `docker service update`, `--placement-pref-add`\nappends a new placement
preference after all existing placement preferences.\n`--placement-pref-rm` removes
an existing placement preference that matches the\nargument.\n\n### Attach a service
to an existing network (--network)\n\nYou can use overlay networks to connect one
or more services within the swarm.\n\nFirst, create an overlay network on a manager
node using the `docker network create`\ncommand:\n\n```bash\n$ docker network create --driver
overlay my-network\n\netjpu59cykrptrgw0z0hk5snf\n```\n\nAfter you create an overlay
network in swarm mode, all manager nodes have\naccess to the network.\n\nWhen you
create a service and pass the `--network` flag to attach the service to\nthe overlay
network:\n\n```bash\n$ docker service create \\\n --replicas 3 \\\n --network
an existing placement preference that matches the\nargument.\n\n### Specify maximum
replicas per node (--replicas-max-per-node)\n\nUse the `--replicas-max-per-node`
flag to set the maximum number of replica tasks that can run on a node.\nThe following
command creates an nginx service with 2 replica tasks, but at most one replica task per
node.\n\nThis can be useful, for example, to balance tasks over a set of datacenters
together with `--placement-pref`,\nwhile letting `--replicas-max-per-node` ensure that
replicas are not migrated to another datacenter during\nmaintenance or a datacenter
failure.\n\nThe example below illustrates this:\n\n```bash\n$ docker
service create \\\n --name nginx \\\n --replicas 2 \\\n --replicas-max-per-node
1 \\\n --placement-pref 'spread=node.labels.datacenter' \\\n nginx\n```\n\n###
Attach a service to an existing network (--network)\n\nYou can use overlay networks
to connect one or more services within the swarm.\n\nFirst, create an overlay network
on a manager node using the `docker network create`\ncommand:\n\n```bash\n$ docker network
create --driver overlay my-network\n\netjpu59cykrptrgw0z0hk5snf\n```\n\nAfter you
create an overlay network in swarm mode, all manager nodes have\naccess to the network.\n\nWhen
you create a service and pass the `--network` flag to attach the service to\nthe
overlay network:\n\n```bash\n$ docker service create \\\n --replicas 3 \\\n --network
my-network \\\n --name my-web \\\n nginx\n\n716thylsndqma81j6kkkb5aus\n```\n\nThe
swarm extends my-network to each node running the service.\n\nContainers on the
same network can access each other using\n[service discovery](https://docs.docker.com/engine/swarm/networking/#use-swarm-mode-service-discovery).\n\nLong
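As a quick reference for the mount options covered above, here is a minimal sketch combining them. The service names, volume name, and bind source path are illustrative; `bind-nonrecursive` is the 19.03 bind-mount option documented above.

```bash
# Named volume: "website" is created automatically if it does not exist yet.
$ docker service create \
    --name my-web \
    --mount type=volume,source=website,target=/usr/share/nginx/html \
    nginx

# Read-only bind mount with explicit propagation and non-recursive binding.
$ docker service create \
    --name my-logs \
    --mount type=bind,source=/var/log,target=/srv/log,readonly,bind-propagation=rslave,bind-nonrecursive=true \
    nginx
```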

View File

@ -492,6 +492,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: replicas-max-per-node
value_type: uint64
default_value: "0"
description: Maximum number of tasks per node (default 0 = unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: reserve-cpu
value_type: decimal
description: Reserve CPUs
@ -647,6 +657,24 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: sysctl-add
value_type: list
description: Add or update a Sysctl option
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: sysctl-rm
value_type: list
description: Remove a Sysctl option
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: tty
shorthand: t
value_type: bool
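For reference, a minimal sketch of the new `docker service update` flags in use. The service name and sysctl key are illustrative; all three flags require API 1.40 (Docker 19.03).

```bash
# Cap tasks at one replica per node and set a namespaced sysctl.
$ docker service update \
    --replicas-max-per-node 1 \
    --sysctl-add net.core.somaxconn=1024 \
    web

# Remove the sysctl again.
$ docker service update --sysctl-rm net.core.somaxconn web
```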

View File

@ -180,8 +180,8 @@ examples: |-
"table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"
> **Note**: On Docker 17.09 and older, the `{{.Container}}` column was used, in
> stead of `{{.ID}}\t{{.Name}}`.
> **Note**: On Docker 17.09 and older, the `{{.Container}}` column was used,
> instead of `{{.ID}}\t{{.Name}}`.
deprecated: false
experimental: false
experimentalcli: false
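For reference, a minimal invocation using the current placeholders; it assumes at least one running container.

```bash
$ docker stats --no-stream \
    --format "table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```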

View File

@ -53,6 +53,17 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: data-path-port
value_type: uint32
default_value: "0"
description: |
Port number to use for data path traffic (1024 - 49151). If no value is set or is set to 0, the default port (4789) is used.
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: default-addr-pool
value_type: ipNetSlice
default_value: '[]'
@ -137,127 +148,77 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
examples: |-
```bash
$ docker swarm init --advertise-addr 192.168.99.121
Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
172.17.0.2:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
`docker swarm init` generates two random tokens, a worker token and a manager token. When you join
a new node to the swarm, the node joins as a worker or manager node based upon the token you pass
to [swarm join](swarm_join.md).
After you create the swarm, you can display or rotate the token using
[swarm join-token](swarm_join_token.md).
### `--autolock`
This flag enables automatic locking of managers with an encryption key. The
private keys and data stored by all managers will be protected by the
encryption key printed in the output, and will not be accessible without it.
Thus, it is very important to store this key in order to activate a manager
after it restarts. The key can be passed to `docker swarm unlock` to reactivate
the manager. Autolock can be disabled by running
`docker swarm update --autolock=false`. After disabling it, the encryption key
is no longer required to start the manager, and it will start up on its own
without user intervention.
### `--cert-expiry`
This flag sets the validity period for node certificates.
### `--dispatcher-heartbeat`
This flag sets the frequency with which nodes are told to use as a
period to report their health.
### `--external-ca`
This flag sets up the swarm to use an external CA to issue node certificates. The value takes
the form `protocol=X,url=Y`. The value for `protocol` specifies what protocol should be used
to send signing requests to the external CA. Currently, the only supported value is `cfssl`.
The URL specifies the endpoint where signing requests should be submitted.
### `--force-new-cluster`
This flag forces an existing node that was part of a quorum that was lost to restart as a single node Manager without losing its data.
### `--listen-addr`
The node listens for inbound swarm manager traffic on this address. The default is to listen on
0.0.0.0:2377. It is also possible to specify a network interface to listen on that interface's
address; for example `--listen-addr eth0:2377`.
Specifying a port is optional. If the value is a bare IP address or interface
name, the default port 2377 will be used.
### `--advertise-addr`
This flag specifies the address that will be advertised to other members of the
swarm for API access and overlay networking. If unspecified, Docker will check
if the system has a single IP address, and use that IP address with the
listening port (see `--listen-addr`). If the system has multiple IP addresses,
`--advertise-addr` must be specified so that the correct address is chosen for
inter-manager communication and overlay networking.
It is also possible to specify a network interface to advertise that interface's address;
for example `--advertise-addr eth0:2377`.
Specifying a port is optional. If the value is a bare IP address or interface
name, the default port 2377 will be used.
### `--data-path-addr`
This flag specifies the address that global scope network drivers will publish towards
other nodes in order to reach the containers running on this node.
Using this parameter it is then possible to separate the container's data traffic from the
management traffic of the cluster.
If unspecified, Docker will use the same IP address or interface that is used for the
advertise address.
### `--default-addr-pool`
This flag specifies default subnet pools for global scope networks.
Format example is `--default-addr-pool 30.30.0.0/16 --default-addr-pool 40.40.0.0/16`
### `--default-addr-pool-mask-length`
This flag specifies default subnet pools mask length for default-addr-pool.
Format example is `--default-addr-pool-mask-length 24`
### `--task-history-limit`
This flag sets up task history retention limit.
### `--max-snapshots`
This flag sets the number of old Raft snapshots to retain in addition to the
current Raft snapshots. By default, no old snapshots are retained. This option
may be used for debugging, or to store old snapshots of the swarm state for
disaster recovery purposes.
### `--snapshot-interval`
This flag specifies how many log entries to allow in between Raft snapshots.
Setting this to a higher number will trigger snapshots less frequently.
Snapshots compact the Raft log and allow for more efficient transfer of the
state to new managers. However, there is a performance cost to taking snapshots
frequently.
### `--availability`
This flag specifies the availability of the node at the time the node joins a master.
Possible availability values are `active`, `pause`, or `drain`.
This flag is useful in certain situations. For example, a cluster may want to have
dedicated manager nodes that are not served as worker nodes. This could be achieved
by passing `--availability=drain` to `docker swarm init`.
examples: "```bash\n$ docker swarm init --advertise-addr 192.168.99.121\nSwarm initialized:
current node (bvz81updecsj6wjz393c09vti) is now a manager.\n\nTo add a worker to
this swarm, run the following command:\n\n docker swarm join \\\n --token
SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx
\\\n 172.17.0.2:2377\n\nTo add a manager to this swarm, run 'docker swarm join-token
manager' and follow the instructions.\n```\n\n`docker swarm init` generates two
random tokens, a worker token and a manager token. When you join\na new node to
the swarm, the node joins as a worker or manager node based upon the token you pass\nto
[swarm join](swarm_join.md).\n\nAfter you create the swarm, you can display or rotate
the token using\n[swarm join-token](swarm_join_token.md).\n\n### `--autolock`\n\nThis
flag enables automatic locking of managers with an encryption key. The\nprivate
keys and data stored by all managers will be protected by the\nencryption key printed
in the output, and will not be accessible without it.\nThus, it is very important
to store this key in order to activate a manager\nafter it restarts. The key can
be passed to `docker swarm unlock` to reactivate\nthe manager. Autolock can be disabled
by running\n`docker swarm update --autolock=false`. After disabling it, the encryption
key\nis no longer required to start the manager, and it will start up on its own\nwithout
user intervention.\n\n### `--cert-expiry`\n\nThis flag sets the validity period
for node certificates.\n\n### `--dispatcher-heartbeat`\n\nThis flag sets the frequency
at which nodes are told to report their health.\n\n### `--external-ca`\n\nThis
flag sets up the swarm to use an external CA to issue node certificates. The value
takes\nthe form `protocol=X,url=Y`. The value for `protocol` specifies what protocol
should be used\nto send signing requests to the external CA. Currently, the only
supported value is `cfssl`.\nThe URL specifies the endpoint where signing requests
should be submitted.\n\n### `--force-new-cluster`\n\nThis flag forces an existing
node that was part of a quorum that was lost to restart as a single-node manager
without losing its data.\n\n### `--listen-addr`\n\nThe node listens for inbound
swarm manager traffic on this address. The default is to listen on\n0.0.0.0:2377.
It is also possible to specify a network interface to listen on that interface's\naddress;
for example `--listen-addr eth0:2377`.\n\nSpecifying a port is optional. If the
value is a bare IP address or interface\nname, the default port 2377 will be used.\n\n###
`--advertise-addr`\n\nThis flag specifies the address that will be advertised to
other members of the\nswarm for API access and overlay networking. If unspecified,
Docker will check\nif the system has a single IP address, and use that IP address
with the\nlistening port (see `--listen-addr`). If the system has multiple IP addresses,\n`--advertise-addr`
must be specified so that the correct address is chosen for\ninter-manager communication
and overlay networking.\n\nIt is also possible to specify a network interface to
advertise that interface's address;\nfor example `--advertise-addr eth0:2377`.\n\nSpecifying
a port is optional. If the value is a bare IP address or interface\nname, the default
port 2377 will be used.\n\n### `--data-path-addr`\n\nThis flag specifies the address
that global scope network drivers will publish towards\nother nodes in order to
reach the containers running on this node.\nUsing this parameter it is then possible
to separate the container's data traffic from the\nmanagement traffic of the cluster.\nIf
unspecified, Docker will use the same IP address or interface that is used for the\nadvertise
address.\n\n### `--data-path-port`\n\nThis flag allows you to configure the UDP
port number to use for data path\ntraffic. The provided port number must be within
the 1024 - 49151 range. If\nthis flag is not set or is set to 0, the default port
number 4789 is used.\nThe data path port can only be configured when initializing
the swarm, and\napplies to all nodes that join the swarm.\nThe following example
initializes a new swarm, and configures the data path\nport to UDP port 7777:\n\n```bash\ndocker
swarm init --data-path-port=7777\n```\n\nAfter the swarm is initialized, use the `docker
info` command to verify that\nthe port is configured:\n\n```bash\ndocker info\n\t...\n\tClusterID:
9vs5ygs0gguyyec4iqf2314c0\n\tManagers: 1\n\tNodes: 1\n\tData Path Port: 7777\n\t...\n```\n\n###
`--default-addr-pool`\n\nThis flag specifies default subnet pools for global scope
networks.\nFor example, `--default-addr-pool 30.30.0.0/16 --default-addr-pool
40.40.0.0/16`.\n\n### `--default-addr-pool-mask-length`\n\nThis flag specifies the default
subnet pool mask length for `default-addr-pool`.\nFor example, `--default-addr-pool-mask-length
24`.\n\n### `--task-history-limit`\n\nThis flag sets the task history retention limit.\n\n###
`--max-snapshots`\n\nThis flag sets the number of old Raft snapshots to retain in
addition to the\ncurrent Raft snapshots. By default, no old snapshots are retained.
This option\nmay be used for debugging, or to store old snapshots of the swarm state
for\ndisaster recovery purposes.\n\n### `--snapshot-interval`\n\nThis flag specifies
how many log entries to allow in between Raft snapshots.\nSetting this to a higher
number will trigger snapshots less frequently.\nSnapshots compact the Raft log and
allow for more efficient transfer of the\nstate to new managers. However, there
is a performance cost to taking snapshots\nfrequently.\n\n### `--availability`\n\nThis
flag specifies the availability of the node at the time the node joins the swarm.\nPossible
availability values are `active`, `pause`, or `drain`.\n\nThis flag is useful in
certain situations. For example, a cluster may want to have\ndedicated manager nodes
that do not also serve as worker nodes. This could be achieved\nby passing `--availability=drain`
to `docker swarm init`."
deprecated: false
min_api_version: "1.24"
experimental: false
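To tie the flags above together, a minimal sketch of initializing a swarm with several of them. Addresses, pools, and the port number are illustrative; `--data-path-port` requires Docker 19.03.

```bash
$ docker swarm init \
    --advertise-addr 192.168.99.121 \
    --listen-addr eth0:2377 \
    --data-path-port 7777 \
    --default-addr-pool 10.20.0.0/16 \
    --default-addr-pool-mask-length 26 \
    --autolock
```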

View File

@ -35,7 +35,6 @@ examples: |-
Images 5 2 16.43 MB 11.63 MB (70%)
Containers 2 0 212 B 212 B (100%)
Local Volumes 2 1 36 B 0 B (0%)
Build Cache 0 0 0B 0B
```
A more detailed view can be requested using the `-v, --verbose` flag:
@ -63,14 +62,6 @@ examples: |-
NAME LINKS SIZE
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e 2 36 B
my-named-vol 0 0 B
Build cache usage: 0B
CACHE ID CACHE TYPE SIZE CREATED LAST USED USAGE SHARED
0d8ab63ff30d regular 4.34MB 7 days ago 0 true
189876ac9226 regular 11.5MB 7 days ago 0 true
```
* `SHARED SIZE` is the amount of space that an image shares with another one (i.e. their common data)

View File

@ -140,6 +140,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: pids-limit
value_type: int64
default_value: "0"
description: Tune container pids limit (set -1 for unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: restart
value_type: string
description: Restart policy to apply when a container exits
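For reference, a minimal sketch of the new flag in use. The container name is illustrative; `--pids-limit` on `docker update` requires API 1.40 (Docker 19.03).

```bash
# Cap the container at 200 processes, then lift the limit again.
$ docker update --pids-limit 200 my-app
$ docker update --pids-limit=-1 my-app
```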

View File

@ -544,6 +544,8 @@ reference:
section:
- path: /engine/reference/commandline/builder/
title: docker builder
- path: /engine/reference/commandline/builder_build/
title: docker builder build
- path: /engine/reference/commandline/builder_prune/
title: docker builder prune
- sectiontitle: docker checkpoint *
@ -624,6 +626,26 @@ reference:
title: docker container update
- path: /engine/reference/commandline/container_wait/
title: docker container wait
- sectiontitle: docker context *
section:
- path: /engine/reference/commandline/context/
title: docker context
- path: /engine/reference/commandline/context_create/
title: docker context create
- path: /engine/reference/commandline/context_export/
title: docker context export
- path: /engine/reference/commandline/context_import/
title: docker context import
- path: /engine/reference/commandline/context_inspect/
title: docker context inspect
- path: /engine/reference/commandline/context_ls/
title: docker context ls
- path: /engine/reference/commandline/context_rm/
title: docker context rm
- path: /engine/reference/commandline/context_update/
title: docker context update
- path: /engine/reference/commandline/context_use/
title: docker context use
- path: /engine/reference/commandline/cp/
title: docker cp
- path: /engine/reference/commandline/create/

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_builder_build
title: docker builder build
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context
title: docker context
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_create
title: docker context create
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_export
title: docker context export
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_import
title: docker context import
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_inspect
title: docker context inspect
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_ls
title: docker context ls
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_rm
title: docker context rm
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_update
title: docker context update
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}

View File

@ -0,0 +1,15 @@
---
datafolder: engine-cli
datafile: docker_context_use
title: docker context use
---
<!--
Sorry, but the contents of this page are automatically generated from
Docker's source code. If you want to suggest a change to the text that appears
here, you'll need to find the string by searching this repo:
https://github.com/docker/cli
-->
{% include cli.md datafolder=page.datafolder datafile=page.datafile %}
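For quick orientation, a minimal sketch of the workflow these new `docker context` pages document. The context name, description, and SSH address are illustrative.

```bash
# Create a context that points the CLI at a remote engine over SSH.
$ docker context create my-remote \
    --description "engine on a remote host" \
    --docker "host=ssh://user@remote-host"

$ docker context ls
$ docker context use my-remote

# Switch back to the local engine.
$ docker context use default
```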