mirror of https://github.com/docker/docs.git
Merge pull request #18540 from dvdksn/engine-config-refresh
/config tier 1 refresh
This commit is contained in:
commit b0adb473dc
@@ -1,9 +1,9 @@
 ---
-description: How to keep containers running when the daemon isn't available.
+description: Learn how to keep containers running when the daemon isn't available
 keywords: docker, upgrade, daemon, dockerd, live-restore, daemonless container
-title: Keep containers alive during daemon downtime
+title: Live restore
 aliases:
 - /engine/admin/live-restore/
 ---
 
 By default, when the Docker daemon terminates, it shuts down running containers.
@@ -14,7 +14,7 @@ or upgrades.
 
 > **Note**
 >
-> Live restore is not supported on Windows containers, but it does work for
+> Live restore isn't supported for Windows containers, but it does work for
 > Linux containers running on Docker Desktop for Windows.
 
 ## Enable live restore
@@ -22,7 +22,7 @@ or upgrades.
 There are two ways to enable the live restore setting to keep containers alive
 when the daemon becomes unavailable. **Only do one of the following**.
 
-* Add the configuration to the daemon configuration file. On Linux, this
+- Add the configuration to the daemon configuration file. On Linux, this
   defaults to `/etc/docker/daemon.json`. On Docker Desktop for Mac or Docker
   Desktop for Windows, select the Docker icon from the task bar, then click
   **Settings** -> **Docker Engine**.
@@ -40,12 +40,11 @@ when the daemon becomes unavailable. **Only do one of the following**.
   `systemd`, then use the command `systemctl reload docker`. Otherwise, send a
   `SIGHUP` signal to the `dockerd` process.
 
-* If you prefer, you can start the `dockerd` process manually with the
-  `--live-restore` flag. This approach is not recommended because it does not
+- If you prefer, you can start the `dockerd` process manually with the
+  `--live-restore` flag. This approach isn't recommended because it doesn't
   set up the environment that `systemd` or another process manager would use
   when starting the Docker process. This can cause unexpected behavior.
 
-
 ## Live restore during upgrades
 
 Live restore allows you to keep containers running across Docker daemon updates,
@@ -54,12 +53,12 @@ major (`YY.MM`) daemon upgrades.
 
 If you skip releases during an upgrade, the daemon may not restore its
 connection to the containers. If the daemon can't restore the connection, it
-cannot manage the running containers and you must stop them manually.
+can't manage the running containers and you must stop them manually.
 
 ## Live restore upon restart
 
 The live restore option only works to restore containers if the daemon options,
-such as bridge IP addresses and graph driver, did not change. If any of these
+such as bridge IP addresses and graph driver, didn't change. If any of these
 daemon-level configuration options have changed, the live restore may not work
 and you may need to manually stop the containers.
@@ -71,12 +70,12 @@ data. The default buffer size is 64K. If the buffers fill, you must restart
 the Docker daemon to flush them.
 
 On Linux, you can modify the kernel's buffer size by changing
-`/proc/sys/fs/pipe-max-size`. You cannot modify the buffer size on Docker Desktop for
+`/proc/sys/fs/pipe-max-size`. You can't modify the buffer size on Docker Desktop for
 Mac or Docker Desktop for Windows.
 
-## Live restore and swarm mode
+## Live restore and Swarm mode
 
-The live restore option only pertains to standalone containers, and not to swarm
-services. Swarm services are managed by swarm managers. If swarm managers are
-not available, swarm services continue to run on worker nodes but cannot be
-managed until enough swarm managers are available to maintain a quorum.
+The live restore option only pertains to standalone containers, and not to Swarm
+services. Swarm services are managed by Swarm managers. If Swarm managers are
+not available, Swarm services continue to run on worker nodes but can't be
+managed until enough Swarm managers are available to maintain a quorum.
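For reference, a minimal sketch of the setting this page's hunks discuss, enabling live restore in the daemon configuration file (`/etc/docker/daemon.json` on Linux, as named above):

```json
{
  "live-restore": true
}
```

After editing the file, `systemctl reload docker` (or a `SIGHUP` to the `dockerd` process) applies the change without restarting running containers.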
@@ -1,19 +1,19 @@
 ---
-description: How to write to and view a container's logs
+description: Learn how to write to, view, and configure a container's logs
 keywords: docker, logging
 title: View container logs
 aliases:
 - /engine/admin/logging/
 - /engine/admin/logging/view_container_logs/
 ---
 
 The `docker logs` command shows information logged by a running container. The
 `docker service logs` command shows information logged by all containers
-participating in a service. The information that is logged and the format of the
+participating in a service. The information that's logged and the format of the
 log depends almost entirely on the container's endpoint command.
 
 By default, `docker logs` or `docker service logs` shows the command's output
-just as it would appear if you ran the command interactively in a terminal. UNIX
+just as it would appear if you ran the command interactively in a terminal. Unix
 and Linux commands typically open three I/O streams when they run, called
 `STDIN`, `STDOUT`, and `STDERR`. `STDIN` is the command's input stream, which
 may include input from the keyboard or input from another command. `STDOUT` is
@@ -50,4 +50,4 @@ its errors to `/proc/self/fd/2` (which is `STDERR`). See the
 ## Next steps
 
 - Configure [logging drivers](configure.md).
 - Write a [Dockerfile](../../../engine/reference/builder.md).
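As a usage sketch for the commands this page's diff describes (the container and service names are hypothetical):

```console
$ docker logs --follow --tail 10 mycontainer
$ docker service logs myservice
```

`--follow` streams new output as the container produces it; `--tail 10` limits the initial output to the last ten lines.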
@@ -1,10 +1,10 @@
 ---
-description: Describes how to use the Amazon CloudWatch Logs logging driver.
+description: Learn how to use the Amazon CloudWatch Logs logging driver with Docker Engine
 keywords: AWS, Amazon, CloudWatch, logging, driver
 title: Amazon CloudWatch Logs logging driver
 aliases:
 - /engine/reference/logging/awslogs/
 - /engine/admin/logging/awslogs/
 ---
 
 The `awslogs` logging driver sends container logs to
@@ -51,7 +51,7 @@ myservice:
     options:
       awslogs-region: us-east-1
 ```
 
 ## Amazon CloudWatch Logs options
 
 You can add logging options to the `daemon.json` to set Docker-wide defaults,
@@ -108,7 +108,7 @@ specified, the container ID is used as the log stream.
 
 ### awslogs-create-group
 
-Log driver returns an error by default if the log group does not exist. However, you can set the
+Log driver returns an error by default if the log group doesn't exist. However, you can set the
 `awslogs-create-group` to `true` to automatically create the log group as needed.
 The `awslogs-create-group` option defaults to `false`.
 
@@ -128,7 +128,7 @@ $ docker run \
 
 ### awslogs-datetime-format
 
-The `awslogs-datetime-format` option defines a multiline start pattern in [Python
+The `awslogs-datetime-format` option defines a multi-line start pattern in [Python
 `strftime` format](https://strftime.org). A log message consists of a line that
 matches the pattern and any following lines that don't match the pattern. Thus
 the matched line is the delimiter between log messages.
@@ -141,10 +141,9 @@ single entry.
 This option always takes precedence if both `awslogs-datetime-format` and
 `awslogs-multiline-pattern` are configured.
 
-
 > **Note**
 >
-> Multiline logging performs regular expression parsing and matching of all log
+> Multi-line logging performs regular expression parsing and matching of all log
 > messages, which may have a negative impact on logging performance.
 
 Consider the following log stream, where new log messages start with a
@@ -152,7 +151,7 @@ timestamp:
 
 ```console
 [May 01, 2017 19:00:01] A message was logged
-[May 01, 2017 19:00:04] Another multiline message was logged
+[May 01, 2017 19:00:04] Another multi-line message was logged
 Some random message
 with some random words
 [May 01, 2017 19:01:32] Another message was logged
@@ -178,7 +177,7 @@ This parses the logs into the following CloudWatch log events:
 [May 01, 2017 19:00:01] A message was logged
 
 # Second event
-[May 01, 2017 19:00:04] Another multiline message was logged
+[May 01, 2017 19:00:04] Another multi-line message was logged
 Some random message
 with some random words
 
@@ -189,7 +188,7 @@ with some random words
 The following `strftime` codes are supported:
 
 | Code | Meaning | Example |
-|:-----|:-----------------------------------------------------------------|:---------|
+| :--- | :--------------------------------------------------------------- | :------- |
 | `%a` | Weekday abbreviated name. | Mon |
 | `%A` | Weekday full name. | Monday |
 | `%w` | Weekday as a decimal number where 0 is Sunday and 6 is Saturday. | 0 |
@@ -212,15 +211,16 @@ The following `strftime` codes are supported:
 
 ### awslogs-multiline-pattern
 
-The `awslogs-multiline-pattern` option defines a multiline start pattern using a
+The `awslogs-multiline-pattern` option defines a multi-line start pattern using a
 regular expression. A log message consists of a line that matches the pattern
 and any following lines that don't match the pattern. Thus the matched line is
 the delimiter between log messages.
 
 This option is ignored if `awslogs-datetime-format` is also configured.
 
-> **Note**:
-> Multiline logging performs regular expression parsing and matching of all log
+> **Note**
+>
+> Multi-line logging performs regular expression parsing and matching of all log
 > messages. This may have a negative impact on logging performance.
 
 Consider the following log stream, where each log message should start with the
@@ -228,7 +228,7 @@ pattern `INFO`:
 
 ```console
 INFO A message was logged
-INFO Another multiline message was logged
+INFO Another multi-line message was logged
 Some random message
 INFO Another message was logged
 ```
@@ -251,7 +251,7 @@ This parses the logs into the following CloudWatch log events:
 INFO A message was logged
 
 # Second event
-INFO Another multiline message was logged
+INFO Another multi-line message was logged
 Some random message
 
 # Third event
@@ -273,22 +273,21 @@ If not specified, the container ID is used as the log stream.
 
 > **Note**
 >
-> The CloudWatch log API does not support `:` in the log name. This can cause
+> The CloudWatch log API doesn't support `:` in the log name. This can cause
 > some issues when using the `{{ .ImageName }}` as a tag,
-> since a docker image has a format of `IMAGE:TAG`, such as `alpine:latest`.
+> since a Docker image has a format of `IMAGE:TAG`, such as `alpine:latest`.
 > Template markup can be used to get the proper format. To get the image name
 > and the first 12 characters of the container ID, you can use:
 >
->
 > ```bash
 > --log-opt tag='{{ with split .ImageName ":" }}{{join . "_"}}{{end}}-{{.ID}}'
 > ```
 >
 > the output is something like: `alpine_latest-bf0072049c76`
 
 ### awslogs-force-flush-interval-seconds
 
-The `awslogs` driver periodically flushs logs to CloudWatch.
+The `awslogs` driver periodically flushes logs to CloudWatch.
 
 The `awslogs-force-flush-interval-seconds` option changes log flush interval seconds.
@@ -319,13 +318,10 @@ and `logs:PutLogEvents` actions, as shown in the following example.
   "Version": "2012-10-17",
   "Statement": [
     {
-      "Action": [
-        "logs:CreateLogStream",
-        "logs:PutLogEvents"
-      ],
+      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
       "Effect": "Allow",
       "Resource": "*"
     }
   ]
 }
 ```
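A hedged example pulling together the options discussed in this page's hunks (the region, log group name, and image are placeholders):

```console
$ docker run \
    --log-driver=awslogs \
    --log-opt awslogs-region=us-east-1 \
    --log-opt awslogs-group=myLogGroup \
    --log-opt awslogs-create-group=true \
    alpine echo "hello from awslogs"
```

With `awslogs-create-group=true`, the daemon creates `myLogGroup` if it doesn't exist, provided the credentials also allow the `logs:CreateLogGroup` action.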
@@ -1,41 +1,42 @@
 ---
-description: Configure logging driver.
+description: Learn how to configure logging driver for the Docker daemon
 keywords: docker, logging, driver
 title: Configure logging drivers
 aliases:
 - /engine/reference/logging/overview/
 - /engine/reference/logging/
 - /engine/admin/reference/logging/
 - /engine/admin/logging/overview/
 ---
 
 Docker includes multiple logging mechanisms to help you
 [get information from running containers and services](index.md).
 These mechanisms are called logging drivers. Each Docker daemon has a default
 logging driver, which each container uses unless you configure it to use a
-different logging driver, or "log-driver" for short.
+different logging driver, or log driver for short.
 
 As a default, Docker uses the [`json-file` logging driver](json-file.md), which
 caches container logs as JSON internally. In addition to using the logging drivers
 included with Docker, you can also implement and use [logging driver plugins](plugins.md).
 
-> **Tip: use the "local" logging driver to prevent disk-exhaustion**
+> **Tip: use the `local` logging driver to prevent disk-exhaustion**
 >
 > By default, no log-rotation is performed. As a result, log-files stored by the
 > default [`json-file` logging driver](json-file.md) logging driver can cause
 > a significant amount of disk space to be used for containers that generate much
 > output, which can lead to disk space exhaustion.
 >
 > Docker keeps the json-file logging driver (without log-rotation) as a default
 > to remain backward compatibility with older versions of Docker, and for situations
 > where Docker is used as runtime for Kubernetes.
 >
-> For other situations, the "local" logging driver is recommended as it performs
+> For other situations, the `local` logging driver is recommended as it performs
 > log-rotation by default, and uses a more efficient file format. Refer to the
 > [Configure the default logging driver](#configure-the-default-logging-driver)
-> section below to learn how to configure the "local" logging driver as a default,
+> section below to learn how to configure the `local` logging driver as a default,
 > and the [local file logging driver](local.md) page for more details about the
-> "local" logging driver.
+> `local` logging driver.
+{ .tip }
 
 ## Configure the default logging driver
 
@@ -71,7 +72,7 @@ example sets four configurable options on the `json-file` logging driver:
 ```
 
 Restart Docker for the changes to take effect for newly created containers.
-Existing containers do not use the new logging configuration.
+Existing containers don't use the new logging configuration automatically.
 
 > **Note**
 >
@@ -79,21 +80,19 @@ Existing containers do not use the new logging configuration.
 > be provided as strings. Boolean and numeric values (such as the value for
 > `max-file` in the example above) must therefore be enclosed in quotes (`"`).
 
-If you do not specify a logging driver, the default is `json-file`.
+If you don't specify a logging driver, the default is `json-file`.
 To find the current default logging driver for the Docker daemon, run
 `docker info` and search for `Logging Driver`. You can use the following
 command on Linux, macOS, or PowerShell on Windows:
 
-
 ```console
 $ docker info --format '{{.LoggingDriver}}'
 
 json-file
 ```
 
-
 > **Note**
 >
 > Changing the default logging driver or logging driver options in the daemon
 > configuration only affects containers that are created after the configuration
 > is changed. Existing containers retain the logging driver options that were
@@ -121,21 +120,19 @@ To find the current logging driver for a running container, if the daemon
 is using the `json-file` logging driver, run the following `docker inspect`
 command, substituting the container name or ID for `<CONTAINER>`:
 
-
 ```console
 $ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>
 
 json-file
 ```
 
-
 ## Configure the delivery mode of log messages from container to log driver
 
 Docker provides two modes for delivering messages from the container to the log
 driver:
 
-* (default) direct, blocking delivery from container to driver
-* non-blocking delivery that stores log messages in an intermediate per-container buffer for consumption by driver
+- (default) direct, blocking delivery from container to driver
+- non-blocking delivery that stores log messages in an intermediate per-container buffer for consumption by driver
 
 The `non-blocking` message delivery mode prevents applications from blocking due
 to logging back pressure. Applications are likely to fail in unexpected ways when
@@ -165,8 +162,8 @@ $ docker run -it --log-opt mode=non-blocking --log-opt max-buffer-size=4m alpine
 
 Some logging drivers add the value of a container's `--env|-e` or `--label`
 flags to the container's logs. This example starts a container using the Docker
-daemon's default logging driver (let's assume `json-file`) but sets the
-environment variable `os=ubuntu`.
+daemon's default logging driver (in the following example, `json-file`) but
+sets the environment variable `os=ubuntu`.
 
 ```console
 $ docker run -dit --label production_status=testing -e os=ubuntu alpine sh
@ -186,33 +183,33 @@ documentation for its configurable options, if applicable. If you are using
|
||||||
[logging driver plugins](plugins.md), you may
|
[logging driver plugins](plugins.md), you may
|
||||||
see more options.
|
see more options.
|
||||||
|
|
||||||
| Driver | Description |
|
| Driver | Description |
|
||||||
|:------------------------------|:--------------------------------------------------------------------------------------------------------------|
|
| :---------------------------- | :---------------------------------------------------------------------------------------------------------- |
|
||||||
| `none` | No logs are available for the container and `docker logs` does not return any output. |
|
| `none` | No logs are available for the container and `docker logs` does not return any output. |
|
||||||
| [`local`](local.md) | Logs are stored in a custom format designed for minimal overhead. |
|
| [`local`](local.md) | Logs are stored in a custom format designed for minimal overhead. |
|
||||||
| [`json-file`](json-file.md) | The logs are formatted as JSON. The default logging driver for Docker. |
|
| [`json-file`](json-file.md) | The logs are formatted as JSON. The default logging driver for Docker. |
|
||||||
| [`syslog`](syslog.md) | Writes logging messages to the `syslog` facility. The `syslog` daemon must be running on the host machine. |
|
| [`syslog`](syslog.md) | Writes logging messages to the `syslog` facility. The `syslog` daemon must be running on the host machine. |
|
||||||
| [`journald`](journald.md) | Writes log messages to `journald`. The `journald` daemon must be running on the host machine. |
|
| [`journald`](journald.md) | Writes log messages to `journald`. The `journald` daemon must be running on the host machine. |
|
||||||
| [`gelf`](gelf.md) | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. |
|
| [`gelf`](gelf.md) | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. |
|
||||||
| [`fluentd`](fluentd.md) | Writes log messages to `fluentd` (forward input). The `fluentd` daemon must be running on the host machine. |
|
| [`fluentd`](fluentd.md) | Writes log messages to `fluentd` (forward input). The `fluentd` daemon must be running on the host machine. |
|
||||||
| [`awslogs`](awslogs.md) | Writes log messages to Amazon CloudWatch Logs. |
|
| [`awslogs`](awslogs.md) | Writes log messages to Amazon CloudWatch Logs. |
|
||||||
| [`splunk`](splunk.md) | Writes log messages to `splunk` using the HTTP Event Collector. |
|
| [`splunk`](splunk.md) | Writes log messages to `splunk` using the HTTP Event Collector. |
|
||||||
| [`etwlogs`](etwlogs.md) | Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms. |
|
| [`etwlogs`](etwlogs.md) | Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms. |
|
||||||
| [`gcplogs`](gcplogs.md) | Writes log messages to Google Cloud Platform (GCP) Logging. |
|
| [`gcplogs`](gcplogs.md) | Writes log messages to Google Cloud Platform (GCP) Logging. |
|
||||||
| [`logentries`](logentries.md) | Writes log messages to Rapid7 Logentries. |
|
| [`logentries`](logentries.md) | Writes log messages to Rapid7 Logentries. |
|
||||||
|
|
||||||
> **Note**
>
> When using Docker Engine 19.03 or older, the [`docker logs` command](../../../engine/reference/commandline/logs.md)
> is only functional for the `local`, `json-file` and `journald` logging drivers.
> Docker 20.10 and up introduces "dual logging", which uses a local buffer that
> allows you to use the `docker logs` command for any logging driver. Refer to
> [reading logs when using remote logging drivers](dual-logging.md) for details.

## Limitations of logging drivers

- Reading log information requires decompressing rotated log files, which causes
  a temporary increase in disk usage (until the log entries from the rotated
  files are read) and an increased CPU usage while decompressing.
- The capacity of the host storage where the Docker data directory resides
  determines the maximum size of the log file information.

---
description:
  Learn how to read container logs locally when using a third party logging
  solution.
keywords:
  docker, logging, driver, dual logging, dual-logging, cache, ring-buffer,
  configuration
title: Use docker logs with remote logging drivers
---

## Overview

Prior to Docker Engine 20.10, the [`docker logs` command](../../../engine/reference/commandline/logs.md)
could only be used with containers using the `local`, `json-file`, or
`journald` logging drivers. Many third-party logging drivers had no support
for locally reading logs using `docker logs`.

This created multiple problems when attempting to gather log data in an
automated and standard way. Log information could only be accessed and viewed
through the third-party solution in the format specified by that
third-party tool.

Starting with Docker Engine 20.10, you can use `docker logs` to read container
logs regardless of the configured logging driver or plugin. This capability,
referred to as "dual logging", allows you to use `docker logs` to read container
logs locally in a consistent format, regardless of the log driver used, because
the engine is configured to log information to the `local` logging driver. Refer
to [Configure the default logging driver](configure.md) for additional information.

Dual logging uses the [`local`](local.md) logging driver to act as cache for
reading the latest logs of your containers. By default, the cache has log-file
rotation and compression enabled.

Refer to the [configuration options](#configuration-options) section to customize
these defaults, or to the [disable dual-logging](#disable-the-dual-logging-cache)
section to disable this feature.

## Prerequisites

No configuration changes are needed to use dual logging. Docker Engine 20.10 and
up automatically enables dual logging if the configured logging driver doesn't
support reading logs.

The following examples show the result of running a `docker logs` command with
and without dual logging enabled.

### Without dual logging capability

The following example uses a daemon configured with the `splunk` logging driver,
which doesn't support reading logs locally:

- Step 1: Configure the Docker daemon

  ```console
  $ cat /etc/docker/daemon.json
  {
    "log-driver": "splunk",
    "log-opts": {
      "cache-disabled": "true",
      ... (options for "splunk" logging driver)
    }
  }
  ```

- Step 2: Start the container

  ```console
  $ docker run -d --name testlog busybox top
  ```

- Step 3: Read the container logs

  ```console
  $ docker logs 7d6ac83a89a0
  Error response from daemon: configured logging driver does not support reading
  ```

### With dual logging capability

With the dual logging cache enabled, the `docker logs` command can be used to
read logs, even if the logging driver doesn't support reading logs. The following
example shows a daemon configuration that uses the `splunk` remote logging driver
as a default, with dual logging caching enabled:

- Step 1: Configure the Docker daemon

  ```console
  $ cat /etc/docker/daemon.json
  {
    "log-driver": "splunk",
    "log-opts": {
      ... (options for "splunk" logging driver)
    }
  }
  ```

- Step 2: Start the container

  ```console
  $ docker run -d --name testlog busybox top
  ```

- Step 3: Read the container logs

  ```console
  $ docker logs 7d6ac83a89a0
  2019-02-04T19:48:15.423Z [INFO] core: marked as sealed
  2019-02-04T19:48:15.423Z [INFO] core: pre-seal teardown starting
  2019-02-04T19:48:15.423Z [INFO] core: stopping cluster listeners
  2019-02-04T19:48:15.423Z [INFO] core: shutting down forwarding rpc listeners
  2019-02-04T19:48:15.423Z [INFO] core: forwarding rpc listeners stopped
  2019-02-04T19:48:15.599Z [INFO] core: rpc listeners successfully shut down
  2019-02-04T19:48:15.599Z [INFO] core: cluster listeners successfully shut down
  ```

> **Note**
>
> For logging drivers that support reading logs locally, such as the `local`,
> `json-file`, and `journald` drivers, there is no difference in behavior before
> the dual logging capability became available. For these drivers, logs can be
> read using `docker logs` in both scenarios.

### Configuration options

The "dual logging" cache accepts the same configuration options as the
[`local`](local.md) logging driver.

By default, the cache has log-file rotation enabled, and is limited to a maximum
of 5 files of 20MB each (before compression) per container. Use the configuration
options described below to customize these defaults.

| Option           | Default   | Description                                                                                                                                       |
| :--------------- | :-------- | :------------------------------------------------------------------------------------------------------------------------------------------------ |
| `cache-disabled` | `"false"` | Disable local caching. Boolean value passed as a string (`true`, `1`, `0`, or `false`).                                                            |
| `cache-max-size` | `"20m"`   | The maximum size of the cache before it is rotated. A positive integer plus a modifier representing the unit of measure (`k`, `m`, or `g`).        |
| `cache-max-file` | `"5"`     | The maximum number of cache files that can be present. If rotating the logs creates excess files, the oldest file is removed. A positive integer.  |
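
As a sketch of how these options combine, a daemon configuration that halves the default cache size and keeps only two rotated files per container might look like the following (the `splunk` driver is only a stand-in for whatever remote driver you use):

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "cache-max-size": "10m",
    "cache-max-file": "2"
  }
}
```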

### Disable the dual logging cache

Caching can be disabled for individual containers or by default for new containers,
when using the [daemon configuration file](/engine/reference/commandline/dockerd/#daemon-configuration-file).

The following example uses the daemon configuration file to use the [`splunk`](splunk.md)
logging driver as a default, with caching disabled:

```console
$ cat /etc/docker/daemon.json
{
  "log-driver": "splunk",
  "log-opts": {
    "cache-disabled": "true",
    ... (options for "splunk" logging driver)
  }
}
```

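To disable the cache for an individual container rather than daemon-wide, the same option can be passed at run time (the image name below is a placeholder):

```console
$ docker run --log-driver=splunk --log-opt cache-disabled=true your/application
```
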
> **Note**
>
> For logging drivers that support reading logs, such as the `local`, `json-file`
> and `journald` drivers, dual logging isn't used, and disabling the option has
> no effect.

## Limitations

- If a container using a logging driver or plugin that sends logs remotely
  has a network issue, no `write` to the local cache occurs.
- If a write to `logdriver` fails for any reason (file system full, write
  permissions removed), the cache write fails and is logged in the daemon log.
  The log entry to the cache isn't retried.
- Some logs might be lost from the cache in the default configuration because a
  ring buffer is used to prevent blocking the stdio of the container in case of
  slow file writes. An admin must repair these while the daemon is shut down.

---
description: Learn how to use the Event Tracing for Windows (ETW) logging driver with Docker Engine
keywords: ETW, docker, logging, driver
title: ETW logging driver
aliases:
  - /engine/admin/logging/etwlogs/
---

The Event Tracing for Windows (ETW) logging driver forwards container logs as ETW events.
ETW is the common framework for tracing applications in Windows. Each ETW event
contains a message with both the log and its context information. A client can
then create an ETW listener to listen to these events.

The ETW provider that this logging driver registers with Windows has the
GUID identifier `{a3693192-9ed6-46d2-a981-f8226c8363bd}`. A client creates an
ETW listener and registers to listen to events from the logging driver's provider.
The order in which the provider and listener are created doesn't matter.
A client can create their ETW listener and start listening for events from the
provider before the provider has been registered with the system.
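
As an illustration, one way to capture these events is the `logman` utility, which is included in most installations of Windows; the session name and output file below are arbitrary:

```console
logman start -ets DockerContainerLogs -p {a3693192-9ed6-46d2-a981-f8226c8363bd} 0 0 -o trace.etl
```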

Each ETW event contains a structured message string in this format:

```text
container_name: %s, image_name: %s, container_id: %s, image_id: %s, source: [stdout | stderr], log: %s
```

Details on each item in the message can be found below:

| Field            | Description                                    |
| ---------------- | ---------------------------------------------- |
| `container_name` | The container name at the time it was started. |
| `image_name`     | The name of the container's image.             |
| `container_id`   | The full 64-character container ID.            |
| `image_id`       | The full ID of the container's image.          |
| `source`         | `stdout` or `stderr`.                          |
| `log`            | The container log message.                     |

Here is an example event message (output formatted for readability):
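
The following is a sketch with illustrative values only; real events carry the actual names and full IDs of your containers:

```text
container_name: /sample-container,
image_name: busybox,
container_id: <64-character container ID>,
image_id: <full image ID>,
source: stdout,
log: Hello world
```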

> **Note**
>
> This ETW provider only emits a message string, and not a specially structured
> ETW event. Therefore, you don't have to register a manifest file with the
> system to read and interpret its ETW events.

---
description: Learn how to use the fluentd logging driver
keywords: Fluentd, docker, logging, driver
title: Fluentd logging driver
aliases:
  - /engine/reference/logging/fluentd/
  - /reference/logging/fluentd/
  - /engine/admin/logging/fluentd/
---

The `fluentd` logging driver sends container logs to the
[Fluentd](https://www.fluentd.org/) collector as structured log data.

In addition to the log message itself, the `fluentd` log
driver sends the following metadata in the structured log message:

| Field            | Description                                                                                                                                         |
| :--------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------- |
| `container_id`   | The full 64-character container ID.                                                                                                                  |
| `container_name` | The container name at the time it was started. If you use `docker rename` to rename a container, the new name isn't reflected in the log messages.   |
| `source`         | `stdout` or `stderr`                                                                                                                                 |
| `log`            | The container log                                                                                                                                    |

The `docker logs` command isn't available for this logging driver.

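Put together, a record arriving at Fluentd carries those fields roughly as follows (the tag and values here are illustrative):

```text
docker.3fd8acf23bb2: {"container_id":"3fd8acf23bb2...","container_name":"/my-app","source":"stdout","log":"Hello world"}
```
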
## Usage

Some options are supported by specifying `--log-opt` as many times as needed:

- `fluentd-address`: specify a socket address to connect to the Fluentd daemon, for example `fluentdhost:24224` or `unix:///path/to/fluentd.sock`.
- `tag`: specify a tag for Fluentd messages. Supports some Go template markup, for example `{{.ID}}`, `{{.FullID}}`, or `{{.Name}}`, as in `docker.{{.ID}}`.

To use the `fluentd` driver as the default logging driver, set the `log-driver`
and `log-opt` keys to appropriate values in the `daemon.json` file, which is
located in `/etc/docker/` on Linux hosts or
`C:\ProgramData\docker\config\daemon.json` on Windows Server. For more about
configuring Docker using `daemon.json`, see
[daemon.json](../../../engine/reference/commandline/dockerd.md#daemon-configuration-file).

The following example sets the log driver to `fluentd` and sets the
`fluentd-address` option.

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "fluentdhost:24224"
  }
}
```

Restart Docker for the changes to take effect.
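
On Linux hosts managed by systemd, that restart is typically done as follows (assuming the standard service name):

```console
$ sudo systemctl restart docker
```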

To set the logging driver for a specific container, pass the
`--log-driver` option to `docker run`:

```console
$ docker run --log-driver=fluentd ...
```

Before using this logging driver, launch a Fluentd daemon. The logging driver
connects to this daemon through `localhost:24224` by default. Use the
`fluentd-address` option to connect to a different address.

```console
$ docker run --log-driver=fluentd --log-opt fluentd-address=fluentdhost:24224
```

If the container can't connect to the Fluentd daemon, the container stops
immediately unless the `fluentd-async` option is used.
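
Combining the options above, a container that sends its logs to a remote Fluentd host with a name-based tag might be started like this (the tag template and image name are illustrative):

```console
$ docker run --log-driver=fluentd --log-opt fluentd-address=fluentdhost:24224 --log-opt tag=docker.{{.Name}} your/application
```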

### tag

By default, Docker uses the first 12 characters of the container ID to tag log messages.
Refer to the [log tag option documentation](log_tags.md) for customizing
the log tag format.

### labels, labels-regex, env, and env-regex

The `labels` and `env` options each take a comma-separated list of keys. If
there is a collision between `label` and `env` keys, the value of the `env`
takes precedence.

### fluentd-retry-wait

How long to wait between retries. Defaults to 1 second.

### fluentd-max-retries

The maximum number of retries. Defaults to `4294967295` (2\*\*32 - 1).

### fluentd-sub-second-precision

### Test container loggers

1. Write a configuration file (`test.conf`) to dump input logs:

   ```none
   <source>
     @type forward
   </source>

   <match *>
     @type stdout
   </match>
   ```

2. Launch a Fluentd container with this configuration file:

   ```console
   $ docker run -it -p 24224:24224 -v /path/to/conf/test.conf:/fluentd/etc/test.conf -e FLUENTD_CONF=test.conf fluent/fluentd:latest
   ```

3. Start one or more containers with the `fluentd` logging driver:

   ```console
   $ docker run --log-driver=fluentd your/application
   ```

---
description: Learn how to use the Google Cloud Logging driver with Docker Engine
keywords: gcplogs, google, docker, logging, driver
title: Google Cloud Logging driver
aliases:
  - /engine/admin/logging/gcplogs/
---

The Google Cloud Logging driver sends container logs to
[Google Cloud Logging](https://cloud.google.com/logging/).
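
Following the same pattern as the other drivers, making `gcplogs` the default for all containers is a matter of setting `log-driver` in `daemon.json` (a minimal sketch):

```json
{
  "log-driver": "gcplogs"
}
```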

Restart Docker for the changes to take effect.

You can set the logging driver for a specific container by using the
`--log-driver` option to `docker run`:

```console
$ docker run --log-driver=gcplogs ...
```

The `docker logs` command isn't available for this logging driver.

If Docker detects that it's running in a Google Cloud project, it discovers
configuration from the
[instance metadata service](https://cloud.google.com/compute/docs/metadata).
Otherwise, the user must specify
which project to log to using the `--gcp-project` log option, and Docker
attempts to obtain credentials from the
[Google Application Default Credential](https://developers.google.com/identity/protocols/application-default-credentials).
The `--gcp-project` flag takes precedence over information discovered from the
metadata server, so a Docker daemon running in a Google Cloud project can be
overridden to log to a different project using `--gcp-project`.

Docker fetches the values for zone, instance name, and instance ID from the
Google Cloud metadata server. Those values can be provided via options if the
metadata server isn't available. They don't override the values from the
metadata server.

## gcplogs options

You can use the `--log-opt NAME=VALUE` flag to specify these additional Google
Cloud Logging driver options:

| Option         | Required | Description                                                                                                                                      |
| :------------- | :------- | :----------------------------------------------------------------------------------------------------------------------------------------------- |
| `gcp-project`  | optional | Which Google Cloud project to log to. Defaults to discovering this value from the Google Cloud metadata server.                                   |
| `gcp-log-cmd`  | optional | Whether to log the command that the container was started with. Defaults to false.                                                                |
| `labels`       | optional | Comma-separated list of label keys, which should be included in the message if these labels are specified for the container.                      |
| `labels-regex` | optional | Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md).  |
@@ -77,8 +78,8 @@ If there is collision between `label` and `env` keys, the value of the `env`
 takes precedence. Both options add additional fields to the attributes of a
 logging message.
 
-Below is an example of the logging options required to log to the default
-logging destination which is discovered by querying the GCE metadata server.
+The following is an example of the logging options required to log to the default
+logging destination which is discovered by querying the Google Cloud metadata server.
 
 ```console
 $ docker run \
@@ -95,8 +96,14 @@ This configuration also directs the driver to include in the payload the label
 `location`, the environment variable `ENV`, and the command used to start the
 container.
 
-An example of the logging options for running outside of GCE (the daemon must be
-configured with GOOGLE_APPLICATION_CREDENTIALS):
+The following example shows logging options for running outside of Google
+Cloud. The `GOOGLE_APPLICATION_CREDENTIALS` environment variable must be set
+for the daemon, for example via systemd:
+
+```ini
+[Service]
+Environment="GOOGLE_APPLICATION_CREDENTIALS=uQWVCPkMTI34bpssr1HI"
+```
 
 ```console
 $ docker run \

@@ -105,4 +112,4 @@ $ docker run \
     --log-opt gcp-meta-zone=west1 \
     --log-opt gcp-meta-name=`hostname` \
     your/application
 ```
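The `labels`/`env` precedence rule stated earlier (on a key collision, the `env` value wins) can be sketched in a few lines. This is a hypothetical helper for illustration, not code from the driver:

```python
def merge_log_attributes(label_values, env_values):
    """Merge label and env attributes the way the logging drivers
    document it: when a key appears in both, the env value wins."""
    attrs = dict(label_values)
    attrs.update(env_values)  # env takes precedence on collision
    return attrs
```

For example, merging `{"location": "west"}` from labels with `{"location": "east"}` from env yields `{"location": "east"}`.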
@@ -1,23 +1,23 @@
 ---
-description: Describes how to use the Graylog Extended Format logging driver.
+description: Learn how to use the Graylog Extended Format logging driver with Docker Engine
 keywords: graylog, gelf, logging, driver
 title: Graylog Extended Format logging driver
 aliases:
 - /engine/reference/logging/gelf/
 - /engine/admin/logging/gelf/
 ---
 
-The `gelf` logging driver is a convenient format that is understood by a number of tools such as
+The `gelf` logging driver is a convenient format that's understood by a number of tools such as
 [Graylog](https://www.graylog.org/), [Logstash](https://www.elastic.co/products/logstash), and
 [Fluentd](https://www.fluentd.org). Many tools use this format.
 
 In GELF, every log message is a dict with the following fields:
 
-- version
-- host (who sent the message in the first place)
-- timestamp
-- short and long version of the message
-- any custom fields you configure yourself
+- Version
+- Host (who sent the message in the first place)
+- Timestamp
+- Short and long version of the message
+- Any custom fields you configure yourself
 
 ## Usage
 
@@ -60,22 +60,22 @@ $ docker run \
 
 The `gelf` logging driver supports the following options:
 
 | Option                     | Required | Description | Example value |
-|:---------------------------|:---------|:------------|:--------------|
+| :------------------------- | :------- | :---------- | :------------ |
 | `gelf-address`             | required | The address of the GELF server. `tcp` and `udp` are the only supported URI specifiers and you must specify the port. | `--log-opt gelf-address=udp://192.168.0.42:12201` |
-| `gelf-compression-type`    | optional | `UDP Only` The type of compression the GELF driver uses to compress each log message. Allowed values are `gzip`, `zlib` and `none`. The default is `gzip`. **Note that enabled compression leads to excessive CPU usage, so it is highly recommended to set this to `none`**. | `--log-opt gelf-compression-type=gzip` |
+| `gelf-compression-type`    | optional | `UDP Only` The type of compression the GELF driver uses to compress each log message. Allowed values are `gzip`, `zlib` and `none`. The default is `gzip`. Note that enabled compression leads to excessive CPU usage, so it's highly recommended to set this to `none`. | `--log-opt gelf-compression-type=gzip` |
 | `gelf-compression-level`   | optional | `UDP Only` The level of compression when `gzip` or `zlib` is the `gelf-compression-type`. An integer in the range of `-1` to `9` (BestCompression). Default value is 1 (BestSpeed). Higher levels provide more compression at lower speed. Either `-1` or `0` disables compression. | `--log-opt gelf-compression-level=2` |
 | `gelf-tcp-max-reconnect`   | optional | `TCP Only` The maximum number of reconnection attempts when the connection drops. A positive integer. Default value is 3. | `--log-opt gelf-tcp-max-reconnect=3` |
 | `gelf-tcp-reconnect-delay` | optional | `TCP Only` The number of seconds to wait between reconnection attempts. A positive integer. Default value is 1. | `--log-opt gelf-tcp-reconnect-delay=1` |
-| `tag`                      | optional | A string that is appended to the `APP-NAME` in the `gelf` message. By default, Docker uses the first 12 characters of the container ID to tag log messages. Refer to the [log tag option documentation](log_tags.md) for customizing the log tag format. | `--log-opt tag=mailer` |
+| `tag`                      | optional | A string that's appended to the `APP-NAME` in the `gelf` message. By default, Docker uses the first 12 characters of the container ID to tag log messages. Refer to the [log tag option documentation](log_tags.md) for customizing the log tag format. | `--log-opt tag=mailer` |
 | `labels`                   | optional | Applies when starting the Docker daemon. A comma-separated list of logging-related labels this daemon accepts. Adds additional key on the `extra` fields, prefixed by an underscore (`_`). Used for advanced [log tag options](log_tags.md). | `--log-opt labels=production_status,geo` |
-| `labels-regex`             | optional | Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md). | `--log-opt labels-regex=^(production_status|geo)` |
+| `labels-regex`             | optional | Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md). | `--log-opt labels-regex=^(production_status\|geo)` |
 | `env`                      | optional | Applies when starting the Docker daemon. A comma-separated list of logging-related environment variables this daemon accepts. Adds additional key on the `extra` fields, prefixed by an underscore (`_`). Used for advanced [log tag options](log_tags.md). | `--log-opt env=os,customer` |
-| `env-regex`                | optional | Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). | `--log-opt env-regex=^(os|customer)` |
+| `env-regex`                | optional | Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). | `--log-opt env-regex=^(os\|customer)` |
 
 > **Note**
 >
-> The `gelf` driver does not support TLS for TCP connections. Messages sent to TLS-protected inputs can silently fail.
+> The `gelf` driver doesn't support TLS for TCP connections. Messages sent to TLS-protected inputs can silently fail.
 
 ### Examples
 
@@ -87,4 +87,4 @@ $ docker run -dit \
     --log-driver=gelf \
     --log-opt gelf-address=udp://192.168.0.42:12201 \
     alpine sh
 ```
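The field list above maps onto a small dictionary per message. The sketch below builds a minimal GELF-style payload; the exact key names (`short_message`, the underscore prefix for custom fields) follow the GELF payload convention and are illustrative, not Docker's internal implementation:

```python
import json
import time

def make_gelf_message(host, short_message, **custom):
    # Core GELF fields: version, host, timestamp, and the message itself.
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
    }
    # Custom fields are conventionally prefixed with an underscore,
    # mirroring how the driver adds `labels`/`env` keys to `extra`.
    for key, value in custom.items():
        msg["_" + key] = value
    return json.dumps(msg)
```

Calling `make_gelf_message("web-1", "hello", container="mailer")` produces a JSON object with `host`, `short_message`, and a `_container` custom field.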

@@ -1,10 +1,10 @@
 ---
-description: Describes how to use the Journald logging driver.
-keywords: Journald, docker, logging, driver
+description: Learn how to use the Journald logging driver with Docker Engine
+keywords: journald, systemd-journald, docker, logging, driver
 title: Journald logging driver
 aliases:
 - /engine/reference/logging/journald/
 - /engine/admin/logging/journald/
 ---
 
 The `journald` logging driver sends container logs to the

@@ -15,13 +15,13 @@ Log entries can be retrieved using the `journalctl` command, through use of the
 In addition to the text of the log message itself, the `journald` log driver
 stores the following metadata in the journal with each message:
 
 | Field                                 | Description |
-|:--------------------------------------|:------------|
+| :------------------------------------ | :---------- |
 | `CONTAINER_ID`                        | The container ID truncated to 12 characters. |
 | `CONTAINER_ID_FULL`                   | The full 64-character container ID. |
-| `CONTAINER_NAME`                      | The container name at the time it was started. If you use `docker rename` to rename a container, the new name is not reflected in the journal entries. |
+| `CONTAINER_NAME`                      | The container name at the time it was started. If you use `docker rename` to rename a container, the new name isn't reflected in the journal entries. |
 | `CONTAINER_TAG`, `SYSLOG_IDENTIFIER`  | The container tag ([log tag option documentation](log_tags.md)). |
 | `CONTAINER_PARTIAL_MESSAGE`           | A field that flags log integrity. Improves logging of long log lines. |
 
 ## Usage
 
@@ -55,18 +55,18 @@ Use the `--log-opt NAME=VALUE` flag to specify additional `journald` logging
 driver options.
 
 | Option         | Required | Description |
-|:---------------|:---------|:------------|
+| :------------- | :------- | :---------- |
 | `tag`          | optional | Specify template to set `CONTAINER_TAG` and `SYSLOG_IDENTIFIER` value in journald logs. Refer to [log tag option documentation](log_tags.md) to customize the log tag format. |
 | `labels`       | optional | Comma-separated list of keys of labels, which should be included in message, if these labels are specified for the container. |
 | `labels-regex` | optional | Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md). |
 | `env`          | optional | Comma-separated list of keys of environment variables, which should be included in message, if these variables are specified for the container. |
-| `env-regex`    | optional | Similar to and compatible with env. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). |
+| `env-regex`    | optional | Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). |
 
-If a collision occurs between label and env keys, the value of the env takes
-precedence. Each option adds additional fields to the attributes of a logging
-message.
+If a collision occurs between `label` and `env` options, the value of the `env`
+takes precedence. Each option adds additional fields to the attributes of a
+logging message.
 
-Below is an example of the logging options required to log to journald.
+The following is an example of the logging options required to log to journald.
 
 ```console
 $ docker run \
@@ -79,7 +79,7 @@ $ docker run \
 ```
 
 This configuration also directs the driver to include in the payload the label
-location, and the environment variable TEST. If the `--env "TEST=false"`
+location, and the environment variable `TEST`. If the `--env "TEST=false"`
 or `--label location=west` arguments were omitted, the corresponding key would
 not be set in the journald log.
 
@@ -87,7 +87,7 @@ not be set in the journald log.
 
 The value logged in the `CONTAINER_NAME` field is the name of the container that
 was set at startup. If you use `docker rename` to rename a container, the new
-name **is not reflected** in the journal entries. Journal entries continue
+name isn't reflected in the journal entries. Journal entries continue
 to use the original name.
 
 ## Retrieve log messages with `journalctl`
 
@@ -138,4 +138,4 @@ reader.add_match('CONTAINER_NAME=web')
 
 for msg in reader:
     print('{CONTAINER_ID_FULL}: {MESSAGE}'.format(**msg))
 ```
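The metadata table above amounts to a mapping from container attributes to journal fields. The following sketch illustrates the shapes involved (field names match the documented metadata; the function itself is hypothetical, not the driver's code):

```python
def journald_metadata(full_id, name_at_start, tag):
    # Field names match the journald driver's documented metadata.
    return {
        "CONTAINER_ID": full_id[:12],        # truncated to 12 characters
        "CONTAINER_ID_FULL": full_id,        # full 64-character ID
        "CONTAINER_NAME": name_at_start,     # later renames aren't reflected
        "CONTAINER_TAG": tag,
        "SYSLOG_IDENTIFIER": tag,            # same value as the tag
    }
```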

@@ -1,10 +1,10 @@
 ---
-description: Describes how to use the json-file logging driver.
+description: Learn how to use the json-file logging driver with Docker Engine
 keywords: json-file, docker, logging, driver
 title: JSON File logging driver
 aliases:
 - /engine/reference/logging/json-file/
 - /engine/admin/logging/json-file/
 ---
 
 By default, Docker captures the standard output (and standard error) of all your containers,
@@ -13,15 +13,20 @@ origin (`stdout` or `stderr`) and its timestamp. Each log file contains informat
 only one container.
 
 ```json
-{"log":"Log line is here\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"}
+{
+  "log": "Log line is here\n",
+  "stream": "stdout",
+  "time": "2019-01-01T11:11:11.111111111Z"
+}
 ```
 
-> *Warning*
+> **Warning**
 >
 > The `json-file` logging driver uses file-based storage. These files are designed
 > to be exclusively accessed by the Docker daemon. Interacting with these files
 > with external tools may interfere with Docker's logging system and result in
 > unexpected behavior, and should be avoided.
+{ .warning }
 
 ## Usage
 
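Each line in a `json-file` log is a standalone JSON object, so a single record parses with any JSON library. A sketch of the record shape (reading the files directly is discouraged, per the warning above; this only illustrates the format):

```python
import json

def parse_json_file_line(line):
    # One record per line: message text, stream of origin, and timestamp.
    record = json.loads(line)
    return record["log"], record["stream"], record["time"]

# The example record from the documentation above, as a raw log line.
line = '{"log":"Log line is here\\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"}'
```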
@@ -40,7 +45,7 @@ and `max-file` options to enable automatic log-rotation.
   "log-driver": "json-file",
   "log-opts": {
     "max-size": "10m",
     "max-file": "3"
   }
 }
 ```

@@ -52,7 +57,7 @@
 > `max-file` in the example above) must therefore be enclosed in quotes (`"`).
 
 Restart Docker for the changes to take effect for newly created containers.
-Existing containers do not use the new logging configuration.
+Existing containers don't use the new logging configuration automatically.
 
 You can set the logging driver for a specific container by using the
 `--log-driver` flag to `docker container create` or `docker run`:
@@ -67,16 +72,15 @@ $ docker run \
 
 The `json-file` logging driver supports the following logging options:
 
 | Option         | Description | Example value |
-|:---------------|:------------|:--------------|
+| :------------- | :---------- | :------------ |
 | `max-size`     | The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (`k`, `m`, or `g`). Defaults to -1 (unlimited). | `--log-opt max-size=10m` |
 | `max-file`     | The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. **Only effective when `max-size` is also set.** A positive integer. Defaults to 1. | `--log-opt max-file=3` |
 | `labels`       | Applies when starting the Docker daemon. A comma-separated list of logging-related labels this daemon accepts. Used for advanced [log tag options](log_tags.md). | `--log-opt labels=production_status,geo` |
-| `labels-regex` | Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md). | `--log-opt labels-regex=^(production_status|geo)` |
+| `labels-regex` | Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md). | `--log-opt labels-regex=^(production_status\|geo)` |
 | `env`          | Applies when starting the Docker daemon. A comma-separated list of logging-related environment variables this daemon accepts. Used for advanced [log tag options](log_tags.md). | `--log-opt env=os,customer` |
-| `env-regex`    | Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). | `--log-opt env-regex=^(os|customer)` |
+| `env-regex`    | Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). | `--log-opt env-regex=^(os\|customer)` |
 | `compress`     | Toggles compression for rotated logs. Default is `disabled`. | `--log-opt compress=true` |
 
 ### Examples
 
@@ -85,4 +89,4 @@ files no larger than 10 megabytes each.
 
 ```console
 $ docker run -it --log-opt max-size=10m --log-opt max-file=3 alpine ash
 ```
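With rotation enabled, worst-case disk usage per container is bounded by `max-size` times `max-file` (ignoring compression). A quick sanity check of that arithmetic, using the option values from the examples above:

```python
def max_log_disk_usage(max_size_bytes, max_file):
    # Worst case: max-file rotated files, each grown to max-size bytes.
    return max_size_bytes * max_file

# With the example options above (max-size=10m, max-file=3):
cap = max_log_disk_usage(10 * 1024 ** 2, 3)
```

The same formula explains the `local` driver's 100MB default cap: a 20m `max-size` times a `max-file` count of 5.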

@@ -1,26 +1,27 @@
 ---
-description: Describes how to use the local logging driver.
-keywords: local, docker, logging, driver
-title: Local File logging driver
+description: Learn how to use the local logging driver with Docker Engine
+keywords: local, docker, logging, driver, file
+title: Local file logging driver
 aliases:
 - /engine/reference/logging/local/
 - /engine/admin/logging/local/
 ---
 
 The `local` logging driver captures output from container's stdout/stderr and
-writes them to an internal storage that is optimized for performance and disk
+writes them to an internal storage that's optimized for performance and disk
 use.
 
 By default, the `local` driver preserves 100MB of log messages per container and
 uses automatic compression to reduce the size on disk. The 100MB default value is based on a 20M default size
 for each file and a default count of 5 for the number of such files (to account for log rotation).
 
-> *Note*
+> **Warning**
 >
-> The `local` logging driver uses file-based storage. The file-format and
-> storage mechanism are designed to be exclusively accessed by the Docker
-> daemon, and should not be used by external tools as the implementation may
-> change in future releases.
+> The `local` logging driver uses file-based storage. These files are designed
+> to be exclusively accessed by the Docker daemon. Interacting with these files
+> with external tools may interfere with Docker's logging system and result in
+> unexpected behavior, and should be avoided.
+{ .warning }
 
 ## Usage
 
@@ -43,7 +44,8 @@ option.
   }
 }
 ```
 
-Restart Docker for the changes to take effect for newly created containers. Existing containers do not use the new logging configuration.
+Restart Docker for the changes to take effect for newly created containers.
+Existing containers don't use the new logging configuration automatically.
 
 You can set the logging driver for a specific container by using the
 `--log-driver` flag to `docker container create` or `docker run`:
@@ -60,11 +62,11 @@ Note that `local` is a bash reserved keyword, so you may need to quote it in scr
 
 The `local` logging driver supports the following logging options:
 
 | Option     | Description | Example value |
-|:-----------|:------------|:--------------|
-| `max-size` | The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (`k`, `m`, or `g`). Defaults to 20m. | `--log-opt max-size=10m` |
+| :--------- | :---------- | :------------ |
+| `max-size` | The maximum size of the log before it's rolled. A positive integer plus a modifier representing the unit of measure (`k`, `m`, or `g`). Defaults to 20m. | `--log-opt max-size=10m` |
 | `max-file` | The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. A positive integer. Defaults to 5. | `--log-opt max-file=3` |
 | `compress` | Toggle compression of rotated log files. Enabled by default. | `--log-opt compress=false` |
 
 ### Examples
 
@@ -73,4 +75,4 @@ files no larger than 10 megabytes each.
 
 ```console
 $ docker run -it --log-driver local --log-opt max-size=10m --log-opt max-file=3 alpine ash
 ```
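The `compress` option gzips rotated files to reduce disk use. Conceptually (illustrative only, not the driver's actual code path), repetitive log text compresses well, which is why compression is worth enabling by default:

```python
import gzip

# Typical log output is highly repetitive, so gzip shrinks it substantially.
data = b"2024-01-01T00:00:00Z GET /healthz 200\n" * 1000
compressed = gzip.compress(data)
```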
|
||||||
|
|
@@ -1,10 +1,10 @@

---
description: Learn about how to format log output with Go templates
keywords: docker, logging, driver, syslog, Fluentd, gelf, journald
title: Customize log driver output
aliases:
- /engine/reference/logging/log_tags/
- /engine/admin/logging/log_tags/
---

The `tag` log option specifies how to format a tag that identifies the

@@ -17,18 +17,15 @@ $ docker run --log-driver=fluentd --log-opt fluentd-address=myhost.local:24224 -

Docker supports some special template markup you can use when specifying a tag's value:
| Markup             | Description                                          |
| ------------------ | ---------------------------------------------------- |
| `{{.ID}}`          | The first 12 characters of the container ID.         |
| `{{.FullID}}`      | The full container ID.                               |
| `{{.Name}}`        | The container name.                                  |
| `{{.ImageID}}`     | The first 12 characters of the container's image ID. |
| `{{.ImageFullID}}` | The container's full image ID.                       |
| `{{.ImageName}}`   | The name of the image used by the container.         |
| `{{.DaemonName}}`  | The name of the Docker program (`docker`).           |
For example, specifying a `--log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}"` value yields `syslog` log lines like:

@@ -37,6 +34,6 @@ Aug 7 18:33:19 HOSTNAME hello-world/foobar/5790672ab6a0[9103]: Hello from Docke
```
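The same template markup works when a tag is set daemon-wide. A hedged `daemon.json` sketch; the driver choice and template are illustrative, not prescribed by this page:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "tag": "{{.ImageName}}/{{.Name}}/{{.ID}}"
  }
}
```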

At startup time, the system sets the `container_name` field and `{{.Name}}` in
the tags. If you use `docker rename` to rename a container, the new name isn't
reflected in the log messages. Instead, these messages continue to use the
original container name.
@@ -1,9 +1,9 @@

---
title: Logentries logging driver
description: Learn how to use the logentries logging driver with Docker Engine
keywords: logentries, docker, logging, driver
aliases:
- /engine/admin/logging/logentries/
---

The `logentries` logging driver sends container logs to the
@@ -13,8 +13,8 @@ The `logentries` logging driver sends container logs to the

Some options are supported by specifying `--log-opt` as many times as needed:

- `logentries-token`: specify the Logentries log set token
- `line-only`: send raw payload only

Configure the default logging driver by passing the
`--log-driver` option to the Docker daemon:
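As a sketch, the daemon-wide equivalent can also be expressed in `daemon.json`; the token below is the placeholder value used elsewhere on this page, not a real one:

```json
{
  "log-driver": "logentries",
  "log-opts": {
    "logentries-token": "abcd1234-12ab-34cd-5678-0123456789ab"
  }
}
```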
@@ -43,7 +43,7 @@ Users can use the `--log-opt NAME=VALUE` flag to specify additional Logentries l

### logentries-token

You need to provide your log set token for the Logentries driver to work:
```console
$ docker run --log-driver=logentries --log-opt logentries-token=abcd1234-12ab-34cd-5678-0123456789ab
```

@@ -55,4 +55,4 @@ You could specify whether to send log message wrapped into container data (defau

```console
$ docker run --log-driver=logentries --log-opt logentries-token=abcd1234-12ab-34cd-5678-0123456789ab --log-opt line-only=true
```
@@ -1,9 +1,9 @@

---
description: Learn about logging driver plugins for extending and customizing Docker's logging capabilities
title: Use a logging driver plugin
keywords: logging, driver, plugins, monitoring
aliases:
- /engine/admin/logging/plugins/
---

Docker logging plugins allow you to extend and customize Docker's logging
@@ -23,7 +23,7 @@ a specific plugin using `docker inspect`.

## Configure the plugin as the default logging driver

When the plugin is installed, you can configure the Docker daemon to use it as
the default by setting the plugin's name as the value of the `log-driver`
key in the `daemon.json`, as detailed in the
[logging overview](configure.md#configure-the-default-logging-driver). If the
@@ -38,4 +38,4 @@ detailed in the

[logging overview](configure.md#configure-the-logging-driver-for-a-container).
If the logging driver supports additional options, you can specify them using
one or more `--log-opt` flags with the option name as the key and the option
value as the value.
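For illustration, a hedged `daemon.json` sketch that sets a hypothetical installed plugin, here named `mylogger`, as the default driver:

```json
{
  "log-driver": "mylogger"
}
```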
@@ -1,10 +1,10 @@

---
description: Learn how to use the Splunk logging driver with Docker Engine
keywords: splunk, docker, logging, driver
title: Splunk logging driver
aliases:
- /engine/reference/logging/splunk/
- /engine/admin/logging/splunk/
---

The `splunk` logging driver sends container logs to
@@ -46,38 +46,38 @@ configuring Docker using `daemon.json`, see

To use the `splunk` driver for a specific container, use the command-line flags
`--log-driver` and `--log-opt` with `docker run`:

```console
$ docker run --log-driver=splunk --log-opt splunk-token=VALUE --log-opt splunk-url=VALUE ...
```
## Splunk options

The following properties let you configure the Splunk logging driver.

- To configure the `splunk` driver across the Docker environment, edit
  `daemon.json` with the key, `"log-opts": {"NAME": "VALUE", ...}`.
- To configure the `splunk` driver for an individual container, use `docker run`
  with the flag, `--log-opt NAME=VALUE ...`.
| Option                      | Required | Description |
| :-------------------------- | :------- | :---------- |
| `splunk-token`              | required | Splunk HTTP Event Collector token. |
| `splunk-url`                | required | Path to your Splunk Enterprise, self-service Splunk Cloud instance, or Splunk Cloud managed cluster (including port and scheme used by HTTP Event Collector) in one of the following formats: `https://your_splunk_instance:8088`, `https://input-prd-p-XXXXXXX.cloud.splunk.com:8088`, or `https://http-inputs-XXXXXXXX.splunkcloud.com`. |
| `splunk-source`             | optional | Event source. |
| `splunk-sourcetype`         | optional | Event source type. |
| `splunk-index`              | optional | Event index. |
| `splunk-capath`             | optional | Path to root certificate. |
| `splunk-caname`             | optional | Name to use for validating the server certificate; by default the hostname of the `splunk-url` is used. |
| `splunk-insecureskipverify` | optional | Ignore server certificate validation. |
| `splunk-format`             | optional | Message format. Can be `inline`, `json`, or `raw`. Defaults to `inline`. |
| `splunk-verify-connection`  | optional | Verify on start that Docker can connect to the Splunk server. Defaults to `true`. |
| `splunk-gzip`               | optional | Enable or disable gzip compression when sending events to a Splunk Enterprise or Splunk Cloud instance. Defaults to `false`. |
| `splunk-gzip-level`         | optional | Set the compression level for gzip. Valid values are -1 (default), 0 (no compression), 1 (best speed) ... 9 (best compression). Defaults to [DefaultCompression](https://golang.org/pkg/compress/gzip/#DefaultCompression). |
| `tag`                       | optional | Specify a tag for the message, which can interpret some markup. The default value is `{{.ID}}` (the first 12 characters of the container ID). Refer to the [log tag option documentation](log_tags.md) for customizing the log tag format. |
| `labels`                    | optional | Comma-separated list of label keys to include in the message, if these labels are specified for the container. |
| `labels-regex`              | optional | Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md). |
| `env`                       | optional | Comma-separated list of environment variable keys to include in the message, if these variables are specified for the container. |
| `env-regex`                 | optional | Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). |
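A hedged `daemon.json` sketch combining a few of the options above; the URL and token are placeholders, and the values are strings as the quoting rules for `log-opts` require:

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "00000000-0000-0000-0000-000000000000",
    "splunk-url": "https://your_splunk_instance:8088",
    "splunk-format": "json"
  }
}
```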

If there is a collision between the `label` and `env` keys, the value of the `env`
takes precedence. Both options add additional fields to the attributes of a
@@ -91,7 +91,6 @@ The path to the root certificate and Common Name is specified using an HTTPS

scheme. This is used for verification. The `SplunkServerDefaultCert` is
automatically generated by Splunk certificates.

```console
$ docker run \
  --log-driver=splunk \
@@ -107,7 +106,6 @@ $ docker run \

  your/application
```

The `splunk-url` for Splunk instances hosted on Splunk Cloud is in a format
like `https://http-inputs-XXXXXXXX.splunkcloud.com` and does not include a
port specifier.
@@ -117,66 +115,72 @@ port specifier.

There are three logging driver messaging formats: `inline` (default), `json`,
and `raw`.

{{< tabs >}}
{{< tab name="Inline" >}}

The default format is `inline` where each log message is embedded as a string.
For example:

```json
{
  "attrs": {
    "env1": "val1",
    "label1": "label1"
  },
  "tag": "MyImage/MyContainer",
  "source": "stdout",
  "line": "my message"
}
```

```json
{
  "attrs": {
    "env1": "val1",
    "label1": "label1"
  },
  "tag": "MyImage/MyContainer",
  "source": "stdout",
  "line": "{\"foo\": \"bar\"}"
}
```

{{< /tab >}}
{{< tab name="JSON" >}}

To format messages as `json` objects, set `--log-opt splunk-format=json`. The
driver attempts to parse every line as a JSON object and send it as an embedded
object. If it can't parse the message, it's sent `inline`. For example:

```json
{
  "attrs": {
    "env1": "val1",
    "label1": "label1"
  },
  "tag": "MyImage/MyContainer",
  "source": "stdout",
  "line": "my message"
}
```

```json
{
  "attrs": {
    "env1": "val1",
    "label1": "label1"
  },
  "tag": "MyImage/MyContainer",
  "source": "stdout",
  "line": {
    "foo": "bar"
  }
}
```

{{< /tab >}}
{{< tab name="Raw" >}}

To format messages as `raw`, set `--log-opt splunk-format=raw`. Attributes
(environment variables and labels) and tags are prefixed to the message. For
example:
@@ -186,13 +190,17 @@ MyImage/MyContainer env1=val1 label1=label1 my message

MyImage/MyContainer env1=val1 label1=label1 {"foo": "bar"}
```

{{< /tab >}}
{{< /tabs >}}

## Advanced options

The Splunk logging driver lets you configure a few advanced options by setting
environment variables for the Docker daemon.
| Environment variable name                        | Default value | Description |
| :----------------------------------------------- | :------------ | :---------- |
| `SPLUNK_LOGGING_DRIVER_POST_MESSAGES_FREQUENCY`  | `5s`          | The time to wait for more messages to batch. |
| `SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE` | `1000`        | The number of messages that should accumulate before sending them in one batch. |
| `SPLUNK_LOGGING_DRIVER_BUFFER_MAX`               | `10 * 1000`   | The maximum number of messages held in buffer for retries. |
| `SPLUNK_LOGGING_DRIVER_CHANNEL_SIZE`             | `4 * 1000`    | The maximum number of pending messages that can be in the channel used to send messages to the background logger worker, which batches them. |
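One way to set these variables for the daemon on a systemd host is a drop-in unit file; a sketch under that assumption, with illustrative values (the file name is hypothetical, though the `docker.service.d` drop-in directory is the usual systemd convention):

```ini
# /etc/systemd/system/docker.service.d/splunk-logging.conf (hypothetical file name)
[Service]
Environment="SPLUNK_LOGGING_DRIVER_POST_MESSAGES_FREQUENCY=10s"
Environment="SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE=2000"
```

Run `systemctl daemon-reload` and restart Docker for such a change to take effect.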
@@ -1,10 +1,10 @@

---
description: Learn how to use the syslog logging driver with Docker Engine
keywords: syslog, docker, logging, driver
title: Syslog logging driver
aliases:
- /engine/reference/logging/syslog/
- /engine/admin/logging/syslog/
---

The `syslog` logging driver routes logs to a `syslog` server. The `syslog` protocol uses
@@ -12,11 +12,11 @@ a raw string as the log message and supports a limited set of metadata. The sysl

message must be formatted in a specific way to be valid. From a valid message, the
receiver can extract the following information:

- Priority: the logging level, such as `debug`, `warning`, `error`, `info`.
- Timestamp: when the event occurred.
- Hostname: where the event happened.
- Facility: which subsystem logged the message, such as `mail` or `kernel`.
- Process name and process ID (PID): The name and ID of the process that generated the log.

The format is defined in [RFC 5424](https://tools.ietf.org/html/rfc5424) and Docker's syslog driver implements the
[ABNF reference](https://tools.ietf.org/html/rfc5424#section-6) in the following way:
@@ -58,7 +58,7 @@ Restart Docker for the changes to take effect.

> **Note**
>
> `log-opts` configuration options in the `daemon.json` configuration file must
> be provided as strings. Numeric and Boolean values (such as the value for
> `syslog-tls-skip-verify`) must therefore be enclosed in quotes (`"`).
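For example, a hedged `daemon.json` sketch where the Boolean is quoted as the note requires; the server address is illustrative:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp+tls://192.168.1.3:514",
    "syslog-tls-skip-verify": "true"
  }
}
```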

You can set the logging driver for a specific container by using the

@@ -79,16 +79,16 @@ container by adding a `--log-opt <key>=<value>` flag for each option when

starting the container.
| Option | Description | Example value |
|
| Option | Description | Example value |
|
||||||
|:-------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------|
|
| :----------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------- |
|
||||||
| `syslog-address` | The address of an external `syslog` server. The URI specifier may be `[tcp|udp|tcp+tls]://host:port`, `unix://path`, or `unixgram://path`. If the transport is `tcp`, `udp`, or `tcp+tls`, the default port is `514`. | `--log-opt syslog-address=tcp+tls://192.168.1.3:514`, `--log-opt syslog-address=unix:///tmp/syslog.sock` |
|
| `syslog-address` | The address of an external `syslog` server. The URI specifier may be `[tcp\|udp\|tcp+tls]://host:port`, `unix://path`, or `unixgram://path`. If the transport is `tcp`, `udp`, or `tcp+tls`, the default port is `514`. | `--log-opt syslog-address=tcp+tls://192.168.1.3:514`, `--log-opt syslog-address=unix:///tmp/syslog.sock` |
|
||||||
| `syslog-facility` | The `syslog` facility to use. Can be the number or name for any valid `syslog` facility. See the [syslog documentation](https://tools.ietf.org/html/rfc5424#section-6.2.1). | `--log-opt syslog-facility=daemon` |
|
| `syslog-facility` | The `syslog` facility to use. Can be the number or name for any valid `syslog` facility. See the [syslog documentation](https://tools.ietf.org/html/rfc5424#section-6.2.1). | `--log-opt syslog-facility=daemon` |
|
||||||
| `syslog-tls-ca-cert` | The absolute path to the trust certificates signed by the CA. **Ignored if the address protocol is not `tcp+tls`.** | `--log-opt syslog-tls-ca-cert=/etc/ca-certificates/custom/ca.pem` |
|
| `syslog-tls-ca-cert` | The absolute path to the trust certificates signed by the CA. Ignored if the address protocol isn't `tcp+tls`. | `--log-opt syslog-tls-ca-cert=/etc/ca-certificates/custom/ca.pem` |
|
||||||
| `syslog-tls-cert` | The absolute path to the TLS certificate file. **Ignored if the address protocol is not `tcp+tls`**. | `--log-opt syslog-tls-cert=/etc/ca-certificates/custom/cert.pem` |
|
| `syslog-tls-cert` | The absolute path to the TLS certificate file. Ignored if the address protocol isn't `tcp+tls`. | `--log-opt syslog-tls-cert=/etc/ca-certificates/custom/cert.pem` |
|
||||||
| `syslog-tls-key` | The absolute path to the TLS key file. **Ignored if the address protocol is not `tcp+tls`.** | `--log-opt syslog-tls-key=/etc/ca-certificates/custom/key.pem` |
|
| `syslog-tls-key` | The absolute path to the TLS key file. Ignored if the address protocol isn't `tcp+tls`. | `--log-opt syslog-tls-key=/etc/ca-certificates/custom/key.pem` |
|
||||||
| `syslog-tls-skip-verify` | If set to `true`, TLS verification is skipped when connecting to the `syslog` daemon. Defaults to `false`. **Ignored if the address protocol is not `tcp+tls`.** | `--log-opt syslog-tls-skip-verify=true` |
|
| `syslog-tls-skip-verify` | If set to `true`, TLS verification is skipped when connecting to the `syslog` daemon. Defaults to `false`. Ignored if the address protocol isn't `tcp+tls`. | `--log-opt syslog-tls-skip-verify=true` |
|
||||||
| `tag` | A string that's appended to the `APP-NAME` in the `syslog` message. By default, Docker uses the first 12 characters of the container ID to tag log messages. Refer to the [log tag option documentation](log_tags.md) for customizing the log tag format. | `--log-opt tag=mailer` |
| `syslog-format` | The `syslog` message format to use. If not specified, the local Unix syslog format is used, without a specified hostname. Specify `rfc3164` for the RFC-3164 compatible format, `rfc5424` for the RFC-5424 compatible format, or `rfc5424micro` for the RFC-5424 compatible format with microsecond timestamp resolution. | `--log-opt syslog-format=rfc5424micro` |
| `labels` | Applies when starting the Docker daemon. A comma-separated list of logging-related labels this daemon accepts. Used for advanced [log tag options](log_tags.md). | `--log-opt labels=production_status,geo` |
| `labels-regex` | Applies when starting the Docker daemon. Similar to and compatible with `labels`. A regular expression to match logging-related labels. Used for advanced [log tag options](log_tags.md). | `--log-opt labels-regex=^(production_status\|geo)` |
| `env` | Applies when starting the Docker daemon. A comma-separated list of logging-related environment variables this daemon accepts. Used for advanced [log tag options](log_tags.md). | `--log-opt env=os,customer` |
| `env-regex` | Applies when starting the Docker daemon. Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). | `--log-opt env-regex=^(os\|customer)` |
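The per-container `--log-opt` flags above can also be set daemon-wide in `/etc/docker/daemon.json`. The following is a sketch only; the values are illustrative, and you should verify the option names against your Docker Engine version:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "tag": "mailer",
    "syslog-format": "rfc5424micro",
    "labels": "production_status,geo"
  }
}
```

Options set this way become the defaults for new containers; a `--log-opt` flag on `docker run` still overrides them per container.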
@@ -1,12 +1,12 @@
---
description: Learn how to run more than one process in a single container
keywords: docker, supervisor, process management
title: Run multiple processes in a container
aliases:
- /articles/using_supervisord/
- /engine/admin/multi-service_container/
- /engine/admin/using_supervisord/
- /engine/articles/using_supervisord/
---
A container's main running process is the `ENTRYPOINT` and/or `CMD` at the
@@ -34,7 +34,7 @@ this in a few different ways.

## Use a wrapper script

Put all of your commands in a wrapper script, complete with testing and
debugging information. Run the wrapper script as your `CMD`. The following is a
naive example. First, the wrapper script:
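The script body itself is elided from this diff hunk. A minimal sketch of the wrapper pattern described above, with `sleep` standing in for the real services (the process names and commands are placeholders, not the original script):

```shell
#!/bin/bash

sleep 1 &     # start the first process in the background
sleep 1 &     # start the second process in the background
wait -n       # block until either background process exits
status=$?     # in a real wrapper, end with: exit "$status"
echo "first process exited with status $status"
```

A real wrapper would also log which process exited and propagate its exit status so the container stops when a service dies.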
@@ -66,10 +66,9 @@ CMD ./my_wrapper_script.sh

## Use Bash job controls

If you have one main process that needs to start first and stay running but you
temporarily need to run some other processes (perhaps to interact with the main
process) then you can use bash's job control. First, the wrapper script:
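The script for this section is likewise elided from the diff. A sketch of the job-control pattern the text describes, with `sleep` standing in for the main process and `echo` for the temporary helper (both are placeholders):

```shell
#!/bin/bash
set -m                    # turn on bash's job control

sleep 1 &                 # start the main process in the background
echo "helper finished"    # run the temporary helper in the foreground

fg %1                     # bring the main process back to the foreground
status=$?                 # exit status of the main process
```

`set -m` is needed because job control is off by default in non-interactive shells, so `fg` would otherwise fail.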
@@ -103,8 +102,8 @@ CMD ./my_wrapper_script.sh

## Use a process manager

Use a process manager like `supervisord`. This is more involved than the other
options, as it requires you to bundle `supervisord` and its configuration into
your image (or base your image on one that includes `supervisord`), along with
the different applications it manages. Then you start `supervisord`, which
manages your processes for you.
@@ -140,4 +139,4 @@ logfile_maxbytes=0

```ini
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
```
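The fragment above is the tail of a program section. A fuller sketch of a `supervisord.conf` of this shape (the program name and command are placeholders, not from the original file):

```ini
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0

[program:app]
command=/usr/local/bin/my_app
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
```

Routing `stdout_logfile` to `/dev/fd/1` with `logfile_maxbytes=0` sends each program's output to the container's stdout, so `docker logs` still works.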
@@ -3,8 +3,8 @@ title: Runtime options with Memory, CPUs, and GPUs
description: Specify the runtime options for a container
keywords: docker, daemon, configuration, runtime
aliases:
- /engine/articles/systemd/
- /engine/admin/resource_constraints/
---
By default, a container has no resource constraints and can use as much of a
@@ -29,10 +29,10 @@ for more information.

## Memory

## Understand the risks of running out of memory

It's important not to allow a running container to consume too much of the
host machine's memory. On Linux hosts, if the kernel detects that there isn't
enough memory to perform important system functions, it throws an `OOME`, or
`Out Of Memory Exception`, and starts killing processes to free up
memory. Any process is subject to killing, including Docker and other important
@@ -40,10 +40,10 @@ applications. This can effectively bring the entire system down if the wrong
process is killed.

Docker attempts to mitigate these risks by adjusting the OOM priority on the
Docker daemon so that it's less likely to be killed than other processes
on the system. The OOM priority on containers isn't adjusted. This makes it more
likely for an individual container to be killed than for the Docker daemon
or other system processes to be killed. You shouldn't try to circumvent
these safeguards by manually setting `--oom-score-adj` to an extreme negative
number on the daemon or a container, or by setting `--oom-kill-disable` on a
container.
@@ -53,37 +53,40 @@ For more information about the Linux kernel's OOM management, see

You can mitigate the risk of system instability due to OOME by:

- Perform tests to understand the memory requirements of your application
  before placing it into production.
- Ensure that your application runs only on hosts with adequate resources.
- Limit the amount of memory your container can use, as described below.
- Be mindful when configuring swap on your Docker hosts. Swap is slower than
  memory but can provide a buffer against running out of system memory.
- Consider converting your container to a
  [service](../../engine/swarm/services.md), and using service-level constraints
  and node labels to ensure that the application runs only on hosts with enough
  memory.

### Limit a container's access to memory

Docker can enforce hard or soft memory limits.

- Hard limits let the container use no more than a fixed amount of memory.
- Soft limits let the container use as much memory as it needs unless certain
  conditions are met, such as when the kernel detects low memory or contention on
  the host machine.

Some of these options have different effects when used alone or when more than
one option is set.

Most of these options take a positive integer, followed by a suffix of `b`, `k`,
`m`, `g`, to indicate bytes, kilobytes, megabytes, or gigabytes.

| Option                 | Description |
| :--------------------- | :---------- |
| `-m` or `--memory=`    | The maximum amount of memory the container can use. If you set this option, the minimum allowed value is `6m` (6 megabytes). That is, you must set the value to at least 6 megabytes. |
| `--memory-swap`\*      | The amount of memory this container is allowed to swap to disk. See [`--memory-swap` details](#--memory-swap-details). |
| `--memory-swappiness`  | By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set `--memory-swappiness` to a value between 0 and 100, to tune this percentage. See [`--memory-swappiness` details](#--memory-swappiness-details). |
| `--memory-reservation` | Allows you to specify a soft limit smaller than `--memory` which is activated when Docker detects contention or low memory on the host machine. If you use `--memory-reservation`, it must be set lower than `--memory` for it to take precedence. Because it is a soft limit, it doesn't guarantee that the container doesn't exceed the limit. |
| `--kernel-memory`      | The maximum amount of kernel memory the container can use. The minimum allowed value is `6m`. Because kernel memory can't be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See [`--kernel-memory` details](#--kernel-memory-details). |
| `--oom-kill-disable`   | By default, if an out-of-memory (OOM) error occurs, the kernel kills processes in a container. To change this behavior, use the `--oom-kill-disable` option. Only disable the OOM killer on containers where you have also set the `-m/--memory` option. If the `-m` flag isn't set, the host can run out of memory and the kernel may need to kill the host system's processes to free memory. |

For more information about cgroups and memory in general, see the documentation
for [Memory Resource Controller](https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt).
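As a worked example of the `b`/`k`/`m`/`g` suffix convention above, here is a small hypothetical shell helper (not part of Docker) that expands these sizes to byte counts:

```shell
# Hypothetical helper: expand a value such as "512m" into bytes,
# following the b/k/m/g suffix convention the memory flags accept.
to_bytes() {
  local num=${1%[bkmg]}             # strip the trailing suffix letter
  case "$1" in
    *b) echo "$num" ;;
    *k) echo $((num * 1024)) ;;
    *m) echo $((num * 1024 * 1024)) ;;
    *g) echo $((num * 1024 * 1024 * 1024)) ;;
    *)  echo "expected a b/k/m/g suffix" >&2; return 1 ;;
  esac
}

to_bytes 6m   # the minimum value accepted by --memory: prints 6291456
```

This mirrors the binary (1024-based) interpretation of the suffixes used for these flags.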
@@ -92,7 +95,7 @@ for [Memory Resource Controller](https://www.kernel.org/doc/Documentation/cgroup

`--memory-swap` is a modifier flag that only has meaning if `--memory` is also
set. Using swap allows the container to write excess memory requirements to disk
when the container has exhausted all the RAM that's available to it. There is a
performance penalty for applications that swap memory to disk often.

Its setting can have complicated effects:
@@ -107,7 +110,7 @@ Its setting can have complicated effects:
  treated as unset.

- If `--memory-swap` is set to the same value as `--memory`, and `--memory` is
  set to a positive integer, **the container doesn't have access to swap**.
  See
  [Prevent a container from using swap](#prevent-a-container-from-using-swap).
@@ -132,7 +135,7 @@ of physical memory that can be used.

- A value of 0 turns off anonymous page swapping.
- A value of 100 sets all anonymous pages as swappable.
- By default, if you don't set `--memory-swappiness`, the value is
  inherited from the host machine.

### `--kernel-memory` details
@@ -145,10 +148,10 @@ a container. Consider the following scenarios:

- **Unlimited memory, limited kernel memory**: This is appropriate when the
  amount of memory needed by all cgroups is greater than the amount of
  memory that actually exists on the host machine. You can configure the
  kernel memory to never go over what's available on the host machine,
  and containers which need more memory need to wait for it.
- **Limited memory, unlimited kernel memory**: The overall memory is
  limited, but the kernel memory isn't.
- **Limited memory, limited kernel memory**: Limiting both user and kernel
  memory can be useful for debugging memory-related problems. If a container
  is using an unexpected amount of either type of memory, it runs out
@@ -156,12 +159,12 @@ a container. Consider the following scenarios:
  this setting, if the kernel memory limit is lower than the user memory
  limit, running out of kernel memory causes the container to experience
  an OOM error. If the kernel memory limit is higher than the user memory
  limit, the kernel limit doesn't cause the container to experience an OOM.

When you enable kernel memory limits, the host machine tracks "high water mark"
statistics on a per-process basis, so you can track which processes (in this
case, containers) are using excess memory. This can be seen per process by
viewing `/proc/<PID>/status` on the host machine.

## CPU
@@ -169,22 +172,22 @@ By default, each container's access to the host machine's CPU cycles is unlimite
You can set various constraints to limit a given container's access to the host
machine's CPU cycles. Most users use and configure the
[default CFS scheduler](#configure-the-default-cfs-scheduler). You can also
configure the [real-time scheduler](#configure-the-real-time-scheduler).

### Configure the default CFS scheduler

The CFS is the Linux kernel CPU scheduler for normal Linux processes. Several
runtime flags let you configure the amount of access to CPU resources your
container has. When you use these settings, Docker modifies the settings for
the container's cgroup on the host machine.

| Option                 | Description |
| :--------------------- | :---------- |
| `--cpus=<value>`       | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set `--cpus="1.5"`, the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting `--cpu-period="100000"` and `--cpu-quota="150000"`. |
| `--cpu-period=<value>` | Specify the CPU CFS scheduler period, which is used alongside `--cpu-quota`. Defaults to 100000 microseconds (100 milliseconds). Most users don't change this from the default. For most use-cases, `--cpus` is a more convenient alternative. |
| `--cpu-quota=<value>`  | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before being throttled, effectively acting as a ceiling. For most use-cases, `--cpus` is a more convenient alternative. |
| `--cpuset-cpus`        | Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be `0-3` (to use the first, second, third, and fourth CPU) or `1,3` (to use the second and fourth CPU). |
| `--cpu-shares`         | Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. `--cpu-shares` doesn't prevent containers from being scheduled in Swarm mode. It prioritizes container CPU resources for the available CPU cycles. It doesn't guarantee or reserve any specific CPU access. |

If you have 1 CPU, each of the following commands guarantees the container at
most 50% of the CPU every second.
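The 50% figure follows directly from the CFS arithmetic: the container may run for `quota` microseconds out of every `period` microseconds, so the fraction of a CPU it can use is `quota / period`. A quick check of that arithmetic:

```shell
# CFS throttling arithmetic: a container may run `quota` microseconds
# out of every `period`-microsecond scheduling window.
period=100000
quota=50000
awk -v q="$quota" -v p="$period" 'BEGIN { printf "%.2f CPUs\n", q / p }'
# prints: 0.50 CPUs
```

The same ratio is what `--cpus` sets for you: `--cpus="0.5"` is shorthand for a 50000-microsecond quota over the default 100000-microsecond period.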
@@ -199,10 +202,10 @@ Which is the equivalent to manually specifying `--cpu-period` and `--cpu-quota`;

```console
$ docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
```

### Configure the real-time scheduler

You can configure your container to use the real-time scheduler, for tasks which
can't use the CFS scheduler. You need to
[make sure the host machine's kernel is configured correctly](#configure-the-host-machines-kernel)
before you can [configure the Docker daemon](#configure-the-docker-daemon) or
[configure individual containers](#configure-individual-containers).
@@ -210,7 +213,7 @@ before you can [configure the Docker daemon](#configure-the-docker-daemon) or

> **Warning**
>
> CPU scheduling and prioritization are advanced kernel-level features. Most
> users don't need to change these values from their defaults. Setting these
> values incorrectly can cause your host system to become unstable or unusable.
{ .warning }
@@ -219,18 +222,18 @@ before you can [configure the Docker daemon](#configure-the-docker-daemon) or

Verify that `CONFIG_RT_GROUP_SCHED` is enabled in the Linux kernel by running
`zcat /proc/config.gz | grep CONFIG_RT_GROUP_SCHED` or by checking for the
existence of the file `/sys/fs/cgroup/cpu.rt_runtime_us`. For guidance on
configuring the kernel real-time scheduler, consult the documentation for your
operating system.

#### Configure the Docker daemon

To run containers using the real-time scheduler, run the Docker daemon with
the `--cpu-rt-runtime` flag set to the maximum number of microseconds reserved
for real-time tasks per runtime period. For instance, with the default period of
1000000 microseconds (1 second), setting `--cpu-rt-runtime=950000` ensures that
containers using the real-time scheduler can run for 950000 microseconds for every
1000000-microsecond period, leaving at least 50000 microseconds available for
non-real-time tasks. To make this configuration permanent on systems which use
`systemd`, see [Control and configure Docker with systemd](../daemon/systemd.md).
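The flags above also have `daemon.json` equivalents. A sketch, assuming the key names match the flags (verify them against the `dockerd` reference for your Docker version):

```json
{
  "cpu-rt-period": 1000000,
  "cpu-rt-runtime": 950000
}
```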
#### Configure individual containers
@@ -240,10 +243,10 @@ start the container using `docker run`. Consult your operating system's
documentation or the `ulimit` command for information on appropriate values.

| Option                     | Description |
| :------------------------- | :---------- |
| `--cap-add=sys_nice`       | Grants the container the `CAP_SYS_NICE` capability, which allows the container to raise process `nice` values, set real-time scheduling policies, set CPU affinity, and other operations. |
| `--cpu-rt-runtime=<value>` | The maximum number of microseconds the container can run at real-time priority within the Docker daemon's real-time scheduler period. You also need the `--cap-add=sys_nice` flag. |
| `--ulimit rtprio=<value>`  | The maximum real-time priority allowed for the container. You also need the `--cap-add=sys_nice` flag. |

The following example command sets each of these three flags on a `debian:jessie`
container.
@ -256,7 +259,7 @@ $ docker run -it \
|
||||||
debian:jessie
|
debian:jessie
|
||||||
```
|
```
|
||||||
|
|
||||||
If the kernel or Docker daemon is not configured correctly, an error occurs.
|
If the kernel or Docker daemon isn't configured correctly, an error occurs.
|
||||||
|
|
||||||
## GPU
|
## GPU
|
||||||
|
|
||||||
@@ -299,21 +302,21 @@ $ docker run -it --rm --gpus all ubuntu nvidia-smi
Exposes all available GPUs and returns a result akin to the following:

```bash
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130                Driver Version: 384.130                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 00000000:00:03.0 Off |                  N/A |
| N/A   36C    P0    39W / 125W |      0MiB /  4036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

Use the `device` option to specify GPUs. For example:

@@ -351,6 +354,6 @@ environment variables. More information on valid variables can be found at the
[nvidia-container-runtime](https://github.com/NVIDIA/nvidia-container-runtime)
GitHub page. These variables can be set in a Dockerfile.

You can also use CUDA images, which set these variables automatically. See the
[CUDA images](https://github.com/NVIDIA/nvidia-docker/wiki/CUDA) GitHub page
for more information.

@@ -1,12 +1,12 @@
---
description: Learn how to measure running containers, and about the different metrics
keywords: docker, metrics, CPU, memory, disk, IO, run, runtime, stats
title: Runtime metrics
aliases:
- /articles/runmetrics/
- /engine/articles/run_metrics/
- /engine/articles/runmetrics/
- /engine/admin/runmetrics/
---

## Docker stats

@@ -25,23 +25,21 @@ redis1  0.07%  796 KB / 64 MB  1.21%
redis2  0.07%  2.746 MB / 64 MB  4.29%  1.266 KB / 648 B  12.4 MB / 0 B
```

The [`docker stats`](../../engine/reference/commandline/stats.md) reference
page has more details about the `docker stats` command.

## Control groups

Linux Containers rely on [control groups](https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt)
which not only track groups of processes, but also expose metrics about
CPU, memory, and block I/O usage. You can access those metrics and
obtain network usage metrics as well. This is relevant for "pure" LXC
containers, as well as for Docker containers.

Control groups are exposed through a pseudo-filesystem. In modern distros, you
should find this filesystem under `/sys/fs/cgroup`. Under that directory, you
see multiple sub-directories, called `devices`, `freezer`, `blkio`, and so on.
Each sub-directory actually corresponds to a different cgroup hierarchy.

On older systems, the control groups might be mounted on `/cgroup`, without
distinct hierarchies. In that case, instead of seeing the sub-directories,

@@ -63,17 +61,19 @@ otherwise you are using v1.
Refer to the subsection that corresponds to your cgroup version.

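As a hedged sketch of that check: on cgroup v2 hosts, the unified hierarchy
exposes a `cgroup.controllers` file at the root of `/sys/fs/cgroup`, which a
short script can test for. On v1 (or non-Linux) hosts the file is absent:

```python
import os

def cgroup_version(root: str = "/sys/fs/cgroup") -> int:
    """Return 2 if a unified (v2) hierarchy is mounted at root, else 1."""
    # cgroup.controllers only exists at the root of a cgroup v2 hierarchy.
    if os.path.exists(os.path.join(root, "cgroup.controllers")):
        return 2
    return 1

print(cgroup_version())
```
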
cgroup v2 is used by default on the following distributions:

- Fedora (since 31)
- Debian GNU/Linux (since 11)
- Ubuntu (since 21.10)

#### cgroup v1

You can look into `/proc/cgroups` to see the different control group subsystems
known to the system, the hierarchy they belong to, and how many groups they contain.

You can also look at `/proc/<pid>/cgroup` to see which control groups a process
belongs to. The control group is shown as a path relative to the root of
the hierarchy mountpoint. `/` means the process hasn't been assigned to a
group, while `/lxc/pumpkin` indicates that the process is a member of a
container named `pumpkin`.

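Each line of that file has the shape `hierarchy-id:controllers:path`, so it's
straightforward to split apart. A minimal sketch, using an illustrative v1-style
sample rather than a live process:

```python
def parse_proc_cgroup(text: str) -> list[tuple[str, str, str]]:
    """Parse /proc/<pid>/cgroup lines into (hierarchy id, controllers, path)."""
    entries = []
    for line in text.strip().splitlines():
        # Split on the first two colons only: the path may contain more.
        hierarchy, controllers, path = line.split(":", 2)
        entries.append((hierarchy, controllers, path))
    return entries

# Illustrative cgroup v1 content for a process inside a container named "pumpkin"
sample = "4:memory:/lxc/pumpkin\n3:cpuacct:/lxc/pumpkin\n"
for hierarchy, controllers, path in parse_proc_cgroup(sample):
    print(controllers, path)
```

On a cgroup v2 host the file typically contains a single `0::/...` line, which
the same parser handles.
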

@@ -87,30 +87,32 @@ See `/sys/fs/cgroup/cgroup.controllers` to the available controllers.
Changing cgroup version requires rebooting the entire system.

On systemd-based systems, cgroup v2 can be enabled by adding `systemd.unified_cgroup_hierarchy=1`
to the kernel command line.
To revert the cgroup version to v1, you need to set `systemd.unified_cgroup_hierarchy=0` instead.

If the `grubby` command is available on your system (for example, on Fedora), the command line can be modified as follows:

```console
$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
```

If the `grubby` command isn't available, edit the `GRUB_CMDLINE_LINUX` line in `/etc/default/grub`
and run `sudo update-grub`.

### Running Docker on cgroup v2

Docker supports cgroup v2 since Docker 20.10.
Running Docker on cgroup v2 also requires the following conditions to be satisfied:

- containerd: v1.4 or later
- runc: v1.0.0-rc91 or later
- Kernel: v4.15 or later (v5.2 or later is recommended)

Note that the cgroup v2 mode behaves slightly differently from the cgroup v1 mode:

- The default cgroup driver (`dockerd --exec-opt native.cgroupdriver`) is `systemd` on v2, `cgroupfs` on v1.
- The default cgroup namespace mode (`docker run --cgroupns`) is `private` on v2, `host` on v1.
- The `docker run` flags `--oom-kill-disable` and `--kernel-memory` are discarded on v2.

### Find the cgroup for a given container

@@ -127,6 +129,7 @@ look it up with `docker inspect` or `docker ps --no-trunc`.

Putting everything together to look at the memory metrics for a Docker
container, take a look at the following paths:

- `/sys/fs/cgroup/memory/docker/<longid>/` on cgroup v1, `cgroupfs` driver
- `/sys/fs/cgroup/memory/system.slice/docker-<longid>.scope/` on cgroup v1, `systemd` driver
- `/sys/fs/cgroup/docker/<longid>/` on cgroup v2, `cgroupfs` driver
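Building these candidate paths from a container's full ID is just string
interpolation; a small, hedged helper (the function name is made up for
illustration):

```python
def memory_cgroup_paths(longid: str) -> list[str]:
    # Candidate locations of a container's memory cgroup, depending on
    # the cgroup version and cgroup driver in use.
    return [
        f"/sys/fs/cgroup/memory/docker/{longid}/",                     # v1, cgroupfs
        f"/sys/fs/cgroup/memory/system.slice/docker-{longid}.scope/",  # v1, systemd
        f"/sys/fs/cgroup/docker/{longid}/",                            # v2, cgroupfs
    ]

longid = "0123abc"  # hypothetical full container ID from `docker ps --no-trunc`
for path in memory_cgroup_paths(longid):
    print(path)
```
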

@@ -136,7 +139,7 @@ container, take a look at the following paths:
> **Note**
>
> This section isn't yet updated for cgroup v2.
> For further information about cgroup v2, refer to [the kernel documentation](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html).

For each subsystem (memory, CPU, and block I/O), one or

@@ -144,7 +147,7 @@ more pseudo-files exist and contain statistics.

#### Memory metrics: `memory.stat`

Memory metrics are found in the `memory` cgroup. The memory
control group adds a little overhead, because it does very fine-grained
accounting of the memory usage on your host. Therefore, many distros
chose to not enable it by default. Generally, to enable it, all you have

@@ -193,26 +196,83 @@ Some others are "counters", or values that can only go up, because
they represent occurrences of a specific event. For instance, `pgfault`
indicates the number of page faults since the creation of the cgroup.

`cache`
: The amount of memory used by the processes of this control group that can be
  associated precisely with a block on a block device. When you read from and
  write to files on disk, this amount increases. This is the case if you use
  "conventional" I/O (`open`, `read`, `write` syscalls) as well as mapped files
  (with `mmap`). It also accounts for the memory used by `tmpfs` mounts, though
  the reasons are unclear.

`rss`
: The amount of memory that doesn't correspond to anything on disk: stacks,
  heaps, and anonymous memory maps.

`mapped_file`
: Indicates the amount of memory mapped by the processes in the control group.
  It doesn't give you information about how much memory is used; it rather
  tells you how it's used.

`pgfault`, `pgmajfault`
: Indicate the number of times that a process of the cgroup triggered a "page
  fault" and a "major fault", respectively. A page fault happens when a process
  accesses a part of its virtual memory space which is nonexistent or protected.
  The former can happen if the process is buggy and tries to access an invalid
  address (it's sent a `SIGSEGV` signal, typically killing it with the famous
  `Segmentation fault` message). The latter can happen when the process reads
  from a memory zone which has been swapped out, or which corresponds to a mapped
  file: in that case, the kernel loads the page from disk, and lets the CPU
  complete the memory access. It can also happen when the process writes to a
  copy-on-write memory zone: likewise, the kernel preempts the process,
  duplicates the memory page, and resumes the write operation on the process's
  own copy of the page. "Major" faults happen when the kernel actually needs to
  read the data from disk. When it just duplicates an existing page, or
  allocates an empty page, it's a regular (or "minor") fault.

`swap`
: The amount of swap currently used by the processes in this cgroup.

`active_anon`, `inactive_anon`
: The amount of anonymous memory that has been identified as respectively
  _active_ and _inactive_ by the kernel. "Anonymous" memory is the memory that is
  _not_ linked to disk pages. In other words, that's the equivalent of the rss
  counter described above. In fact, the very definition of the rss counter is
  `active_anon` + `inactive_anon` - `tmpfs` (where tmpfs is the amount of
  memory used up by `tmpfs` filesystems mounted by this control group). Now,
  what's the difference between "active" and "inactive"? Pages are initially
  "active"; and at regular intervals, the kernel sweeps over the memory, and tags
  some pages as "inactive". Whenever they're accessed again, they're
  immediately re-tagged "active". When the kernel is almost out of memory, and
  time comes to swap out to disk, the kernel swaps "inactive" pages.

`active_file`, `inactive_file`
: Cache memory, with _active_ and _inactive_ similar to the _anon_ memory
  above. The exact formula is `cache` = `active_file` + `inactive_file` +
  `tmpfs`. The exact rules used by the kernel to move memory pages between
  active and inactive sets are different from the ones used for anonymous
  memory, but the general principle is the same. When the kernel needs to
  reclaim memory, it's cheaper to reclaim a clean (that is, non-modified) page
  from this pool, since it can be reclaimed immediately (while anonymous pages
  and dirty/modified pages need to be written to disk first).

`unevictable`
: The amount of memory that cannot be reclaimed; generally, it accounts for
  memory that has been "locked" with `mlock`. It's often used by crypto
  frameworks to make sure that secret keys and other sensitive material never
  gets swapped out to disk.

`memory_limit`, `memsw_limit`
: These aren't really metrics, but a reminder of the limits applied to this
  cgroup. The first one indicates the maximum amount of physical memory that can
  be used by the processes of this control group; the second one indicates the
  maximum amount of RAM+swap.

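The file itself is a flat sequence of `<key> <value>` lines, so the two
identities above (`rss` from the anonymous counters, `cache` from the file
counters) are easy to check programmatically. A hedged sketch with synthetic
numbers, assuming the `tmpfs` contribution is zero:

```python
def parse_memory_stat(text: str) -> dict[str, int]:
    """Parse "<key> <value>" lines from a memory.stat pseudo-file."""
    stats = {}
    for line in text.strip().splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats

# Synthetic excerpt; real files contain many more counters, and the numbers
# here are chosen to satisfy the identities with tmpfs assumed to be zero.
sample = """\
cache 1200
rss 2200
active_anon 1500
inactive_anon 700
active_file 800
inactive_file 400
"""
stats = parse_memory_stat(sample)
assert stats["rss"] == stats["active_anon"] + stats["inactive_anon"]
assert stats["cache"] == stats["active_file"] + stats["inactive_file"]
```
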
Accounting for memory in the page cache is very complex. If two
processes in different control groups both read the same file
(ultimately relying on the same blocks on disk), the corresponding
memory charge is split between the control groups. It's nice, but
it also means that when a cgroup is terminated, it could increase the
memory usage of another cgroup, because they're not splitting the cost
anymore for those memory pages.

### CPU metrics: `cpuacct.stat`

@@ -231,7 +291,7 @@ accumulated by the processes of the container, broken down into `user` and
the process.

Those times are expressed in ticks of 1/100th of a second, also called "user
jiffies". There are `USER_HZ` _"jiffies"_ per second, and on x86 systems,
`USER_HZ` is 100. Historically, this mapped exactly to the number of scheduler
"ticks" per second, but higher frequency scheduling and
[tickless kernels](https://lwn.net/Articles/549580/) have made the number of

@@ -241,24 +301,40 @@ ticks irrelevant.

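Converting those `cpuacct.stat` jiffies to seconds is a single division by
`USER_HZ`. A minimal sketch, using a hardcoded sample rather than a live cgroup
(portably, you would query the tick rate with `os.sysconf("SC_CLK_TCK")`):

```python
USER_HZ = 100  # ticks per second on x86 systems

def cpu_seconds(cpuacct_stat: str) -> dict[str, float]:
    # cpuacct.stat holds "user <ticks>" and "system <ticks>" lines.
    result = {}
    for line in cpuacct_stat.strip().splitlines():
        name, ticks = line.split()
        result[name] = int(ticks) / USER_HZ
    return result

print(cpu_seconds("user 4564\nsystem 1698"))  # → {'user': 45.64, 'system': 16.98}
```
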
Block I/O is accounted in the `blkio` controller.
Different metrics are scattered across different files. While you can
find in-depth details in the [blkio-controller](https://www.kernel.org/doc/Documentation/cgroup-v1/blkio-controller.txt)
file in the kernel documentation, here is a short list of the most
relevant ones:

`blkio.sectors`
: Contains the number of 512-byte sectors read and written by the processes
  member of the cgroup, device by device. Reads and writes are merged in a
  single counter.

`blkio.io_service_bytes`
: Indicates the number of bytes read and written by the cgroup. It has 4
  counters per device, because for each device, it differentiates between
  synchronous vs. asynchronous I/O, and reads vs. writes.

`blkio.io_serviced`
: The number of I/O operations performed, regardless of their size. It also has
  4 counters per device.

`blkio.io_queued`
: Indicates the number of I/O operations currently queued for this cgroup. In
  other words, if the cgroup isn't doing any I/O, this is zero. The opposite is
  not true. In other words, if there is no I/O queued, it doesn't mean that the
  cgroup is idle (I/O-wise). It could be doing purely synchronous reads on an
  otherwise quiescent device, which can therefore handle them immediately,
  without queuing. Also, while it's helpful to figure out which cgroup is
  putting stress on the I/O subsystem, keep in mind that it's a relative
  quantity. Even if a process group doesn't perform more I/O, its queue size can
  increase just because the device load increases because of other devices.

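These `blkio` files use a `<major:minor> <operation> <value>` line format, with
a trailing `Total` summary line. A hedged sketch that groups
`blkio.io_service_bytes`-style output by device, using synthetic sample data:

```python
from collections import defaultdict

def io_bytes_per_device(text: str) -> dict[str, dict[str, int]]:
    # Lines look like "8:0 Read 812605440"; the trailing "Total <n>"
    # summary line has only two fields and is skipped here.
    per_device: dict[str, dict[str, int]] = defaultdict(dict)
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) == 3:
            device, op, nbytes = fields
            per_device[device][op] = int(nbytes)
    return dict(per_device)

sample = "8:0 Read 812605440\n8:0 Write 102400\n8:0 Sync 812707840\nTotal 812707840"
print(io_bytes_per_device(sample)["8:0"]["Read"])
```
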
 ### Network metrics

-Network metrics are not exposed directly by control groups. There is a
+Network metrics aren't exposed directly by control groups. There is a
 good explanation for that: network interfaces exist within the context
-of *network namespaces*. The kernel could probably accumulate metrics
+of _network namespaces_. The kernel could probably accumulate metrics
 about packets and bytes sent and received by a group of processes, but
 those metrics wouldn't be very useful. You want per-interface metrics
 (because traffic happening on the local `lo`

@@ -269,11 +345,11 @@ interfaces, potentially multiple `eth0`
 interfaces, etc.; so this is why there is no easy way to gather network
 metrics with control groups.

-Instead we can gather network metrics from other sources:
+Instead you can gather network metrics from other sources.

-#### IPtables
+#### iptables

-IPtables (or rather, the netfilter framework for which iptables is just
+iptables (or rather, the netfilter framework for which iptables is just
 an interface) can do some serious accounting.

 For instance, you can set up a rule to account for the outbound HTTP
@@ -293,7 +369,7 @@ Later, you can check the values of the counters, with:
 $ iptables -nxvL OUTPUT
 ```

-Technically, `-n` is not required, but it
+Technically, `-n` isn't required, but it
 prevents iptables from doing DNS reverse lookups, which are probably
 useless in this scenario.

@@ -317,15 +393,15 @@ to a virtual Ethernet interface in your host, with a name like `vethKk8Zqi`.
 Figuring out which interface corresponds to which container is, unfortunately,
 difficult.

-But for now, the best way is to check the metrics *from within the
-containers*. To accomplish this, you can run an executable from the host
+But for now, the best way is to check the metrics _from within the
+containers_. To accomplish this, you can run an executable from the host
 environment within the network namespace of a container using **ip-netns
 magic**.

 The `ip-netns exec` command allows you to execute any
 program (present in the host system) within any network namespace
 visible to the current process. This means that your host can
 enter the network namespace of your containers, but your containers
 can't access the host or other peer containers.
 Containers can interact with their sub-containers, though.

@@ -341,7 +417,7 @@ For example:
 $ ip netns exec mycontainer netstat -i
 ```

-`ip netns` finds the "mycontainer" container by
+`ip netns` finds the `mycontainer` container by
 using namespaces pseudo-files. Each process belongs to one network
 namespace, one PID namespace, one `mnt` namespace,
 etc., and those namespaces are materialized under

@@ -382,7 +458,7 @@ $ ip netns exec $CID netstat -i
 Running a new process each time you want to update metrics is
 (relatively) expensive. If you want to collect metrics at high
 resolutions, and/or over a large number of containers (think 1000
-containers on a single host), you do not want to fork a new process each
+containers on a single host), you don't want to fork a new process each
 time.

 Here is how to collect metrics from a single process. You need to

@@ -395,7 +471,7 @@ the namespace pseudo-file (remember: that's the pseudo-file in

 However, there is a catch: you must not keep this file descriptor open.
 If you do, when the last process of the control group exits, the
-namespace is not destroyed, and its network resources (like the
+namespace isn't destroyed, and its network resources (like the
 virtual interface of the container) stay around forever (or until
 you close that file descriptor).
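The long-lived collector described here amounts to: open the namespace pseudo-file, call `setns()`, read the metrics, and close the descriptor right away. A minimal Python sketch of just that plumbing (the helper names are illustrative, and actually entering another namespace requires `CAP_SYS_ADMIN`, so that call is shown commented out):

```python
import ctypes
import os

CLONE_NEWNET = 0x40000000  # from <sched.h>: restrict setns() to network namespaces

libc = ctypes.CDLL(None, use_errno=True)

def open_netns_fd(pid):
    """Open the network-namespace pseudo-file of the given process."""
    return os.open(f"/proc/{pid}/ns/net", os.O_RDONLY)

def enter_netns(fd):
    """Switch the calling thread into the namespace behind fd (needs CAP_SYS_ADMIN)."""
    if libc.setns(fd, CLONE_NEWNET) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

fd = open_netns_fd(os.getpid())  # in real use: the PID of a process in the container
# enter_netns(fd)                # then read /proc/net/dev from inside the namespace
os.close(fd)                     # close promptly, or the namespace is never destroyed
```

Re-opening the pseudo-file for each sample keeps the descriptor from pinning the namespace, which is exactly the catch described above.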
@@ -404,7 +480,7 @@ container, and re-open the namespace pseudo-file each time.

 ## Collect metrics when a container exits

-Sometimes, you do not care about real time metric collection, but when a
+Sometimes, you don't care about real time metric collection, but when a
 container exits, you want to know how much CPU, memory, etc. it has
 used.

@@ -434,4 +510,4 @@ and remove the container control group. To remove a control group, just
 `rmdir` its directory. It's counter-intuitive to
 `rmdir` a directory as it still contains files; but
 remember that this is a pseudo-filesystem, so usual rules don't apply.
 After the cleanup is done, the collection process can exit safely.
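The end-of-life snapshot described here boils down to reading a few `key value` pseudo-files and then removing the directory. A small Python sketch of the parsing step (the cgroup v1 `memory.stat` format and the `/sys/fs/cgroup` path are assumptions for illustration):

```python
def parse_cgroup_stat(text):
    """Parse the flat 'key value' format used by pseudo-files like memory.stat."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            stats[key] = int(value)
    return stats

# In real use: read e.g. /sys/fs/cgroup/memory/docker/<id>/memory.stat just
# before cleanup, record the values, then rmdir the control group directory.
sample = "cache 11492564992\nrss 1930993664\nmapped_file 306183680\n"
print(parse_cgroup_stat(sample)["rss"])  # → 1930993664
```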
@@ -1,28 +1,28 @@
 ---
-description: Configuring and troubleshooting the Docker daemon
+description: Configuring the Docker daemon
-keywords: docker, daemon, configuration, troubleshooting
+keywords: docker, daemon, configuration
 title: Docker daemon configuration overview
 aliases:
 - /articles/chef/
 - /articles/configuring/
 - /articles/dsc/
 - /articles/puppet/
 - /config/thirdparty/
 - /config/thirdparty/ansible/
 - /config/thirdparty/chef/
 - /config/thirdparty/dsc/
 - /config/thirdparty/puppet/
 - /engine/admin/
 - /engine/admin/ansible/
 - /engine/admin/chef/
 - /engine/admin/configuring/
 - /engine/admin/dsc/
 - /engine/admin/puppet/
 - /engine/articles/chef/
 - /engine/articles/configuring/
 - /engine/articles/dsc/
 - /engine/articles/puppet/
 - /engine/userguide/
 ---

 This page shows you how to customize the Docker daemon, `dockerd`.

@@ -52,23 +52,6 @@ To configure the Docker daemon using a JSON file, create a file at
 `/etc/docker/daemon.json` on Linux systems, or
 `C:\ProgramData\docker\config\daemon.json` on Windows.

-Here's what the configuration file might look like:
-
-```json
-{
-  "builder": {
-    "gc": {
-      "defaultKeepStorage": "20GB",
-      "enabled": true
-    }
-  },
-  "experimental": false
-}
-```
-
-In addition to Docker Desktop default values, this configuration enables garbage
-collection at a 20GB threshold, and enables buildkit.
-
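Since this change drops the Docker Desktop example, the paragraph that follows refers to debug mode and TLS without showing a file. A minimal `daemon.json` along those lines might look like the sketch below (the certificate paths and address are placeholder assumptions):

```json
{
  "debug": true,
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://192.168.59.3:2376"]
}
```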
 Using this configuration file, run the Docker daemon in debug mode, using TLS, and
 listen for traffic routed to `192.168.59.3` on port `2376`. You can learn what
 configuration options are available in the

@@ -132,4 +115,4 @@ You can configure the Docker daemon to use a different directory, using the
 Since the state of a Docker daemon is kept on this directory, make sure you use
 a dedicated directory for each daemon. If two daemons share the same directory,
 for example, an NFS share, you are going to experience errors that are difficult
 to troubleshoot.
@@ -1,6 +1,6 @@
 ---
 title: Read the daemon logs
-description: How to read the container logs for the Docker daemon.
+description: How to read the event logs for the Docker daemon
 keywords: docker, daemon, configuration, troubleshooting, logging
 ---

@@ -13,8 +13,8 @@ subsystem used:
 | Linux | Use the command `journalctl -xu docker.service` (or read `/var/log/syslog` or `/var/log/messages`, depending on your Linux distribution) |
 | macOS (`dockerd` logs) | `~/Library/Containers/com.docker.docker/Data/log/vm/dockerd.log` |
 | macOS (`containerd` logs) | `~/Library/Containers/com.docker.docker/Data/log/vm/containerd.log` |
 | Windows (WSL2) (`dockerd` logs) | `%LOCALAPPDATA%\Docker\log\vm\dockerd.log` |
 | Windows (WSL2) (`containerd` logs) | `%LOCALAPPDATA%\Docker\log\vm\containerd.log` |
 | Windows (Windows containers) | Logs are in the Windows Event Log |

 To view the `dockerd` logs on macOS, open a terminal window, and use the `tail`
@@ -40,9 +40,8 @@ There are two ways to enable debugging. The recommended approach is to set the
 Docker platform.

 1. Edit the `daemon.json` file, which is usually located in `/etc/docker/`. You
-   may need to create this file, if it does not yet exist. On macOS or Windows,
-   do not edit the file directly. Instead, go to **Preferences** / **Daemon** /
-   **Advanced**.
+   may need to create this file, if it doesn't yet exist. On macOS or Windows,
+   don't edit the file directly. Instead, edit the file through the Docker Desktop settings.

 2. If the file is empty, add the following:

@@ -53,9 +52,9 @@ Docker platform.
    ```

    If the file already contains JSON, just add the key `"debug": true`, being
-   careful to add a comma to the end of the line if it is not the last line
+   careful to add a comma to the end of the line if it's not the last line
    before the closing bracket. Also verify that if the `log-level` key is set,
-   it is set to either `info` or `debug`. `info` is the default, and possible
+   it's set to either `info` or `debug`. `info` is the default, and possible
    values are `debug`, `info`, `warn`, `error`, `fatal`.

 3. Send a `HUP` signal to the daemon to cause it to reload its configuration.

@@ -91,7 +90,7 @@ sending a `SIGUSR1` signal to the daemon.

 Run the executable with the flag `--pid=<PID of daemon>`.

-This forces a stack trace to be logged but does not stop the daemon. Daemon logs
+This forces a stack trace to be logged but doesn't stop the daemon. Daemon logs
 show the stack trace or the path to a file containing the stack trace if it was
 logged to a file.

@@ -109,7 +108,7 @@ The Docker daemon log can be viewed by using one of the following methods:

 > **Note**
 >
-> It is not possible to manually generate a stack trace on Docker Desktop for
+> It isn't possible to manually generate a stack trace on Docker Desktop for
 > Mac or Docker Desktop for Windows. However, you can click the Docker taskbar
 > icon and choose **Troubleshoot** to send information to Docker if you run into
 > issues.
@@ -123,4 +122,4 @@ Look in the Docker logs for a message like the following:
 The locations where Docker saves these stack traces and dumps depend on your
 operating system and configuration. You can sometimes get useful diagnostic
 information straight from the stack traces and dumps. Otherwise, you can provide
 this information to Docker for help diagnosing the problem.

@@ -3,9 +3,9 @@ description: Collecting Docker metrics with Prometheus
 keywords: prometheus, metrics
 title: Collect Docker metrics with Prometheus
 aliases:
 - /engine/admin/prometheus/
 - /config/thirdparty/monitoring/
 - /config/thirdparty/prometheus/
 ---

 [Prometheus](https://prometheus.io/) is an open-source systems monitoring and

@@ -19,13 +19,13 @@ container, and monitor your Docker instance using Prometheus.
 > development and may change at any time.
 { .warning }

-Currently, you can only monitor Docker itself. You cannot currently monitor your
+Currently, you can only monitor Docker itself. You can't currently monitor your
 application using the Docker target.

 ## Prerequisites

-1. One or more Docker engines are joined into a Docker swarm, using `docker
+1. One or more Docker engines are joined into a Docker Swarm, using `docker
    swarm init` on one manager and `docker swarm join` on other managers and
    worker nodes.
 2. You need an internet connection to pull the Prometheus image.

@@ -33,23 +33,22 @@ application using the Docker target.

 To configure the Docker daemon as a Prometheus target, you need to specify the
 `metrics-addr`. The best way to do this is via the `daemon.json`, which is
-located at one of the following locations by default. If the file does not
+located at one of the following locations by default. If the file doesn't
 exist, create it.

 - **Linux**: `/etc/docker/daemon.json`
 - **Windows Server**: `C:\ProgramData\docker\config\daemon.json`
-- **Docker Desktop for Mac / Docker Desktop for Windows**: Click the Docker icon in the toolbar,
-  select **Settings**, then select **Docker Engine**.
+- **Docker Desktop**: Open the Docker Desktop settings and select **Docker Engine**.

 If the file is currently empty, paste the following:

 ```json
 {
-  "metrics-addr" : "127.0.0.1:9323"
+  "metrics-addr": "127.0.0.1:9323"
 }
 ```

-If the file is not empty, add the new key, making sure that the resulting
+If the file isn't empty, add the new key, making sure that the resulting
 file is valid JSON. Be careful that every line ends with a comma (`,`) except
 for the last line.
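Hand-editing JSON makes the trailing-comma mistake easy; merging the key programmatically avoids it. A sketch (the helper is illustrative, not official tooling, and writing the real `/etc/docker/daemon.json` needs root):

```python
import json

def add_metrics_addr(path, addr="127.0.0.1:9323"):
    """Insert or update the metrics-addr key, keeping the file valid JSON."""
    try:
        with open(path) as f:
            config = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        config = {}  # missing or empty file: start from an empty object
    config["metrics-addr"] = addr
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Point it at a copy of the file first if you want to inspect the result before replacing the real configuration.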
@@ -69,14 +68,14 @@ except for the addition of the Docker job definition at the bottom of the file.
 ```yml
 # my global config
 global:
   scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
   evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
   # scrape_timeout is set to the global default (10s).

   # Attach these labels to any time series or alerts when communicating with
   # external systems (federation, remote storage, Alertmanager).
   external_labels:
-    monitor: 'codelab-monitor'
+    monitor: "codelab-monitor"

 # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
 rule_files:

@@ -87,20 +86,21 @@ rule_files:
 # Here it's Prometheus itself.
 scrape_configs:
   # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
-  - job_name: 'prometheus'
+  - job_name: "prometheus"

     # metrics_path defaults to '/metrics'
     # scheme defaults to 'http'.

     static_configs:
-      - targets: ['host.docker.internal:9090']
+      - targets: ["host.docker.internal:9090"]

-  - job_name: 'docker'
+  - job_name:
+      "docker"
     # metrics_path defaults to '/metrics'
     # scheme defaults to 'http'.

     static_configs:
-      - targets: ['localhost:9323']
+      - targets: ["localhost:9323"]
 ```

 Next, start a single-replica Prometheus service using this configuration.

@@ -124,16 +124,16 @@ Next, start a single-replica Prometheus service using this configuration.
   prom/prometheus
 ```

-Verify that the Docker target is listed at http://localhost:9090/targets/.
+Verify that the Docker target is listed at `http://localhost:9090/targets/`.

 You can't access the endpoint URLs directly if you use Docker Desktop
 for Mac or Docker Desktop for Windows.

 ## Use Prometheus

-Create a graph. Click the **Graphs** link in the Prometheus UI. Choose a metric
+Create a graph. Select the **Graphs** link in the Prometheus UI. Choose a metric
 from the combo box to the right of the **Execute** button, and click
 **Execute**. The screenshot below shows the graph for
 `engine_daemon_network_actions_seconds_count`.
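Counters such as this one only ever grow, so the raw series mostly shows a ramp; graphing a per-second rate is usually more telling. A PromQL sketch (the 5-minute window is an arbitrary choice):

```
rate(engine_daemon_network_actions_seconds_count[5m])
```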
@@ -160,7 +160,7 @@ your graph.

 When you are ready, stop and remove the `ping_service` service, so that you
-are not flooding a host with pings for no reason.
+aren't flooding a host with pings for no reason.

 ```console
 $ docker service remove ping_service

@@ -169,8 +169,7 @@ $ docker service remove ping_service
 Wait a few minutes and you should see that the graph falls back to the idle
 level.
-

 ## Next steps

 - Read the [Prometheus documentation](https://prometheus.io/docs/introduction/overview/)
 - Set up some [alerts](https://prometheus.io/docs/alerting/overview/)

@@ -1,9 +1,8 @@
 ---
-description: 'Configuring remote access allows Docker to accept requests from remote
-  hosts by configuring it to listen on an IP address and port as well as the Unix
-  socket
-
-  '
+description:
+  Configuring remote access allows Docker to accept requests from remote
+  hosts by configuring it to listen on an IP address and port as well as the Unix
+  socket
 keywords: configuration, daemon, remote access, engine
 title: Configure remote access for Docker daemon
 ---

@@ -16,7 +15,7 @@ refer to the
 [dockerd CLI reference](/engine/reference/commandline/dockerd/#bind-docker-to-another-hostport-or-a-unix-socket).

 <!-- prettier-ignore -->
-> Secure your connection
+> **Warning**
 >
 > Before configuring Docker to accept connections from remote hosts it's
 > critically important that you understand the security implications of opening

@@ -88,4 +87,4 @@ you can use the `daemon.json` file, if your distribution doesn't use systemd.
 ```console
 $ sudo netstat -lntp | grep dockerd
 tcp        0      0 127.0.0.1:2375          0.0.0.0:*               LISTEN      3758/dockerd
 ```

@ -1,12 +1,12 @@
|
||||||
---
|
---
|
||||||
description: Controlling and configuring Docker using systemd
|
description: Learn about controlling and configuring the Docker daemon using systemd
|
||||||
keywords: dockerd, daemon, systemd, configuration, proxy, networking
|
keywords: dockerd, daemon, systemd, configuration, proxy, networking
|
||||||
title: Configure the daemon with systemd
|
title: Configure the daemon with systemd
|
||||||
aliases:
|
aliases:
|
||||||
- /articles/host_integration/
|
- /articles/host_integration/
|
||||||
- /articles/systemd/
|
- /articles/systemd/
|
||||||
- /engine/admin/systemd/
|
- /engine/admin/systemd/
|
||||||
- /engine/articles/systemd/
|
- /engine/articles/systemd/
|
||||||
---
|
---
|
||||||
|
|
||||||
This page describes how to customize daemon settings when using systemd.
|
This page describes how to customize daemon settings when using systemd.
|
||||||
|
|
@ -22,7 +22,7 @@ more information.
|
||||||
When installing the binary without a package manager, you may want to integrate
|
When installing the binary without a package manager, you may want to integrate
|
||||||
Docker with systemd. For this, install the two unit files (`service` and
|
Docker with systemd. For this, install the two unit files (`service` and
|
||||||
`socket`) from
|
`socket`) from
|
||||||
[the github repository](https://github.com/moby/moby/tree/master/contrib/init/systemd)
|
[the GitHub repository](https://github.com/moby/moby/tree/master/contrib/init/systemd)
|
||||||
to `/etc/systemd/system`.
|
to `/etc/systemd/system`.
|
||||||
|
|
||||||
### Configure the Docker daemon to use a proxy server {#httphttps-proxy}
|
### Configure the Docker daemon to use a proxy server {#httphttps-proxy}
|
||||||
|
|
@ -52,7 +52,7 @@ behavior for the daemon in the [`daemon.json` file](./index.md#configure-the-doc
|
||||||
|
|
||||||
These configurations override the default `docker.service` systemd file.
|
These configurations override the default `docker.service` systemd file.
|
||||||
|
|
||||||
If you are behind an HTTP or HTTPS proxy server, for example in corporate
|
If you're behind an HTTP or HTTPS proxy server, for example in corporate
|
||||||
settings, the daemon proxy configurations must be specified in the systemd
|
settings, the daemon proxy configurations must be specified in the systemd
|
||||||
service file, not in the `daemon.json` file or using environment variables.
|
service file, not in the `daemon.json` file or using environment variables.
|
||||||
|
|
||||||
|
|
@ -68,6 +68,7 @@ service file, not in the `daemon.json` file or using environment variables.
|
||||||
|
|
||||||
{{< tabs >}}
|
{{< tabs >}}
|
||||||
{{< tab name="regular install" >}}
|
{{< tab name="regular install" >}}
|
||||||
|
|
||||||
1. Create a systemd drop-in directory for the `docker` service:
|
1. Create a systemd drop-in directory for the `docker` service:
|
||||||
|
|
||||||
```console
|
```console
|
||||||
|
|
@ -104,7 +105,7 @@ service file, not in the `daemon.json` file or using environment variables.
|
||||||
> Special characters in the proxy value, such as `#?!()[]{}`, must be double
|
> Special characters in the proxy value, such as `#?!()[]{}`, must be double
|
||||||
> escaped using `%%`. For example:
|
> escaped using `%%`. For example:
|
||||||
>
|
>
|
||||||
> ```
|
> ```systemd
|
||||||
> [Service]
|
> [Service]
|
||||||
> Environment="HTTP_PROXY=http://domain%%5Cuser:complex%%23pass@proxy.example.com:3128/"
|
> Environment="HTTP_PROXY=http://domain%%5Cuser:complex%%23pass@proxy.example.com:3128/"
|
||||||
> ```
|
> ```
|
||||||
|
|
@ -127,7 +128,7 @@ service file, not in the `daemon.json` file or using environment variables.
|
||||||
- Literal port numbers are accepted by IP address prefixes (`1.2.3.4:80`) and
|
- Literal port numbers are accepted by IP address prefixes (`1.2.3.4:80`) and
|
||||||
domain names (`foo.example.com:80`)
|
domain names (`foo.example.com:80`)
|
||||||
|
|
||||||
Config example:
|
Example:
|
||||||
|
|
||||||
```systemd
|
```systemd
|
||||||
[Service]
|
[Service]
|
||||||
|
|
@ -151,8 +152,10 @@ service file, not in the `daemon.json` file or using environment variables.
|
||||||
|
|
||||||
Environment=HTTP_PROXY=http://proxy.example.com:3128 HTTPS_PROXY=https://proxy.example.com:3129 NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp
|
Environment=HTTP_PROXY=http://proxy.example.com:3128 HTTPS_PROXY=https://proxy.example.com:3129 NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp
```

{{< /tab >}}
{{< tab name="rootless mode" >}}

1. Create a systemd drop-in directory for the `docker` service:

   ```console

@@ -189,7 +192,7 @@ service file, not in the `daemon.json` file or using environment variables.
> Special characters in the proxy value, such as `#?!()[]{}`, must be double
> escaped using `%%`. For example:
>
> ```
> ```systemd
> [Service]
> Environment="HTTP_PROXY=http://domain%%5Cuser:complex%%23pass@proxy.example.com:3128/"
> ```

@@ -212,7 +215,7 @@ service file, not in the `daemon.json` file or using environment variables.
- Literal port numbers are accepted by IP address prefixes (`1.2.3.4:80`) and
  domain names (`foo.example.com:80`)

Config example:
Example:

```systemd
[Service]

@@ -236,5 +239,6 @@ service file, not in the `daemon.json` file or using environment variables.

Environment=HTTP_PROXY=http://proxy.example.com:3128 HTTPS_PROXY=https://proxy.example.com:3129 NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp
```

{{< /tab >}}
{{< /tabs >}}
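The `NO_PROXY` rules above (exact host names, and leading-dot entries matching any subdomain) can be sketched in plain shell. This is an illustration only, not Docker's implementation, and `no_proxy_match` is a hypothetical helper name; the real matcher also handles IP prefixes and ports:

```shell
# Illustrative only: mimics two NO_PROXY rules — an entry matches a host
# exactly, or, when it starts with ".", as a domain suffix.
no_proxy_match() {
  _host="$1"
  _old_ifs="$IFS"; IFS=','          # split the NO_PROXY list on commas
  for _entry in $NO_PROXY; do
    case "$_entry" in
      .*) case "$_host" in *"$_entry") IFS="$_old_ifs"; return 0 ;; esac ;;
      *)  [ "$_host" = "$_entry" ] && { IFS="$_old_ifs"; return 0; } ;;
    esac
  done
  IFS="$_old_ifs"; return 1
}

NO_PROXY="localhost,127.0.0.1,docker-registry.example.com,.corp"
no_proxy_match docker-registry.example.com && echo "bypass proxy"  # exact match
no_proxy_match build01.corp && echo "bypass proxy"                 # ".corp" suffix
no_proxy_match registry.example.org || echo "use proxy"            # no match
```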
@@ -1,7 +1,7 @@
---
title: Troubleshoot the Docker daemon
title: Troubleshooting the Docker daemon
description: Configuring and troubleshooting the Docker daemon
description: Learn how to troubleshoot errors and misconfigurations in the Docker daemon
keywords: docker, daemon, configuration, troubleshooting
keywords: docker, daemon, configuration, troubleshooting, error, fail to start
---

This page describes how to troubleshoot and debug the daemon if you run into
@@ -29,9 +29,11 @@ If you see an error similar to this one and you are starting the daemon manually
with flags, you may need to adjust your flags or the `daemon.json` to remove the
conflict.

> **Note**: If you see this specific error, continue to the
> [next section](#use-the-hosts-key-in-daemonjson-with-systemd) for a
> workaround.
> **Note**
>
> If you see this specific error, continue to the
> [next section](#use-the-hosts-key-in-daemonjson-with-systemd)
> for a workaround.

If you are starting Docker using your operating system's init scripts, you may
need to override the defaults in these scripts in ways that are specific to the
@@ -39,7 +41,7 @@ operating system.

### Use the hosts key in daemon.json with systemd

One notable example of a configuration conflict that is difficult to
One notable example of a configuration conflict that's difficult to
troubleshoot is when you want to specify a different daemon address from the
default. Docker listens on a socket by default. On Debian and Ubuntu systems
using `systemd`, this means that a host flag `-H` is always used when starting
@@ -48,9 +50,9 @@ configuration conflict (as in the above message) and Docker fails to start.

To work around this problem, create a new file
`/etc/systemd/system/docker.service.d/docker.conf` with the following contents,
to remove the `-H` argument that is used when starting the daemon by default.
to remove the `-H` argument that's used when starting the daemon by default.

```none
```systemd
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
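As a sketch, the drop-in above can be staged in a scratch directory and sanity-checked before it ever touches `/etc/systemd/system` (the temporary path below is purely illustrative):

```shell
# Stage the override in a scratch directory (stand-in for
# /etc/systemd/system/docker.service.d/), then sanity-check it.
dropin_dir="$(mktemp -d)/docker.service.d"
mkdir -p "$dropin_dir"
cat > "$dropin_dir/docker.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
EOF

# The bare "ExecStart=" clears the packaged command line (including its -H
# flag) before the second line sets the new one; both lines are required.
grep -c '^ExecStart=' "$dropin_dir/docker.conf"   # prints: 2
```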
@@ -59,18 +61,20 @@ ExecStart=/usr/bin/dockerd

There are other times when you might need to configure `systemd` with Docker,
such as [configuring an HTTP or HTTPS proxy](systemd.md#httphttps-proxy).

> **Note**: If you override this option and then do not specify a `hosts` entry
> in the `daemon.json` or a `-H` flag when starting Docker manually, Docker
> fails to start.
> **Note**
>
> If you override this option without specifying a `hosts` entry in the
> `daemon.json` or a `-H` flag when starting Docker manually, Docker fails to
> start.

Run `sudo systemctl daemon-reload` before attempting to start Docker. If Docker
starts successfully, it is now listening on the IP address specified in the
starts successfully, it's now listening on the IP address specified in the
`hosts` key of the `daemon.json` instead of a socket.

<!-- prettier-ignore -->
> **Important**
>
> Setting `hosts` in the `daemon.json` is not supported on Docker
> Setting `hosts` in the `daemon.json` isn't supported on Docker
> Desktop for Windows or Docker Desktop for Mac.
{ .important }
@@ -94,4 +98,4 @@ You can also use operating system utilities, such as
utilities.

Finally, you can check in the process list for the `dockerd` process, using
commands like `ps` or `top`.
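A minimal check along those lines (the bracket in the pattern is a common trick that keeps `grep` from matching its own process):

```shell
# Look for a dockerd process; "[d]ockerd" never matches the grep command
# itself, so the check reflects only a real daemon process.
if ps aux | grep -q '[d]ockerd'; then
  echo "dockerd is running"
else
  echo "dockerd is not running"
fi
```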
@@ -3,36 +3,32 @@ description: CLI and log output formatting reference
keywords: format, formatting, output, templates, log
title: Format command and log output
aliases:
- /engine/admin/formatting/
---

Docker uses [Go templates](https://golang.org/pkg/text/template/) which you can
use to manipulate the output format of certain commands and log drivers.
Docker supports [Go templates](https://golang.org/pkg/text/template/) which you
can use to manipulate the output format of certain commands and log drivers.

Docker provides a set of basic functions to manipulate template elements.
All of these examples use the `docker inspect` command, but many other CLI
commands have a `--format` flag, and many of the CLI command references
include examples of customizing the output format.

>**Note**
> **Note**
>
> When using the `--format` flag, you need to observe your shell environment.
> In a Posix shell, you can run the following with a single quote:
> In a POSIX shell, you can run the following with a single quote:
>
> ```console
> $ docker inspect --format '{{join .Args " , "}}'
> ```
>
> Otherwise, in a Windows shell (for example, PowerShell), you need to use single quotes, but
> escape the double quotes inside the params as follows:
> escape the double quotes inside the parameters as follows:
>
> ```console
> $ docker inspect --format '{{join .Args \" , \"}}'
> ```
{ .important }
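The quoting rule matters because of what the shell does before Docker ever sees the template. In a POSIX shell, single quotes hand the string over verbatim, inner double quotes included, which you can verify without Docker at all:

```shell
# Single quotes: the template reaches the command byte-for-byte,
# inner double quotes included.
printf '%s\n' '{{join .Args " , "}}'
# prints: {{join .Args " , "}}

# Double quotes: the shell consumes the inner quotes and splits the
# template into three separate arguments before the command runs.
printf '%s\n' "{{join .Args " , "}}"
```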
@@ -41,89 +37,70 @@ include examples of customizing the output format.
`join` concatenates a list of strings to create a single string.
It puts a separator between each element in the list.

```console
$ docker inspect --format '{{join .Args " , "}}' container
```

## table

`table` specifies which fields you want to see in its output.

```console
$ docker image list --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}\t{{.Size}}"
```

## json

`json` encodes an element as a json string.

```console
$ docker inspect --format '{{json .Mounts}}' container
```

## lower

`lower` transforms a string into its lowercase representation.

```console
$ docker inspect --format "{{lower .Name}}" container
```

## split

`split` slices a string into a list of strings separated by a separator.

```console
$ docker inspect --format '{{split .Image ":"}}' container
```
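As a rough analogue of what `split` and `join` do to a value (plain shell, illustration only; the real functions run inside Docker's Go template engine), splitting an image reference on `:` and joining values with a separator look like this:

```shell
# "split": break an image reference on the first ":".
image="alpine:3.18"
repo="${image%%:*}"   # everything before the ":"
tag="${image#*:}"     # everything after the ":"
echo "$repo $tag"     # prints: alpine 3.18

# "join": glue values back together with a " , " separator.
printf '%s , %s\n' "$repo" "$tag"   # prints: alpine , 3.18
```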
## title

`title` capitalizes the first character of a string.

```console
$ docker inspect --format "{{title .Name}}" container
```

## upper

`upper` transforms a string into its uppercase representation.

```console
$ docker inspect --format "{{upper .Name}}" container
```

## println

`println` prints each value on a new line.

```console
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{println .IPAddress}}{{end}}' container
```

## Hint

To find out what data can be printed, show all content as json:

```console
$ docker container ls --format='{{json .}}'
```
@@ -1,9 +1,9 @@
---
description: Description of labels, which are used to manage metadata on Docker objects.
description: Learn about labels, a tool to manage metadata on Docker objects.
keywords: Usage, user guide, labels, metadata, docker, documentation, examples, annotating
keywords: labels, metadata, docker, annotations
title: Docker object labels
aliases:
- /engine/userguide/labels-custom-metadata/
---

Labels are a mechanism for applying metadata to Docker objects, including:
@@ -29,7 +29,7 @@ all previous values.

### Key format recommendations

A label _key_ is the left-hand side of the key-value pair. Keys are alphanumeric
A label key is the left-hand side of the key-value pair. Keys are alphanumeric
strings which may contain periods (`.`) and hyphens (`-`). Most Docker users use
images created by other organizations, and the following guidelines help to
prevent inadvertent duplication of labels across objects, especially if you plan
@@ -38,20 +38,20 @@ to use labels as a mechanism for automation.
- Authors of third-party tools should prefix each label key with the
  reverse DNS notation of a domain they own, such as `com.example.some-label`.

- Do not use a domain in your label key without the domain owner's permission.
- Don't use a domain in your label key without the domain owner's permission.

- The `com.docker.*`, `io.docker.*`, and `org.dockerproject.*` namespaces are
  reserved by Docker for internal use.

- Label keys should begin and end with a lower-case letter and should only
  contain lower-case alphanumeric characters, the period character (`.`), and
  the hyphen character (`-`). Consecutive periods or hyphens are not allowed.
  the hyphen character (`-`). Consecutive periods or hyphens aren't allowed.

- The period character (`.`) separates namespace "fields". Label keys without
  namespaces are reserved for CLI use, allowing users of the CLI to interactively
  label Docker objects using shorter typing-friendly strings.

These guidelines are not currently enforced and additional guidelines may apply
These guidelines aren't currently enforced and additional guidelines may apply
to specific use cases.
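A hedged sketch of those guidelines as a check (`valid_label_key` is a hypothetical helper; this is stricter than Docker itself, which doesn't enforce the recommendations):

```shell
# Accept keys that start and end with a lower-case letter, use only
# lower-case alphanumerics, ".", and "-" in between, and contain no
# consecutive periods or hyphens.
valid_label_key() {
  printf '%s' "$1" | grep -Eq '^[a-z]([a-z0-9.-]*[a-z])?$' &&
    ! printf '%s' "$1" | grep -Eq '\.\.|--'
}

valid_label_key "com.example.some-label" && echo "ok"
valid_label_key "com..example" || echo "rejected: consecutive periods"
valid_label_key "Com.Example" || echo "rejected: upper-case letters"
```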
### Value guidelines

@@ -62,7 +62,7 @@ that the value be serialized to a string first, using a mechanism specific to
the type of structure. For instance, to serialize JSON into a string, you might
use the `JSON.stringify()` JavaScript method.

Since Docker does not deserialize the value, you cannot treat a JSON or XML
Since Docker doesn't deserialize the value, you can't treat a JSON or XML
document as a nested structure when querying or filtering by label value unless
you build this functionality into third-party tooling.

@@ -75,9 +75,10 @@ Docker deployments.
Labels on images, containers, local daemons, volumes, and networks are static for
the lifetime of the object. To change these labels you must recreate the object.
Labels on swarm nodes and services can be updated dynamically.
Labels on Swarm nodes and services can be updated dynamically.

- Images and containers

  - [Adding labels to images](../engine/reference/builder.md#label)
  - [Overriding a container's labels at runtime](../engine/reference/commandline/run.md#label)
  - [Inspecting labels on images or containers](../engine/reference/commandline/inspect.md)
@@ -85,26 +86,30 @@ Labels on swarm nodes and services can be updated dynamically.
  - [Filtering containers by label](../engine/reference/commandline/ps.md#filter)

- Local Docker daemons

  - [Adding labels to a Docker daemon at runtime](../engine/reference/commandline/dockerd.md)
  - [Inspecting a Docker daemon's labels](../engine/reference/commandline/info.md)

- Volumes

  - [Adding labels to volumes](../engine/reference/commandline/volume_create.md)
  - [Inspecting a volume's labels](../engine/reference/commandline/volume_inspect.md)
  - [Filtering volumes by label](../engine/reference/commandline/volume_ls.md#filter)

- Networks

  - [Adding labels to a network](../engine/reference/commandline/network_create.md)
  - [Inspecting a network's labels](../engine/reference/commandline/network_inspect.md)
  - [Filtering networks by label](../engine/reference/commandline/network_ls.md#filter)

- Swarm nodes

  - [Adding or updating a swarm node's labels](../engine/reference/commandline/node_update.md#label-add)
  - [Adding or updating a Swarm node's labels](../engine/reference/commandline/node_update.md#label-add)
  - [Inspecting a swarm node's labels](../engine/reference/commandline/node_inspect.md)
  - [Inspecting a Swarm node's labels](../engine/reference/commandline/node_inspect.md)
  - [Filtering swarm nodes by label](../engine/reference/commandline/node_ls.md#filter)
  - [Filtering Swarm nodes by label](../engine/reference/commandline/node_ls.md#filter)

- Swarm services

  - [Adding labels when creating a swarm service](../engine/reference/commandline/service_create.md#label)
  - [Adding labels when creating a Swarm service](../engine/reference/commandline/service_create.md#label)
  - [Updating a swarm service's labels](../engine/reference/commandline/service_update.md)
  - [Updating a Swarm service's labels](../engine/reference/commandline/service_update.md)
  - [Inspecting a swarm service's labels](../engine/reference/commandline/service_inspect.md)
  - [Inspecting a Swarm service's labels](../engine/reference/commandline/service_inspect.md)
  - [Filtering swarm services by label](../engine/reference/commandline/service_ls.md#filter)
  - [Filtering Swarm services by label](../engine/reference/commandline/service_ls.md#filter)
@@ -1,5 +1,5 @@
---
description: Pruning unused objects
description: Free up disk space by removing unused resources with the prune command
keywords: pruning, prune, images, volumes, containers, networks, disk, administration,
  garbage collection
title: Prune unused Docker objects

@@ -9,7 +9,7 @@ aliases:

Docker takes a conservative approach to cleaning up unused objects (often
referred to as "garbage collection"), such as images, containers, volumes, and
networks: these objects are generally not removed unless you explicitly ask
networks. These objects are generally not removed unless you explicitly ask
Docker to do so. This can cause Docker to use extra disk space. For each type of
object, Docker provides a `prune` command. In addition, you can use `docker
system prune` to clean up multiple types of objects at once. This topic shows
@@ -19,7 +19,7 @@ how to use these `prune` commands.

The `docker image prune` command allows you to clean up unused images. By
default, `docker image prune` only cleans up _dangling_ images. A dangling image
is one that is not tagged and is not referenced by any container. To remove
is one that isn't tagged, and isn't referenced by any container. To remove
dangling images:

```console

@@ -29,7 +29,7 @@ WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] y
```

To remove all images which are not used by existing containers, use the `-a`
To remove all images which aren't used by existing containers, use the `-a`
flag:

```console
@@ -56,7 +56,7 @@ for more examples.

## Prune containers

When you stop a container, it is not automatically removed unless you started it
When you stop a container, it isn't automatically removed unless you started it
with the `--rm` flag. To see all containers on the Docker host, including
stopped containers, use `docker ps -a`. You may be surprised how many containers
exist, especially on a development system! A stopped container's writable layers

@@ -70,7 +70,7 @@ WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
```

By default, you are prompted to continue. To bypass the prompt, use the `-f` or
By default, you're prompted to continue. To bypass the prompt, use the `-f` or
`--force` flag.

By default, all stopped containers are removed. You can limit the scope using
@@ -103,7 +103,7 @@ By default, you are prompted to continue. To bypass the prompt, use the `-f` or

By default, all unused volumes are removed. You can limit the scope using
the `--filter` flag. For instance, the following command only removes
volumes which are not labelled with the `keep` label:
volumes which aren't labelled with the `keep` label:

```console
$ docker volume prune --filter "label!=keep"

@@ -127,7 +127,7 @@ WARNING! This will remove all networks not used by at least one container.
Are you sure you want to continue? [y/N] y
```

By default, you are prompted to continue. To bypass the prompt, use the `-f` or
By default, you're prompted to continue. To bypass the prompt, use the `-f` or
`--force` flag.

By default, all unused networks are removed. You can limit the scope using
@@ -145,7 +145,7 @@ for more examples.
## Prune everything

The `docker system prune` command is a shortcut that prunes images, containers,
and networks. Volumes are not pruned by default, and you must specify the
and networks. Volumes aren't pruned by default, and you must specify the
`--volumes` flag for `docker system prune` to prune volumes.

```console

@@ -173,12 +173,12 @@ WARNING! This will remove:
Are you sure you want to continue? [y/N] y
```

By default, you are prompted to continue. To bypass the prompt, use the `-f` or
By default, you're prompted to continue. To bypass the prompt, use the `-f` or
`--force` flag.

By default, all unused containers, networks, images (both dangling and unreferenced)
are removed. You can limit the scope using the
`--filter` flag. For instance, the following command removes items older than 24 hours:
By default, all unused containers, networks, and images are removed. You can
limit the scope using the `--filter` flag. For instance, the following command
removes items older than 24 hours:

```console
$ docker system prune --filter "until=24h"
@@ -1652,7 +1652,7 @@ Manuals:
- path: /config/containers/runmetrics/
  title: Collect runtime metrics
- path: /config/containers/multi-service_container/
  title: Run multiple services in a container
  title: Run multiple processes in a container
- path: /config/daemon/prometheus/
  title: Collect metrics with Prometheus
- sectiontitle: Daemon configuration

@@ -1662,7 +1662,7 @@ Manuals:
- path: /config/daemon/systemd/
  title: Configure with systemd
- path: /config/containers/live-restore/
  title: Keep containers alive during daemon downtime
  title: Live restore
- path: /config/daemon/troubleshoot/
  title: Troubleshoot
- path: /config/daemon/remote-access/