The idiom currently used for handling key-specific subcompletions
did not work here: after `docker events -f type=network `, completion
of network names was triggered. The expected behaviour is not to
complete anything here.
In order to limit the scope of the corresponding PR, the new idiom is
currently used only for `docker events --filter`.
Signed-off-by: Harald Albers <github@albersweb.de>
The description "set `-1` to disable swap" is wrong. It has already
been fixed for `build`, `create` and `run`; we need to fix `update` as well.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
When authorization errors are returned by the token process, the error
is wrapped in a url.Error.
In order to check the underlying error when deciding whether to retry,
this error must be unwrapped.
Unwrapping the error allows a push that fails due to an unauthorized
response to stop retrying, instead of possibly triggering 429 responses later.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
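A minimal Go sketch of the pattern described above; the errUnauthorized sentinel and shouldRetry helper are illustrative assumptions, not the actual distribution code:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
)

// errUnauthorized stands in for whatever typed error the token handshake
// surfaces; the name is illustrative only.
var errUnauthorized = errors.New("unauthorized: authentication required")

// shouldRetry reports whether a failed push is worth retrying.
func shouldRetry(err error) bool {
	// The HTTP round trip wraps failures in *url.Error, so unwrap it
	// before classifying the failure.
	if urlErr, ok := err.(*url.Error); ok {
		err = urlErr.Err
	}
	// Never retry authorization failures; retrying only risks hitting
	// rate limits (429 responses) later.
	return !errors.Is(err, errUnauthorized)
}

func main() {
	wrapped := &url.Error{Op: "Post", URL: "https://registry.example/v2/token", Err: errUnauthorized}
	fmt.Println(shouldRetry(wrapped)) // false once the auth error is visible
}
```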
Fixes #19506
This fixes the issue of errors on create leaving the tty unable to be
restored to its previous state, caused by a race because the restore
was performed in the hijack goroutine.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
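A rough sketch of the idea, using golang.org/x/term instead of Docker's own terminal package; the attach callback is a stand-in for the hijacked stream:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

// runWithRawTerminal puts stdin into raw mode and, crucially, restores it
// in this function rather than inside the hijack goroutine, so an error
// on create still leaves the tty in its previous state.
func runWithRawTerminal(attach func() error) error {
	fd := int(os.Stdin.Fd())
	if !term.IsTerminal(fd) {
		return attach()
	}
	oldState, err := term.MakeRaw(fd)
	if err != nil {
		return err
	}
	defer term.Restore(fd, oldState) // runs on any return path

	return attach()
}

func main() {
	err := runWithRawTerminal(func() error {
		// Stand-in for the hijacked attach; the real code streams the
		// container's stdio here.
		return fmt.Errorf("create failed")
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
	}
}
```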
The daemon uses two similar filename extensions to identify different
kinds of certificates. ".crt" files are interpreted as CA certificates,
and ".cert" files are interprted as client certificates. If a CA
certificate is accidentally given the extension ".cert", it will lead to
the following error message:
    Missing key ca.key for certificate ca.cert
To make this slightly less confusing, clarify the error message with a
note that CA certificates should use the extension ".crt".
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
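A hedged sketch of the loading rule the message describes; the directory layout and function name are illustrative, not the daemon's actual loader:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// loadCertsDir treats ".crt" files as CA certificates and ".cert" files as
// client certificates that must be accompanied by a matching ".key".
func loadCertsDir(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		switch {
		case strings.HasSuffix(name, ".crt"):
			fmt.Println("loading CA certificate:", name)
		case strings.HasSuffix(name, ".cert"):
			keyName := strings.TrimSuffix(name, ".cert") + ".key"
			if _, err := os.Stat(filepath.Join(dir, keyName)); err != nil {
				// Point at the ".crt" convention so a misnamed CA
				// certificate is easier to diagnose.
				return fmt.Errorf("missing key %s for client certificate %s. CA certificates should use the extension .crt", keyName, name)
			}
			fmt.Println("loading client certificate:", name)
		}
	}
	return nil
}

func main() {
	if err := loadCertsDir("/etc/docker/certs.d/registry.example:5000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```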
This helps ensure that only one caller is trying to initialize a plugin
at a time, while also keeping the global lock free during initialization.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
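A sketch of the locking pattern, assuming a sync.Once per plugin and a registry map guarded by a global mutex; the names are illustrative, not the plugin package's real types:

```go
package main

import (
	"fmt"
	"sync"
)

type plugin struct {
	name     string
	activate sync.Once // guarantees a single initialization per plugin
	initErr  error
}

func (p *plugin) init() error {
	p.activate.Do(func() {
		// Potentially slow handshake with the plugin, performed without
		// holding the global registry lock.
		fmt.Println("initializing plugin", p.name)
	})
	return p.initErr
}

type registry struct {
	mu      sync.Mutex
	plugins map[string]*plugin
}

// get looks up (or creates) the plugin entry under the global lock, then
// releases that lock before initialization so other plugins stay usable.
func (r *registry) get(name string) (*plugin, error) {
	r.mu.Lock()
	p, ok := r.plugins[name]
	if !ok {
		p = &plugin{name: name}
		r.plugins[name] = p
	}
	r.mu.Unlock()

	return p, p.init()
}

func main() {
	r := &registry{plugins: map[string]*plugin{}}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if _, err := r.get("local-volume-driver"); err != nil {
				fmt.Println("init failed:", err)
			}
		}()
	}
	wg.Wait()
}
```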
- Set the daemon log level to what's set in the configuration.
- Enable TLS when TLSVerify is enabled.
Signed-off-by: David Calavera <david.calavera@gmail.com>
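A simplified sketch of both fixes, assuming a logrus-style logger and a stripped-down config struct; the field names mirror the daemon flags but are not its exact types:

```go
package main

import (
	"fmt"

	"github.com/sirupsen/logrus"
)

type config struct {
	LogLevel  string
	TLS       bool
	TLSVerify bool
}

func apply(cfg *config) error {
	// Honour the configured log level instead of silently keeping the default.
	if cfg.LogLevel != "" {
		lvl, err := logrus.ParseLevel(cfg.LogLevel)
		if err != nil {
			return fmt.Errorf("unable to parse logging level: %s", cfg.LogLevel)
		}
		logrus.SetLevel(lvl)
	}
	// TLSVerify implies TLS.
	if cfg.TLSVerify {
		cfg.TLS = true
	}
	return nil
}

func main() {
	cfg := &config{LogLevel: "debug", TLSVerify: true}
	if err := apply(cfg); err != nil {
		logrus.Fatal(err)
	}
	logrus.Debug("TLS enabled: ", cfg.TLS)
}
```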
During daemon startup, all containers are registered before any are
started.
During container registration, the daemon was calling out to initialize volumes.
If the volume uses a plugin that is running in a container, this will
cause the restart of that container to fail since the plugin is not yet
running.
This also slowed down daemon startup, since volume initialization was
happening sequentially, which can be slow (and in this case is flat out
slow, since initialization would fail but still take 8 seconds per volume).
This fix holds off on volume initialization until after containers are
restarted and does the initialization in parallel.
The containers that are restarted will have their volumes initialized
because they are being started. If any of these containers are using a
plugin, they will just keep retrying to reach the plugin (up to the
timeout, which is 8 seconds) until the container with the plugin is up
and running.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
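A minimal sketch of the ordering change, with made-up container and volume types; the restart phase and the plugin retry/timeout are only hinted at in comments:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type container struct{ id string }

// initVolumes stands in for reaching the volume plugin; the real code
// keeps retrying until the plugin answers or an ~8 second timeout hits.
func (c *container) initVolumes() error {
	time.Sleep(100 * time.Millisecond)
	fmt.Println("volumes ready for", c.id)
	return nil
}

// initVolumesInParallel replaces the old sequential loop that ran during
// container registration.
func initVolumesInParallel(containers []*container) {
	var wg sync.WaitGroup
	for _, c := range containers {
		wg.Add(1)
		go func(c *container) {
			defer wg.Done()
			if err := c.initVolumes(); err != nil {
				fmt.Println("volume init failed for", c.id, ":", err)
			}
		}(c)
	}
	wg.Wait()
}

func main() {
	containers := []*container{{id: "web"}, {id: "db"}}
	// The restart phase would run here first, so containers that use a
	// volume plugin running in another container can come up before any
	// blanket volume setup happens.
	initVolumesInParallel(containers)
}
```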