libcontainer v0.0.4 introduces setting `/proc/self/oom_score_adj` to
better tune OOM-killing preferences for container processes. This patch
integrates libcontainer's OomScoreAdj config option and adjusts the CLI
to expose the new option.
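For example, a usage sketch (assuming `--oom-score-adj` is the name of
the new `docker run` flag):

    docker run --oom-score-adj=500 busybox cat /proc/self/oom_score_adj   # expect 500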
Signed-off-by: Antonio Murdaca <amurdaca@redhat.com>
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Cgroup integration tests should cover:
- docker can run successfully with these options
- these cgroup options are set in HostConfig as expected
- these cgroup options are really written to the cgroup files as
  expected (see the sketch below)
- other cases (wrong values, combinations, etc.)
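A hypothetical sketch of the inspection checks, assuming the default
cgroup v1 layout under /sys/fs/cgroup:

    cid=$(docker run -d --cpu-shares=512 busybox top)
    docker inspect --format '{{.HostConfig.CpuShares}}' "$cid"    # expect 512
    cat "/sys/fs/cgroup/cpu/docker/$cid/cpu.shares"               # expect 512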
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
bump runc/libcontainer to v0.0.5
bump runc deps to latest
bump docker/libnetwork to 04cc1fa0a89f8c407b7be8cab883d4b17531ea7d
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
When tools like Kubernetes and Cockpit talk to the docker daemon
frequently, we see a large number of log messages that are really
debug information.
For example, `docker info` adds the following line to journald:
Nov 26 07:09:23 dhcp-10-19-62-196.boston.devel.redhat.com docker[32686]: time="2015-11-26T07:09:23.124503455-05:00" level=info msg="GET /v1.22/info"
We think this should be logged at Debug level, not Info level.
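A minimal sketch of the kind of change, assuming a logrus-based
request-logging middleware (the names here are illustrative, not the
daemon's actual identifiers):

    package middleware

    import (
        "net/http"

        "github.com/Sirupsen/logrus"
    )

    // logRequests logs each API request at Debug level instead of Info,
    // so chatty clients no longer flood journald unless debugging is on.
    func logRequests(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            logrus.Debugf("%s %s", r.Method, r.RequestURI) // was logrus.Infof
            next.ServeHTTP(w, r)
        })
    }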
Signed-off-by: Dan Walsh <dwalsh@redhat.com>
Moved a defer up to a better spot.
Fixed TestUntarPathWithInvalidDest to actually fail for the right reason
Closes #18170
Signed-off-by: Doug Davis <dug@us.ibm.com>
Currently, the resources associated with the io.Reader returned by
TarStream are only freed when it is read until EOF. This means that
partial uploads or exports (for example, in the case of a full disk or
severed connection) can leak a goroutine and open file. This commit
changes TarStream to return an io.ReadCloser. Resources are freed when
Close is called.
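A minimal sketch of the pattern, assuming a wrapper along these lines
(illustrative names, not necessarily the actual implementation):

    package tarstream

    import "io"

    // readCloserWrapper pairs a Reader with a cleanup function so a
    // caller can release the underlying goroutine and open file
    // explicitly, instead of having to drain the stream to EOF.
    type readCloserWrapper struct {
        io.Reader
        closer func() error
    }

    func (r *readCloserWrapper) Close() error {
        return r.closer()
    }

    func newReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser {
        return &readCloserWrapper{Reader: r, closer: closer}
    }

Callers that abort an upload or export can then defer Close and be sure
the resources are freed.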
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
PR https://github.com/docker/docker/pull/17986
inadvertently included changes to some vendored files.
This reverts those changes.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The Docker daemon uses a KV store as the host-discovery backend.
The discovery module tracks the liveness of a node through a simple
keepalive mechanism: every node performs a heartbeat by periodically
registering itself with the discovery module (via a KV-store Put
operation), and for every Put operation the discovery module on all
other nodes receives a Watch notification. That keeps the node alive.
Any node that fails to re-register itself within the TTL timer is
considered dead and removed from the discovery database.
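A hypothetical sketch of the keepalive loop; the Store interface below
is illustrative (real backends such as consul, etcd, and zookeeper
support TTLs on written keys):

    package discovery

    import "time"

    // Store is an illustrative KV-store interface with TTL-based Puts.
    type Store interface {
        Put(key string, value []byte, ttl time.Duration) error
    }

    // register re-registers this node every heartbeat interval. Each Put
    // refreshes the key's TTL and triggers a Watch notification on the
    // other nodes; if the node stops writing, the key expires after ttl
    // and the node is treated as dead.
    func register(store Store, node string, heartbeat, ttl time.Duration, stop <-chan struct{}) {
        ticker := time.NewTicker(heartbeat)
        defer ticker.Stop()
        for {
            _ = store.Put("nodes/"+node, []byte("alive"), ttl) // best effort; retried next tick
            select {
            case <-ticker.C:
            case <-stop:
                return
            }
        }
    }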
The default timers (heartbeat = 20 seconds, ttl = 60 seconds) work
fine for small clusters. But for large clusters these defaults are
extremely aggressive: they cause high CPU usage, most of the
processing is spent managing node discovery, and normal daemon
operation suffers.
Hence we need a way to make the discovery TTL and heartbeat
configurable. As the cluster size grows, the user can adjust these
timers to make sure the daemon scales.
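For example, a hypothetical daemon invocation (the discovery.heartbeat
and discovery.ttl option names follow this patch; the exact flag
spelling is an assumption):

    docker daemon \
        --cluster-store=consul://localhost:8500 \
        --cluster-advertise=eth0:2376 \
        --cluster-store-opt discovery.heartbeat=60 \
        --cluster-store-opt discovery.ttl=180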
Signed-off-by: Madhu Venugopal <madhu@docker.com>