Fix typos in the Resource constraints manual (#22594)

Andrey Sobolev 2025-05-06 16:13:34 +03:00 committed by GitHub
parent aec91a451b
commit fe9389f4be
1 changed file with 6 additions and 6 deletions


@@ -69,8 +69,8 @@ You can mitigate the risk of system instability due to OOME by:
 Docker can enforce hard or soft memory limits.
-- Hard limits lets the container use no more than a fixed amount of memory.
-- Soft limits lets the container use as much memory as it needs unless certain
+- Hard limits let the container use no more than a fixed amount of memory.
+- Soft limits let the container use as much memory as it needs unless certain
 conditions are met, such as when the kernel detects low memory or contention on
 the host machine.
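The hard and soft limits above correspond to `docker run` flags. A minimal sketch, assuming an `nginx` image and illustrative 512m/256m values (neither is part of this change):

# Hard limit: the container can never use more than 512 MB.
# Soft limit: memory above 256 MB is reclaimed when the host is under contention.
$ docker run -d -m 512m --memory-reservation=256m nginx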
@@ -162,7 +162,7 @@ a container. Consider the following scenarios:
 an OOM error. If the kernel memory limit is higher than the user memory
 limit, the kernel limit doesn't cause the container to experience an OOM.
-When you enable kernel memory limits, the host machine tracks "high water mark"
+When you enable kernel memory limits, the host machine tracks the "high water mark"
 statistics on a per-process basis, so you can track which processes (in this
 case, containers) are using excess memory. This can be seen per process by
 viewing `/proc/<PID>/status` on the host machine.
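The "high water mark" mentioned here surfaces as the `VmHWM` field (peak resident set size) in the process status file. A minimal check on the host, where `<PID>` is a containerized process ID:

# Peak physical memory used by the process since it started.
$ grep VmHWM /proc/<PID>/status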
@@ -186,7 +186,7 @@ the container's cgroup on the host machine.
 | :--------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `--cpus=<value>` | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set `--cpus="1.5"`, the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting `--cpu-period="100000"` and `--cpu-quota="150000"`. |
 | `--cpu-period=<value>` | Specify the CPU CFS scheduler period, which is used alongside `--cpu-quota`. Defaults to 100000 microseconds (100 milliseconds). Most users don't change this from the default. For most use-cases, `--cpus` is a more convenient alternative. |
-| `--cpu-quota=<value>` | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before throttled. As such acting as the effective ceiling. For most use-cases, `--cpus` is a more convenient alternative. |
+| `--cpu-quota=<value>` | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before being throttled. As such acting as the effective ceiling. For most use-cases, `--cpus` is a more convenient alternative. |
 | `--cpuset-cpus` | Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be `0-3` (to use the first, second, third, and fourth CPU) or `1,3` (to use the second and fourth CPU). |
 | `--cpu-shares` | Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. `--cpu-shares` doesn't prevent containers from being scheduled in Swarm mode. It prioritizes container CPU resources for the available CPU cycles. It doesn't guarantee or reserve any specific CPU access. |
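A hedged illustration of the flags in this table, using the table's own values (the `ubuntu` image is an assumption):

# Two equivalent ways to cap a container at 1.5 CPUs:
$ docker run -it --cpus="1.5" ubuntu /bin/bash
$ docker run -it --cpu-period="100000" --cpu-quota="150000" ubuntu /bin/bash

# Pin to the second and fourth CPU, with a reduced scheduling weight:
$ docker run -it --cpuset-cpus="1,3" --cpu-shares="512" ubuntu /bin/bash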
@@ -234,7 +234,7 @@ for real-time tasks per runtime period. For instance, with the default period of
 containers using the real-time scheduler can run for 950000 microseconds for every
 1000000-microsecond period, leaving at least 50000 microseconds available for
 non-real-time tasks. To make this configuration permanent on systems which use
-`systemd`, create a systemd unit file for the `docker` service. For an example,
+`systemd`, create a systemd unit file for the `docker` service. For example,
 see the instruction on how to configure the daemon to use a proxy with a
 [systemd unit file](../daemon/proxy.md#systemd-unit-file).
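Under the 950000/1000000 figures quoted above, a permanent configuration might look like the following systemd drop-in (a sketch; the drop-in path and `dockerd` binary location are assumptions for a typical Linux install):

# /etc/systemd/system/docker.service.d/cpu-rt.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --cpu-rt-runtime=950000

# Apply the change:
$ sudo systemctl daemon-reload && sudo systemctl restart docker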
@@ -343,6 +343,6 @@ environment variables. More information on valid variables can be found in the
 [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html)
 documentation. These variables can be set in a Dockerfile.
-You can also use CUDA images which sets these variables automatically. See the
+You can also use CUDA images, which set these variables automatically. See the
 official [CUDA images](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda)
 NGC catalog page.
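As an illustration of setting these variables in a Dockerfile (a sketch; the base image, the hypothetical `my-gpu-image` tag, and the capability list are assumptions, and the official CUDA images set equivalents for you):

# Dockerfile
FROM ubuntu:22.04
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# Build, then run with GPU access to verify:
$ docker build -t my-gpu-image .
$ docker run --rm --gpus all my-gpu-image nvidia-smi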