# Using GPUs inside containers
nerdctl provides docker-compatible NVIDIA GPU support.
## Prerequisites
- NVIDIA Drivers
  - Same requirement as when you use GPUs on Docker. For details, please refer to the doc by NVIDIA.
- `nvidia-container-cli`
  - containerd relies on this CLI for setting up GPUs inside containers. You can install it via the `libnvidia-container` package.
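As a quick sanity check (a minimal sketch, not part of the upstream docs), you can verify that both prerequisites are on the `PATH` before trying to run a GPU container:

```shell
# Check that the binaries needed for GPU support are installed.
# nvidia-smi ships with the NVIDIA driver; nvidia-container-cli comes
# from the libnvidia-container package.
for bin in nvidia-smi nvidia-container-cli; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: NOT found"
  fi
done
```

If either binary is reported as missing, revisit the corresponding prerequisite above before proceeding.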
## Options for `nerdctl run --gpus`
`nerdctl run --gpus` is compatible with `docker run --gpus`.

You can specify the number of GPUs to use via the `--gpus` option. The following example exposes all available GPUs.
```shell
nerdctl run -it --rm --gpus all nvidia/cuda:9.0-base nvidia-smi
```
You can also pass a detailed configuration to the `--gpus` option as a list of key-value pairs. The following options are provided.

- `count`: number of GPUs to use. `all` exposes all available GPUs.
- `device`: IDs of GPUs to use. Either UUIDs or numbers of GPUs can be specified.
- `capabilities`: driver capabilities. If unset, `utility` is used.
The following example exposes a specific GPU to the container.
```shell
nerdctl run -it --rm --gpus capabilities=utility,device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a nvidia/cuda:9.0-base nvidia-smi
```
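As a further sketch (not from the upstream docs), the `count` key can be combined with `capabilities` to limit the container to a fixed number of GPUs without naming specific devices; this assumes the host has at least two GPUs:

```shell
# Expose two GPUs with the default utility capability (enough for nvidia-smi).
nerdctl run -it --rm --gpus capabilities=utility,count=2 nvidia/cuda:9.0-base nvidia-smi
```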
## Fields for `nerdctl compose`
`nerdctl compose` also supports GPUs following the compose-spec.

You can use GPUs on compose when you specify some of the following `capabilities` in `services.demo.deploy.resources.reservations.devices`:

- `gpu`
- `nvidia`
- all allowed capabilities for `nerdctl run --gpus`

Available fields are the same as for `nerdctl run --gpus`.
The following exposes all available GPUs to the container.
```yaml
version: "3.8"
services:
  demo:
    image: nvidia/cuda:9.0-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: ["utility"]
              count: all
```
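A related sketch, assuming nerdctl's compose implementation honors the compose-spec `device_ids` field: pin the service to specific GPUs by ID instead of a count. The UUID below is a placeholder; substitute the IDs reported by `nvidia-smi -L` on your host.

```yaml
version: "3.8"
services:
  demo:
    image: nvidia/cuda:9.0-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            # device_ids and count are mutually exclusive in the compose-spec.
            - capabilities: ["utility"]
              device_ids: ["GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a"]
```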