---
title: Quantization
---
[](){ #quantization-index }
Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices.
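The trade-off can be sketched with a minimal symmetric int8 scheme. This is an illustration only, not vLLM's implementation; `quantize_int8` and `dequantize` are hypothetical helpers:

```python
# Hypothetical sketch of symmetric int8 weight quantization: trade
# precision for a 4x smaller memory footprint (float32 -> int8).
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights onto int8 using a single per-tensor scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # memory shrinks 4x (4 bytes -> 1 byte per weight)
print(float(np.abs(w - dequantize(q, scale)).max()) < scale)  # error stays below one quantization step
```

The lost precision is bounded by the quantization step size `scale`; schemes like AWQ and GPTQ refine this basic idea with per-group scales and calibration data.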
Contents: