vllm/docs/source/features/quantization/index.md

(quantization-index)=

# Quantization

Quantization trades off model precision for a smaller memory footprint, allowing large models to be run on a wider range of devices.
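For example, vLLM can load pre-quantized checkpoints directly by passing the `quantization` argument to the `LLM` constructor. A minimal sketch, assuming an AWQ-quantized checkpoint (the model name below is illustrative):

```python
from vllm import LLM

# Load an AWQ-quantized checkpoint; the model name is an example.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-AWQ", quantization="awq")

# Generate a completion to verify the quantized model runs.
outputs = llm.generate("What is quantization?")
print(outputs[0].outputs[0].text)
```

The pages listed below cover the supported quantization methods and hardware in detail.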

:::{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
bitblas
gguf
gptqmodel
int4
int8
fp8
modelopt
quark
quantized_kvcache
torchao
:::