This PR addresses https://github.com/vllm-project/llm-compressor/issues/2160 by introducing token-level masking infrastructure for calibration, enabling modifiers to focus optimization on specific tokens (e.g., assistant responses only) while ignoring others during calibration.

### Motivation

When quantizing instruction-tuned models, calibration data typically contains both user prompts and assistant responses. However, quantization should primarily preserve the quality of model outputs (assistant responses), not the processing of inputs. Token masking allows modifiers to compute loss only on the tokens that matter, improving quantization quality for instruction-tuned models.

### Usage

Users provide a `loss_mask` field in their dataset (1 for tokens to include, 0 for tokens to exclude) and enable masking with `use_loss_mask=True`:

```python
oneshot(
    model=model,
    dataset=ds,  # dataset with "loss_mask" field
    recipe=recipe,
    use_loss_mask=True,
)
```

### Results

Model: meta-llama/Meta-Llama-3-8B-Instruct
Benchmark: gsm8k
Scheme: AWQ + int4a16 + gs128 + asymmetric quantization

```
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.7202|± |0.0124|
| | |strict-match | 5|exact_match|↑ |0.7202|± |0.0124|
```

Same setup with token masking (see `examples/awq/llama_example_with_masking.py` for details):

```
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.7255|± |0.0123|
| | |strict-match | 5|exact_match|↑ |0.7263|± |0.0123|
```

### Implementation

The implementation adds core infrastructure that can be leveraged by any modifier:

- State extension: `loss_masks` and `current_batch_idx` fields allow pipelines to pass mask data to modifiers during calibration
- Pipeline integration: both the basic and sequential pipelines collect loss masks from batches and track the current batch index
- AWQ integration: the AWQ modifier has been updated to use masks in activation accumulation and loss calculation, serving as a reference implementation

### File Changes

- `src/llmcompressor/core/state.py` extends the `State` class with two new fields for token masking:
  - `loss_masks`: stores the list of loss mask tensors for each batch
  - `current_batch_idx`: tracks which batch is being processed, allowing modifiers to retrieve the correct mask
- `src/llmcompressor/args/dataset_arguments.py` adds the `use_loss_mask` argument to enable token masking during calibration.
- Pipeline updates to collect and provide loss masks to modifiers:
  - `src/llmcompressor/pipelines/basic/pipeline.py` collects loss masks from each batch and updates `current_batch_idx` before each forward pass.
  - `src/llmcompressor/pipelines/sequential/pipeline.py` populates loss masks from cached activations and tracks the batch index during subgraph calibration.
- `src/llmcompressor/modifiers/awq/base.py` integrates token masking into AWQ as a reference implementation:
  - the activation accumulation hook filters activations using the loss mask
  - the loss calculation applies the mask to compute MSE only on relevant tokens
- `src/llmcompressor/modifiers/utils/pytorch_helpers.py` adds a `get_loss_mask_from_batch` helper function to extract loss masks from batch dictionaries.
- `src/llmcompressor/datasets/utils.py` updates the data collator to handle the `loss_mask` field during truncation.
- `examples/awq/llama_example_with_masking.py` demonstrates token masking with Llama-3-8B-Instruct, showing how to create masks that target assistant responses in chat data.

### Pending

- ~~Currently token masking doesn't support the `up_proj <-> down_proj` mapping in MoE models, as masking dispatch is required. I will add an error message when this case happens.~~ Done
- ~~There are several AWQ-related PRs pending review; the resulting merge conflicts need to be resolved.~~ Done
- ~~@HDCharles recommended moving the masking logic into the `_run_sample` function in `awq/base.py` instead of the `_compute_loss` function. I will address this later.~~ I think it doesn't matter for now.

---------

Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: HDCharles <39544797+HDCharles@users.noreply.github.com>
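The masking contract described in the PR above (1 = include a token in the calibration loss, 0 = ignore it) can be illustrated in plain Python. Note that `build_loss_mask` and `masked_mse` below are hypothetical sketches of the idea, not the PR's actual helpers — the real code (`get_loss_mask_from_batch`, the loss in `awq/base.py`) operates on torch tensors.

```python
# Hypothetical sketch of the loss-mask contract: 1 = include the token in
# the calibration loss, 0 = exclude it. Plain lists keep it self-contained;
# the real implementation works on batched torch tensors.

def build_loss_mask(prompt_len, total_len):
    """Mask out the user-prompt tokens; keep the assistant-response tokens."""
    assert 0 <= prompt_len <= total_len
    return [0] * prompt_len + [1] * (total_len - prompt_len)

def masked_mse(original, quantized, mask):
    """MSE between activations, counted only at positions where mask == 1."""
    num = sum((o - q) ** 2 for o, q, m in zip(original, quantized, mask) if m)
    return num / max(sum(mask), 1)

# A 6-token sequence: the first 4 tokens are the prompt, the last 2 the response.
mask = build_loss_mask(prompt_len=4, total_len=6)
print(mask)  # [0, 0, 0, 0, 1, 1]

# Large errors on prompt tokens are ignored; only response tokens contribute.
orig = [1.0, 1.0, 1.0, 1.0, 2.0, 4.0]
quant = [9.0, 9.0, 9.0, 9.0, 2.5, 3.5]
print(masked_mse(orig, quant, mask))  # 0.25
```

This is the intuition behind the reported gsm8k gain: with the mask applied, the quantization error that AWQ minimizes is measured only on assistant-response tokens rather than diluted across prompt tokens.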
# README.md
`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:

- Comprehensive set of quantization algorithms for weight-only and activation quantization
- Seamless integration with Hugging Face models and repositories
- `safetensors`-based file format compatible with `vllm`
- Large model support via `accelerate`
✨ Read the announcement blog here! ✨
💬 Join us on the vLLM Community Slack and share your questions, thoughts, or ideas in:
- `#sig-quantization`
- `#llm-compressor`
## 🚀 What's New!
Big updates have landed in LLM Compressor! To get a more in-depth look, check out the LLM Compressor overview.
Some of the exciting new features include:
- **Batched Calibration Support**: LLM Compressor now supports calibration with batch sizes > 1. A new `batch_size` argument has been added to the `dataset_arguments`, providing an option to improve quantization speed. The default `batch_size` is currently set to 1.
- **New Model-Free PTQ Pathway**: A new model-free PTQ pathway, `model_free_ptq`, has been added to LLM Compressor. This pathway allows you to quantize your model without requiring a Hugging Face model definition and is especially useful in cases where `oneshot` may fail. It currently supports data-free pathways only (i.e., FP8 quantization) and was leveraged to quantize the Mistral Large 3 model. Additional examples have been added illustrating how LLM Compressor can be used for Kimi K2.
- **Extended KV Cache and Attention Quantization Support**: LLM Compressor now supports attention quantization. KV cache quantization, which previously only supported per-tensor scales, has been extended to support any quantization scheme, including a new `per-head` scheme. Support for these checkpoints is ongoing in vLLM, and scripts to get started have been added to the experimental folder.
- **Generalized AWQ Support**: The `AWQModifier` has been updated to support quantization schemes beyond W4A16 (e.g., W4AFp8). In particular, AWQ no longer requires the quantization config to have the same settings for `group_size`, `symmetric`, and `num_bits` for each config group.
- **AutoRound Quantization Support**: Added `AutoRoundModifier` for quantization using AutoRound, an advanced post-training algorithm that optimizes rounding and clipping ranges through sign-gradient descent. This approach combines the efficiency of post-training quantization with the adaptability of parameter tuning, delivering robust compression for large language models while maintaining strong performance.
- **Experimental MXFP4 Support**: Models can now be quantized using an `MXFP4` preset scheme. Examples can be found under the experimental folder. This pathway is still experimental, as support and validation with vLLM are still a WIP.
- **R3 Transform Support**: LLM Compressor now supports applying transforms to attention in the style of SpinQuant's R3 rotation. Note: this feature is not yet supported in vLLM. An example applying R3 can be found in the experimental folder.
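To make the batched-calibration option above concrete, the sketch below shows the grouping semantics that `batch_size > 1` implies: calibration samples are processed in groups rather than one at a time, reducing the number of forward passes. `batch_samples` is a hypothetical illustration, not LLM Compressor's actual collation code.

```python
def batch_samples(samples, batch_size=1):
    """Group calibration samples into batches; the last batch may be short."""
    assert batch_size >= 1
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

# Ten calibration samples with batch_size=4 -> three forward passes
# instead of ten, which is where the quantization speedup comes from.
print(batch_samples(list(range(10)), batch_size=4))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```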
## Supported Formats
- Activation Quantization: W8A8 (int8 and fp8)
- Mixed Precision: W4A16, W8A16, NVFP4 (W4A4 and W4A16 support)
- 2:4 Semi-structured and Unstructured Sparsity
## Supported Algorithms
- Simple PTQ
- GPTQ
- AWQ
- SmoothQuant
- SparseGPT
- AutoRound
## When to Use Which Optimization
Please refer to compression_schemes.md for detailed information about available optimization schemes and their use cases.
## Installation

```bash
pip install llmcompressor
```
## Get Started
### End-to-End Examples
Applying quantization with `llmcompressor`:

- Activation quantization to `int8`
- Activation quantization to `fp8`
- Activation quantization to `fp4`
- Activation quantization to `fp4` using AutoRound
- Activation quantization to `fp8` and weight quantization to `int4`
- Weight-only quantization to `fp4` (NVFP4 format)
- Weight-only quantization to `fp4` (MXFP4 format)
- Weight-only quantization to `int4` using GPTQ
- Weight-only quantization to `int4` using AWQ
- Weight-only quantization to `int4` using AutoRound
- KV cache quantization to `fp8`
- Attention quantization to `fp8` (experimental)
- Attention quantization to `nvfp4` with SpinQuant (experimental)
- Quantizing MoE LLMs
- Quantizing Vision-Language Models
- Quantizing Audio-Language Models
- Quantizing Models Non-uniformly
### User Guides

Deep dives into advanced usage of `llmcompressor`:
## Quick Tour
Let's quantize Qwen3-30B-A3B with FP8 weights and activations using the Round-to-Nearest algorithm.
Note that the model can be swapped for a local or remote HF-compatible checkpoint and the recipe may be changed to target different quantization algorithms or formats.
### Apply Quantization
Quantization is applied by selecting an algorithm and calling the oneshot API.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation

MODEL_ID = "Qwen/Qwen3-30B-A3B"

# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Configure the quantization algorithm and scheme.
# In this case, we:
#   * quantize the weights to FP8 using RTN with block_size 128
#   * quantize the activations dynamically to FP8 during inference
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_BLOCK",
    ignore=["lm_head", "re:.*mlp.gate$"],
)

# Apply quantization.
oneshot(model=model, recipe=recipe)

# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to(
    model.device
)
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")

# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-BLOCK"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```
### Inference with vLLM

The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:
Install:

```bash
pip install vllm
```

Run:

```python
from vllm import LLM

model = LLM("Qwen/Qwen3-30B-A3B-FP8-BLOCK")
output = model.generate("My name is")
```
## Questions / Contribution
- If you have any questions or requests, open an issue and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.
## Citation
If you find LLM Compressor useful in your research or projects, please consider citing it:
```bibtex
@software{llmcompressor2024,
  title={{LLM Compressor}},
  author={Red Hat AI and vLLM Project},
  year={2024},
  month={8},
  url={https://github.com/vllm-project/llm-compressor},
}
```