vllm/csrc/quantization

Latest commit: c0c2335ce0 by Robert Shaw
Integrate Marlin Kernels for Int4 GPTQ inference (#2497)
Co-authored-by: Robert Shaw <114415538+rib-2@users.noreply.github.com>
Co-authored-by: alexm <alexm@neuralmagic.com>
2024-03-01 12:47:51 -08:00
awq               Refactor 2 awq gemm kernels into m16nXk32 (#2723)                2024-02-12 11:02:17 -08:00
fp8_e5m2_kvcache  Fix compile error when using rocm (#2648)                        2024-02-01 09:35:09 -08:00
gptq              Add Support for 2/3/8-bit GPTQ Quantization Models (#2330)       2024-02-28 21:52:23 -08:00
marlin            Integrate Marlin Kernels for Int4 GPTQ inference (#2497)         2024-03-01 12:47:51 -08:00
squeezellm        Enable CUDA graph for GPTQ & SqueezeLLM (#2318)                  2024-01-03 09:52:29 -08:00