vllm/csrc/attention
Latest commit: 41aa578428 "[NVIDIA] Add Cutlass MLA backend (#17625)" by Kaixi Hou, 2025-06-03 21:40:26 -07:00
Name                      Last commit date              Last commit message
mla/                      2025-06-03 21:40:26 -07:00    [NVIDIA] Add Cutlass MLA backend (#17625)
attention_dtypes.h        2024-04-03 14:15:55 -07:00    Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290)
attention_generic.cuh     2024-05-22 07:18:41 +00:00    [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722)
attention_kernels.cuh     2025-05-15 02:16:15 -07:00    fix: typos (#18151)
attention_utils.cuh       2024-08-21 16:47:36 -07:00    [AMD][CI/Build] Disambiguation of the function call for ROCm 6.2 headers compatibility (#7477)
dtype_bfloat16.cuh        2024-08-05 16:00:01 -04:00    [CI/Build] Suppress divide-by-zero and missing return statement warnings (#7001)
dtype_float16.cuh         2024-05-22 07:18:41 +00:00    [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722)
dtype_float32.cuh         2024-05-22 07:18:41 +00:00    [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722)
dtype_fp8.cuh             2024-05-22 07:18:41 +00:00    [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722)
merge_attn_states.cu      2025-05-28 08:59:39 +00:00    [BugFix] FA2 MLA Accuracy Issue (#18807)
paged_attention_v1.cu     2025-01-23 18:04:03 +00:00    [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906)
paged_attention_v2.cu     2025-01-23 18:04:03 +00:00    [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906)
vertical_slash_index.cu   2025-05-12 19:52:47 -07:00    Implements dual-chunk-flash-attn backend for dual chunk attention with sparse attention support (#11844)