vllm/csrc/attention
Latest commit: e82ee40de3 by DefTruth, 2025-04-16 03:31:39 -07:00: [Bugfix][Kernel] fix potential cuda graph broken for merge_attn_states kernel (#16693)
| File | Last commit | Date |
| --- | --- | --- |
| attention_dtypes.h | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| attention_generic.cuh | [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722) | 2024-05-22 07:18:41 +00:00 |
| attention_kernels.cuh | [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906) | 2025-01-23 18:04:03 +00:00 |
| attention_utils.cuh | [AMD][CI/Build] Disambiguation of the function call for ROCm 6.2 headers compatibility (#7477) | 2024-08-21 16:47:36 -07:00 |
| dtype_bfloat16.cuh | [CI/Build] Suppress divide-by-zero and missing return statement warnings (#7001) | 2024-08-05 16:00:01 -04:00 |
| dtype_float16.cuh | [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722) | 2024-05-22 07:18:41 +00:00 |
| dtype_float32.cuh | [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722) | 2024-05-22 07:18:41 +00:00 |
| dtype_fp8.cuh | [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722) | 2024-05-22 07:18:41 +00:00 |
| merge_attn_states.cu | [Bugfix][Kernel] fix potential cuda graph broken for merge_attn_states kernel (#16693) | 2025-04-16 03:31:39 -07:00 |
| paged_attention_v1.cu | [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906) | 2025-01-23 18:04:03 +00:00 |
| paged_attention_v2.cu | [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906) | 2025-01-23 18:04:03 +00:00 |