vllm/requirements

Latest commit: d1b689c445 by Nicolò Lucchesi (Signed-off-by: NickLucche <nlucches@redhat.com>)
[Bugfix] Fix flaky `test_streaming_response` test (#20363)
2025-07-03 14:46:24 +00:00
| File | Last commit | Date |
|------|-------------|------|
| build.txt | [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (#19019) | 2025-06-03 00:16:17 -07:00 |
| common.txt | [Bugfix] Fix flaky `test_streaming_response` test (#20363) | 2025-07-03 14:46:24 +00:00 |
| cpu-build.txt | [CPU] Fix torch version in x86 CPU backend (#19258) | 2025-06-26 03:34:47 -07:00 |
| cpu.txt | [CPU] Fix torch version in x86 CPU backend (#19258) | 2025-06-26 03:34:47 -07:00 |
| cuda.txt | Update PyTorch to 2.7.0 (#16859) | 2025-04-29 19:08:04 -07:00 |
| dev.txt | Move requirements into their own directory (#12547) | 2025-03-08 16:44:35 +00:00 |
| docs.txt | [CI/Build] Remove imports of built-in `re` (#18750) | 2025-05-27 09:19:18 +00:00 |
| hpu.txt | [Build] Require setuptools >= 77.0.3 for PEP 639 (#17389) | 2025-04-30 23:25:36 -07:00 |
| lint.txt | Move requirements into their own directory (#12547) | 2025-03-08 16:44:35 +00:00 |
| neuron.txt | Add NeuronxDistributedInference support, Speculative Decoding, Dynamic on-device sampling (#16357) | 2025-05-07 00:07:30 -07:00 |
| nightly_torch_test.txt | [Quantization] Bump to use latest bitsandbytes (#20424) | 2025-07-03 21:58:46 +08:00 |
| rocm-build.txt | [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (#19019) | 2025-06-03 00:16:17 -07:00 |
| rocm-test.txt | Adding "Basic Models Test" and "Multi-Modal Models Test (Extended) 3" in AMD Pipeline (#18106) | 2025-05-15 08:49:23 -07:00 |
| rocm.txt | [AMD] Update compatible packaging version (#19309) | 2025-06-07 20:55:09 +08:00 |
| test.in | [Quantization] Bump to use latest bitsandbytes (#20424) | 2025-07-03 21:58:46 +08:00 |
| test.txt | [Quantization] Bump to use latest bitsandbytes (#20424) | 2025-07-03 21:58:46 +08:00 |
| tpu.txt | [TPU] Update torch-xla version to include paged attention tuned block change (#19813) | 2025-06-18 22:41:13 +00:00 |
| xpu.txt | [Hardware][Intel GPU] Add v1 Intel GPU support with Flash attention backend. (#19560) | 2025-06-26 09:27:18 -07:00 |