vllm/docs/getting_started/installation
| File | Last commit | Date |
|------|-------------|------|
| cpu | [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (#19019) | 2025-06-03 00:16:17 -07:00 |
| gpu | [doc] fix the incorrect label (#19787) | 2025-06-18 10:30:58 +00:00 |
| .nav.yml | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| README.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| aws_neuron.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| cpu.md | Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (#17930) | 2025-06-10 06:22:05 +00:00 |
| device.template.md | Migrate docs from Sphinx to MkDocs (#18145) | 2025-05-23 02:09:53 -07:00 |
| google_tpu.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| gpu.md | [doc] fix the incorrect label (#19787) | 2025-06-18 10:30:58 +00:00 |
| intel_gaudi.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| python_env_setup.inc.md | Migrate docs from Sphinx to MkDocs (#18145) | 2025-05-23 02:09:53 -07:00 |

README.md

---
title: Installation
---

[](){ #installation-index }

vLLM supports the following hardware platforms: