Signed-off-by: WoosukKwon <woosuk.kwon@berkeley.edu>
WoosukKwon 2025-01-26 16:36:04 -08:00
parent 02d5e058ff
commit e890f28cea
1 changed file with 3 additions and 1 deletion


@@ -104,6 +104,8 @@ The final piece of the puzzle for vLLM V1 was integrating [FlashAttention 3](htt
# Performance
Thanks to the extensive improvements in vLLM V1, we have observed significant performance gains across various models and hardware backends. Here are some key highlights:
# Limitations & Future Work
While vLLM V1 shows promising results, it is still in its alpha stage and lacks several features from V0. Here's a clarification:
@@ -115,7 +117,7 @@ V1 supports decoder-only Transformers like Llama, mixture-of-experts (MoE) model
V1 currently lacks support for the log probs and prompt log probs sampling parameters, pipeline parallelism, structured decoding, speculative decoding, Prometheus metrics, and LoRA. We are actively working to close this feature gap and add new optimizations. Please stay tuned!
**Hardware Support:**
V1 currently supports only Ampere or later NVIDIA GPUs. We are working on support for other hardware backends such as TPU.
V1 currently supports only Ampere or later NVIDIA GPUs. We are actively working to extend support to other hardware backends such as TPU.
Finally, please note that you can continue using V0 and maintain backward compatibility by not setting `VLLM_USE_V1=1`.
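As a rough sketch of what that toggle looks like in practice, the snippet below sets `VLLM_USE_V1=1` before constructing the engine to opt into V1, and leaving the variable unset keeps the V0 behavior. The model name and sampling settings here are placeholders, not part of the original post.

```python
# Minimal sketch: opting into the V1 alpha engine via the VLLM_USE_V1
# environment variable. Leave it unset (or set to "0") to stay on V0.
import os

os.environ["VLLM_USE_V1"] = "1"  # must be set before the engine is created

from vllm import LLM, SamplingParams

# Example model; substitute any model supported by V1 (decoder-only Transformers, MoE, etc.).
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The future of LLM inference is"], params)
print(outputs[0].outputs[0].text)
```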