This commit is contained in:
simon-mo 2024-07-25 15:03:48 -07:00
parent d85b0ef5b5
commit 90a64dddc0
1 changed file with 1 addition and 1 deletion

@ -25,7 +25,7 @@ We are excited to announce that vLLM has [started the incubation process into LF
### Performance is top priority
- The vLLM contributor is doubling down to ensure vLLM is a fastest and easiest-to-use LLM inference and serving engine.
+ The vLLM contributors are doubling down to ensure vLLM is a fastest and easiest-to-use LLM inference and serving engine.
To recall our roadmap, we focus vLLM on six objectives: wide model coverage, broad hardware support, top performance, production-ready, thriving open source community, and extensible architecture.