typo
parent d85b0ef5b5
commit 90a64dddc0
@@ -25,7 +25,7 @@ We are excited to announce that vLLM has [started the incubation process into LF
 
 ### Performance is top priority
 
-The vLLM contributor is doubling down to ensure vLLM is a fastest and easiest-to-use LLM inference and serving engine.
+The vLLM contributors are doubling down to ensure vLLM is a fastest and easiest-to-use LLM inference and serving engine.
 
 To recall our roadmap, we focus vLLM on six objectives: wide model coverage, broad hardware support, top performance, production-ready, thriving open source community, and extensible architecture.