diff --git a/_posts/2024-07-25-lfai-perf.md b/_posts/2024-07-25-lfai-perf.md
index 2b8dd91..b519cf4 100644
--- a/_posts/2024-07-25-lfai-perf.md
+++ b/_posts/2024-07-25-lfai-perf.md
@@ -25,7 +25,7 @@ We are excited to announce that vLLM has [started the incubation process into LF
 ### Performance is top priority
 
-The vLLM contributor is doubling down to ensure vLLM is a fastest and easiest-to-use LLM inference and serving engine.
+The vLLM contributors are doubling down to ensure vLLM is the fastest and easiest-to-use LLM inference and serving engine.
 
 To recall our roadmap, we focus vLLM on six objectives: wide model coverage, broad hardware support, top performance, production-ready, thriving open source community, and extensible architecture.