From 90a64dddc095ecb80ff19ca06f7dcc51d15bd247 Mon Sep 17 00:00:00 2001
From: simon-mo
Date: Thu, 25 Jul 2024 15:03:48 -0700
Subject: [PATCH] typo

---
 _posts/2024-07-25-lfai-perf.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2024-07-25-lfai-perf.md b/_posts/2024-07-25-lfai-perf.md
index 2b8dd91..b519cf4 100644
--- a/_posts/2024-07-25-lfai-perf.md
+++ b/_posts/2024-07-25-lfai-perf.md
@@ -25,7 +25,7 @@ We are excited to announce that vLLM has [started the incubation process into LF
 
 ### Performance is top priority
 
-The vLLM contributor is doubling down to ensure vLLM is a fastest and easiest-to-use LLM inference and serving engine.
+The vLLM contributors are doubling down to ensure vLLM is the fastest and easiest-to-use LLM inference and serving engine.
 
 To recall our roadmap, we focus vLLM on six objectives: wide model coverage, broad hardware support, top performance, production-ready, thriving open source community, and extensible architecture.