From a8f7abcc58ffa5f48e7b3f124e02a8b1920879eb Mon Sep 17 00:00:00 2001
From: WoosukKwon
Date: Fri, 24 Jan 2025 13:49:51 -0800
Subject: [PATCH] Minor

Signed-off-by: WoosukKwon
---
 _posts/2025-01-24-v1.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2025-01-24-v1.md b/_posts/2025-01-24-v1.md
index 9783e69..6de2346 100644
--- a/_posts/2025-01-24-v1.md
+++ b/_posts/2025-01-24-v1.md
@@ -118,7 +118,7 @@ Finally, please note that you can continue using V0 and maintain backward compat
 To use vLLM V1:
 1. Install the latest version of vLLM with `pip install vllm --upgrade`.
 2. **Set the environment variable `export VLLM_USE_V1=1`.**
-3. Use vLLM’s [Python interface](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/basic.py) or OpenAI-compatible server (`vllm serve `). You don’t need any change to the existing API.
+3. Use vLLM’s [Python API](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/basic.py) or OpenAI-compatible server (`vllm serve `). You don’t need any change to the existing API.

 Please try it out and share your feedback!
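
For reference, a minimal sketch of the Python API usage that the changed line links to (illustrative only; the model name is a placeholder and the V1 engine is assumed to be enabled via the environment variable from step 2):

```python
# Sketch of offline inference with vLLM's Python API (run with: export VLLM_USE_V1=1).
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="facebook/opt-125m")  # placeholder model name
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```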