diff --git a/_posts/2025-01-24-v1.md b/_posts/2025-01-24-v1.md
index 9783e69..6de2346 100644
--- a/_posts/2025-01-24-v1.md
+++ b/_posts/2025-01-24-v1.md
@@ -118,7 +118,7 @@ Finally, please note that you can continue using V0 and maintain backward compat
 To use vLLM V1:
 
 1. Install the latest version of vLLM with `pip install vllm --upgrade`.
 2. **Set the environment variable `export VLLM_USE_V1=1`.**
-3. Use vLLM’s [Python interface](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/basic.py) or OpenAI-compatible server (`vllm serve <model>`). You don’t need any change to the existing API.
+3. Use vLLM’s [Python API](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/basic.py) or OpenAI-compatible server (`vllm serve <model>`). You don’t need any change to the existing API.
 
 Please try it out and share your feedback!
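
For reference, the three steps in the hunk above compose into a short script like the following. This is a minimal sketch: the model name `facebook/opt-125m` and the prompt are illustrative stand-ins, and the `LLM`/`SamplingParams` calls are the existing vLLM Python API that the post says needs no changes.

```python
import os

# Step 2: opt in to the V1 engine. Set the variable before vLLM is
# imported (or use `export VLLM_USE_V1=1` in the shell, as the post shows).
os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM, SamplingParams

# Step 3: the existing Python API works unchanged under V1.
# "facebook/opt-125m" is a placeholder; use any model vLLM supports.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Hello, my name is"], params)
for output in outputs:
    print(output.outputs[0].text)
```

The server path is equivalent: running `VLLM_USE_V1=1 vllm serve <model>` exposes the same OpenAI-compatible endpoints, so existing clients need no changes either.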