

# vLLM Ascend Plugin

| About Ascend | Documentation | #sig-ascend | Users Forum | Weekly Meeting |

English | 中文


## Latest News 🔥

  • [2025/06] The User stories page is now live! It kicks off with LLaMA-Factory, verl, TRL, and GPUStack to demonstrate how vLLM Ascend helps Ascend users enhance their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
  • [2025/06] The Contributors page is now live! All contributions deserve to be recorded; thanks to all contributors.
  • [2025/05] We've released the first official version, v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
  • [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
  • [2025/02] The vLLM community officially created the vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
  • [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.

## Overview

vLLM Ascend (vllm-ascend) is a community-maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.

It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples Ascend NPU integration from the vLLM core.
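
In practice, that pluggable interface amounts to an entry point through which the plugin hands vLLM the import path of its platform class. The sketch below is a hypothetical, minimal rendering of that hook; the actual module and class names in vllm-ascend may differ.

```python
# Hedged sketch of the hardware-pluggable hook from the RFC: a plugin
# package exposes a register() function through the "vllm.platform_plugins"
# entry point, and vLLM imports the platform class whose path it returns.
def register() -> str:
    # Returning an import path (not the class object) keeps plugin
    # discovery cheap until vLLM actually selects this platform.
    return "vllm_ascend.platform.NPUPlatform"
```

Because discovery happens through entry points, adding a new backend requires no changes to vLLM itself.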

With the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.

## Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
  • OS: Linux
  • Software:
    • Python >= 3.9, < 3.12
    • CANN >= 8.1.RC1
    • PyTorch >= 2.5.1, torch-npu >= 2.5.1.post1.dev20250619
    • vLLM (the same version as vllm-ascend)
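
A quick way to sanity-check the software prerequisites above is a short script like the following (a sketch, assuming torch and torch-npu are already installed; torch_npu attaches the npu device module to torch on import):

```python
import platform

import torch
import torch_npu  # registers the "npu" device with PyTorch on import

print("Python   :", platform.python_version())  # expect >= 3.9, < 3.12
print("torch    :", torch.__version__)          # expect >= 2.5.1
print("torch_npu:", torch_npu.__version__)      # expect >= 2.5.1.post1.dev20250619
print("NPU ready:", torch.npu.is_available())   # True once CANN and the driver are set up
```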

## Getting Started

Please use the following recommended versions to get started quickly:

| Version | Release type | Doc |
|---------|--------------|-----|
| v0.9.2rc1 | Latest release candidate | QuickStart and Installation for more details |
| v0.7.3.post1 | Latest stable version | QuickStart and Installation for more details |
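
Once matching versions of vLLM and vllm-ascend are installed, ordinary vLLM code runs unchanged, because the plugin is discovered automatically. A minimal offline-inference sketch (the model name here is only illustrative):

```python
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# No Ascend-specific code is needed: vllm-ascend supplies the NPU platform.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```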

## Contributing

See CONTRIBUTING for more details: it is a step-by-step guide that helps you set up the development environment, build, and test.

We welcome and value any contributions and collaborations.

## Branch

vllm-ascend has a main branch and dev branches.

  • main: the main branch corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
  • vX.Y.Z-dev: development branches, created alongside selected vLLM releases. For example, v0.7.3-dev is the dev branch for vLLM v0.7.3.

Below are the maintained branches:

| Branch | Status | Note |
|--------|--------|------|
| main | Maintained | CI commitment for vLLM main branch and vLLM 0.9.x branch |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM v0.7.3; only bug fixes are allowed, and no new release tags will be created |
| v0.9.1-dev | Maintained | CI commitment for vLLM v0.9.1 |

Please refer to Versioning policy for more details.

## Weekly Meeting

## License

Apache License 2.0, as found in the LICENSE file.