# Qwen2.5-Omni Offline Inference Examples

This folder provides several example scripts showing how to run Qwen2.5-Omni inference offline.

## Thinker Only

```bash
# Audio + image + video
python examples/offline_inference/qwen2_5_omni/only_thinker.py \
    -q mixed_modalities

# Read vision and audio inputs from a single video file
# NOTE: The V1 engine does not support interleaved modalities yet.
VLLM_USE_V1=0 \
python examples/offline_inference/qwen2_5_omni/only_thinker.py \
    -q use_audio_in_video

# Multiple audios
VLLM_USE_V1=0 \
python examples/offline_inference/qwen2_5_omni/only_thinker.py \
    -q multi_audios
```

This script runs the thinker part of Qwen2.5-Omni and generates text responses.
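For reference, the core of the thinker-only flow looks roughly like the sketch below. This is a minimal, illustrative outline rather than the script itself: the exact prompt template, placeholder tokens, and bundled asset names are assumptions that should be checked against `only_thinker.py` and the model's chat template.

```python
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset
from vllm.assets.image import ImageAsset

# Load Qwen2.5-Omni; in vLLM only the thinker (text-generating) part runs.
llm = LLM(
    model="Qwen/Qwen2.5-Omni-7B",
    max_model_len=8192,
    limit_mm_per_prompt={"audio": 1, "image": 1},
    # To read vision + audio from one video file (the use_audio_in_video
    # query above), the example passes a processor kwarg along these lines:
    # mm_processor_kwargs={"use_audio_in_video": True},
)

# Placeholder tokens assumed from the Qwen2.5-Omni chat template;
# verify them against the model's tokenizer/processor config.
prompt = (
    "<|im_start|>user\n"
    "<|audio_bos|><|AUDIO|><|audio_eos|>"
    "<|vision_bos|><|IMAGE|><|vision_eos|>"
    "What is recited in the audio, and what is in the image?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

outputs = llm.generate(
    {
        "prompt": prompt,
        "multi_modal_data": {
            # (waveform, sample_rate) tuple and a PIL image from test assets
            "audio": AudioAsset("mary_had_lamb").audio_and_sample_rate,
            "image": ImageAsset("cherry_blossom").pil_image,
        },
    },
    SamplingParams(temperature=0.2, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```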

You can also test Qwen2.5-Omni on a single modality:

```bash
# Process audio inputs
python examples/offline_inference/audio_language.py \
    --model-type qwen2_5_omni

# Process image inputs
python examples/offline_inference/vision_language.py \
    --modality image \
    --model-type qwen2_5_omni

# Process video inputs
python examples/offline_inference/vision_language.py \
    --modality video \
    --model-type qwen2_5_omni
```
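Under the hood, the single-modality commands above boil down to passing one item in `multi_modal_data`. A minimal audio-only sketch is shown below; again, this is an untested outline, and the placeholder tokens and asset name are assumptions to verify against the example scripts.

```python
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset

# One audio input per prompt is enough for this single-modality test.
llm = LLM(
    model="Qwen/Qwen2.5-Omni-7B",
    max_model_len=4096,
    limit_mm_per_prompt={"audio": 1},
)

# Audio placeholder tokens assumed from the Qwen2.5-Omni chat template.
prompt = (
    "<|im_start|>user\n"
    "<|audio_bos|><|AUDIO|><|audio_eos|>"
    "What is recited in the audio?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

outputs = llm.generate(
    {
        "prompt": prompt,
        # (waveform, sample_rate) tuple from a bundled test asset
        "multi_modal_data": {
            "audio": AudioAsset("mary_had_lamb").audio_and_sample_rate,
        },
    },
    SamplingParams(temperature=0.2, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```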