vllm/docs/source/deployment/frameworks/triton.md

(deployment-triton)=

# NVIDIA Triton

The [Triton Inference Server](https://github.com/triton-inference-server) hosts a tutorial demonstrating how to quickly deploy a simple [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) model using vLLM. Please see [Deploying a vLLM model in Triton](https://github.com/triton-inference-server/tutorials/blob/main/Quick_Deploy/vLLM/README.md#deploying-a-vllm-model-in-triton) for more details.
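
For orientation, the sketch below shows how a client might query such a deployment once the tutorial's model repository is being served. It assumes the model is registered under the name `vllm_model`, is reachable on `localhost:8000`, and accepts a `text_input` field via Triton's HTTP generate endpoint, as in the linked tutorial; the exact model name and field names depend on your model configuration, so treat this as illustrative rather than a supported interface.

```python
# Illustrative only: assumes a Triton server started per the linked tutorial,
# serving a vLLM-backed model named "vllm_model" on localhost:8000.
import requests

TRITON_GENERATE_URL = "http://localhost:8000/v2/models/vllm_model/generate"

payload = {
    # "text_input" and the sampling parameters follow the tutorial's example
    # model; the field names depend on the model's configuration.
    "text_input": "What is Triton Inference Server?",
    "parameters": {"stream": False, "temperature": 0},
}

response = requests.post(TRITON_GENERATE_URL, json=payload, timeout=60)
response.raise_for_status()
# The tutorial's example model returns the generated text under "text_output".
print(response.json()["text_output"])
```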