---
title: Modal
---

[](){ #deployment-modal }

vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.

For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).
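
The tutorial linked above walks through a complete serving setup. As a rough illustration of the programming model only, the sketch below runs vLLM's offline `LLM` API inside a serverless Modal GPU function; the app name, model, GPU type, and file name are placeholder assumptions, not taken from the tutorial.

```python
# deploy_vllm_modal.py -- a minimal sketch, not the full tutorial setup.
import modal

# Container image with vLLM installed; pin versions as needed.
image = modal.Image.debian_slim(python_version="3.11").pip_install("vllm")

app = modal.App("vllm-modal-sketch", image=image)  # hypothetical app name


@app.function(gpu="A100", timeout=600)
def generate(prompt: str) -> str:
    """Run offline inference with vLLM on a serverless GPU."""
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # small placeholder model
    sampling = SamplingParams(temperature=0.8, max_tokens=64)
    outputs = llm.generate([prompt], sampling)
    return outputs[0].outputs[0].text


@app.local_entrypoint()
def main():
    # Invoked locally with `modal run deploy_vllm_modal.py`;
    # the function body executes remotely on Modal's GPUs.
    print(generate.remote("Serverless GPU inference with vLLM is"))
```

Running `modal run deploy_vllm_modal.py` executes `main` locally while `generate` runs remotely on a Modal GPU. For a persistent, auto-scaling API endpoint, follow the linked tutorial instead.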