# Using vLLM

vLLM supports the following usage patterns:

- [Inference and Serving](../serving/offline_inference.md): Run a single instance of a model.
- [Deployment](../deployment/docker.md): Scale up model instances for production.
- [Training](../training/rlhf.md): Train or fine-tune a model.
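
As a minimal sketch of the first pattern, the offline `LLM` API runs a single model instance in-process; the model name below is only an illustrative choice:

```python
from vllm import LLM, SamplingParams

# Prompts to complete and the sampling settings to use.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Load a single model instance (example model; substitute your own).
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batch.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```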