mirror of https://github.com/vllm-project/vllm.git
docs: Add tutorial on deploying vLLM model with KServe (#2586)
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
parent 27ca23dc00
commit 49d849b3ab
@@ -70,6 +70,7 @@ Documentation
    serving/distributed_serving
    serving/run_on_sky
+   serving/deploying_with_kserve
    serving/deploying_with_triton
    serving/deploying_with_docker
    serving/serving_with_langchain
@@ -0,0 +1,8 @@
+.. _deploying_with_kserve:
+
+Deploying with KServe
+============================
+
+vLLM can be deployed with `KServe <https://github.com/kserve/kserve>`_ on Kubernetes for highly scalable distributed model serving.
+
+Please see `this guide <https://kserve.github.io/website/latest/modelserving/v1beta1/llm/vllm/>`_ for more details on using vLLM with KServe.
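
For context on what the new page describes: below is a minimal sketch of a KServe ``InferenceService`` that runs vLLM's OpenAI-compatible server as a custom predictor container. This manifest is not part of the commit; the resource name, image tag, model, and GPU count are illustrative assumptions, and the KServe guide linked in the new page remains the authoritative reference.

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: vllm-opt-125m                     # hypothetical name
spec:
  predictor:
    containers:
      - name: kserve-container
        image: vllm/vllm-openai:latest    # vLLM's published OpenAI-server image; tag is an assumption
        args:                             # appended to the image's API-server entrypoint
          - --model
          - facebook/opt-125m             # small example model; substitute your own
          - --port
          - "8080"                        # KServe routes to port 8080 on custom containers by default
        resources:
          limits:
            nvidia.com/gpu: "1"           # adjust to the model's GPU requirements

Applied with ``kubectl apply -f``, KServe creates the serving deployment and exposes the container's OpenAI-compatible HTTP endpoints (e.g. ``/v1/completions``) through the InferenceService URL.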