---
title: Streamlit
---

[](){ #deployment-streamlit }
[Streamlit](https://streamlit.io/) lets you transform Python scripts into interactive web apps in minutes instead of weeks, whether you are building dashboards, generating reports, or creating chat apps.

It can be quickly integrated with vLLM as a backend API server, enabling powerful LLM inference via API calls.
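Because the vLLM server exposes an OpenAI-compatible REST API, the chat endpoint it serves can be exercised with nothing but the Python standard library. The sketch below is illustrative, not part of the example script: it assumes the default server address of `http://localhost:8000/v1` and the model used later in this guide.

```python
# Minimal sketch: build a request for a running vLLM server's
# OpenAI-compatible /chat/completions endpoint (standard library only).
# Assumes the server started with `vllm serve ...` on localhost:8000.
import json
import urllib.request

API_BASE = "http://localhost:8000/v1"  # vLLM's default listen address

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for the chat completions endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it (requires the server to be running):
#   with urllib.request.urlopen(build_chat_request("qwen/Qwen1.5-0.5B-Chat", "Hi")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

The example script in this guide wraps the same endpoint with the `openai` client and a Streamlit UI rather than raw `urllib` calls.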
## Prerequisites

- Set up the vLLM environment
## Deploy
- Start the vLLM server with a supported chat completion model, e.g.:

    ```bash
    vllm serve qwen/Qwen1.5-0.5B-Chat
    ```
- Install streamlit and openai:

    ```bash
    pip install streamlit openai
    ```
- Use the script: <gh-file:examples/online_serving/streamlit_openai_chatbot_webserver.py>
- Start the Streamlit web UI and start chatting:

    ```bash
    streamlit run streamlit_openai_chatbot_webserver.py

    # or specify the VLLM_API_BASE or VLLM_API_KEY
    VLLM_API_BASE="http://vllm-server-host:vllm-server-port/v1" \
    streamlit run streamlit_openai_chatbot_webserver.py

    # start with debug mode to view more details
    streamlit run streamlit_openai_chatbot_webserver.py --logger.level=debug
    ```
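The `VLLM_API_BASE` / `VLLM_API_KEY` environment variables above let the web UI point at a remote vLLM server. A minimal sketch of how such overrides can be resolved is shown below; `resolve_api_config` is a hypothetical helper, and the bundled example script may differ in its defaults.

```python
# Sketch of resolving VLLM_API_BASE / VLLM_API_KEY overrides with
# local-server fallbacks (hypothetical helper, for illustration only).
import os

def resolve_api_config() -> tuple[str, str]:
    """Return (api_base, api_key) for the OpenAI client."""
    # Fall back to vLLM's default local address when no override is set.
    api_base = os.getenv("VLLM_API_BASE", "http://localhost:8000/v1")
    # vLLM only checks the key when started with --api-key; "EMPTY" is a
    # common placeholder because the openai client requires some value.
    api_key = os.getenv("VLLM_API_KEY", "EMPTY")
    return api_base, api_key
```

The resulting pair can be passed straight to `openai.OpenAI(base_url=api_base, api_key=api_key)` inside the Streamlit app.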