vLLM / vllm
Python · 0 stars · 0 forks
A high-throughput and memory-efficient inference and serving engine for LLMs
Topics: llm, mlops, pytorch, cuda, inference, llama, llm-serving, llmops, model-serving, qwen, rocm, tpu, trainium, transformer, amd, xpu, deepseek, gpt, hpu, inferentia
Updated 2025-07-04 16:00:34 +08:00
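For orientation, here is a minimal sketch of offline batch inference with this engine, based on vLLM's publicly documented LLM/SamplingParams quickstart API. The model name and prompts are placeholder choices, not taken from this page.

    # Minimal offline-inference sketch using vLLM's public API.
    # Assumes `pip install vllm`; the model name is a placeholder.
    from vllm import LLM, SamplingParams

    prompts = ["The capital of France is", "vLLM is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # Load the model once; vLLM manages KV-cache memory (PagedAttention)
    # and batches requests internally for high throughput.
    llm = LLM(model="facebook/opt-125m")

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)

The same engine can also be run as an OpenAI-compatible HTTP server (`vllm serve <model>`), which is the "serving" side of the description above.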