Update 2025-01-27-intro-to-llama-stack-with-vllm.md
This commit is contained in: parent 4b3bc6dc25, commit e95f52795f
@@ -9,9 +9,7 @@ We are excited to announce that vLLM inference provider is now available in [Lla
 
 # What is Llama Stack?
 
-<img align="right" src="/assets/figures/llama-stack/llama-stack.png" alt="llama-stack-diagram" width="50%" height="50%">
-
-
-
-
+<div align="center">
+<img src="/assets/figures/llama-stack/llama-stack.png" alt="Icon" style="width: 60%; vertical-align:middle;">
+</div>
 Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations.
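To illustrate what "interoperable APIs with Service Providers providing their implementations" means for the vLLM announcement in this post, here is a minimal sketch of calling Llama Stack's standardized inference API from the `llama-stack-client` Python SDK against a server whose inference provider is backed by vLLM. The base URL, port, model id, and prompt are assumptions for illustration and are not taken from the diff above.

```python
# Minimal sketch: query a Llama Stack server whose inference provider is vLLM.
# Assumptions: the server listens on localhost:8321 and serves the model id below.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed address/port

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # assumed model id
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about distributed inference."},
    ],
)
print(response.completion_message.content)
```

Because the API surface is standardized, client code like this stays the same regardless of which registered inference provider (vLLM or another) backs the server.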