From 0121eb45e9cd83ee547df6527e72f9fc86338db2 Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Fri, 24 Jan 2025 16:51:39 -0500
Subject: [PATCH] Update 2025-01-27-intro-to-llama-stack-with-vllm.md

---
 _posts/2025-01-27-intro-to-llama-stack-with-vllm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2025-01-27-intro-to-llama-stack-with-vllm.md b/_posts/2025-01-27-intro-to-llama-stack-with-vllm.md
index 6d48e28..e5efefe 100644
--- a/_posts/2025-01-27-intro-to-llama-stack-with-vllm.md
+++ b/_posts/2025-01-27-intro-to-llama-stack-with-vllm.md
@@ -13,7 +13,7 @@ We are excited to announce that vLLM inference provider is now available in [Lla
 
 Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations.
 
-Llama Stack focuses on making it easy to build production applications with a variety of models - ranging from the latest Llama 3.3 model to specialized models like Llama Guard for safety. More models beyond the Llama model family are in the works. The goal is to provide pre-packaged implementations (aka “distributions”) which can be run in a variety of deployment environments. The Stack can assist you in your entire app development lifecycle - start iterating on local, mobile or desktop and seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience is available.
+Llama Stack focuses on making it easy to build production applications with a variety of models - ranging from the latest Llama 3.3 model to specialized models like Llama Guard for safety and other models. The goal is to provide pre-packaged implementations (aka “distributions”) which can be run in a variety of deployment environments. The Stack can assist you in your entire app development lifecycle - start iterating on local, mobile or desktop and seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience is available.
 
 Each specific implementation of an API is called a "Provider" in this architecture. Users can swap providers via configuration. `vLLM` is a prominent example of a high-performance API backing the inference API.
 
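
To illustrate the "swap providers via configuration" point in the final paragraph of the hunk above, the following is a minimal sketch of how a remote vLLM inference provider might appear in a Llama Stack run configuration. The provider type `remote::vllm`, the field names, and the URL are assumptions for illustration and may differ between Llama Stack releases; consult the Llama Stack documentation for the exact schema.

```yaml
# Illustrative sketch of part of a Llama Stack run configuration (run.yaml).
# Keys and the provider type are assumed, not taken from this patch.
providers:
  inference:
    - provider_id: vllm
      provider_type: remote::vllm        # back the inference API with a vLLM server
      config:
        url: http://localhost:8000/v1    # assumed address of a running vLLM OpenAI-compatible endpoint
```

Swapping the inference backend would then be a matter of changing this provider entry rather than changing application code, since the application continues to call the same Llama Stack inference API.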