From 940a264895f2c6103ba7b4a17cbfc8eb1c07f96b Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Sun, 12 Jan 2025 20:56:12 -0500
Subject: [PATCH] Acknowledgement

Signed-off-by: Yuan Tang
---
 _posts/2025-01-12-intro-to-llama-stack-with-vllm.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/_posts/2025-01-12-intro-to-llama-stack-with-vllm.md b/_posts/2025-01-12-intro-to-llama-stack-with-vllm.md
index 8a62b5d..9f7e518 100644
--- a/_posts/2025-01-12-intro-to-llama-stack-with-vllm.md
+++ b/_posts/2025-01-12-intro-to-llama-stack-with-vllm.md
@@ -392,8 +392,12 @@
 kubectl port-forward service/llama-stack-service 5000:5000
 llama-stack-client --endpoint http://localhost:5000 inference chat-completion --message "hello, what model are you?"
 ```
 TODO: More interesting prompt to congratulate reaching the end of the article
+TODO(yuan): potential mention of deployment option via KServe
 
 ## Acknowledgments
 
-TBA
+TODO(yuan and ashwin):
+* Llama Stack/Meta team (core maintainers)
+* Red Hat team (contributors, reviewers)
+* vLLM team