---
description: Containerize RAG application using Ollama and Docker
keywords: python, generative ai, genai, llm, ollama, rag, qdrant
title: Build a RAG application using Ollama and Docker
linkTitle: RAG Ollama application
toc_min: 1
toc_max: 2
---
This Retrieval-Augmented Generation (RAG) guide teaches you how to containerize an existing RAG application using Docker. The example application acts as a sommelier, recommending the best pairings between wines and food. In this guide, you'll learn how to:
- Containerize and run a RAG application
- Set up a local environment to run the complete RAG stack locally for development (see the sketch after this list)
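
To give a sense of what the complete stack involves, here is a minimal Compose sketch. The service names, build context, images, and ports are assumptions for illustration only; the guide's own compose.yaml, covered in the following pages, may differ.

```yaml
# Hypothetical sketch of a containerized RAG stack: the application server,
# an Ollama service for the LLM, and a Qdrant service for the vector store.
services:
  server:
    build:
      context: .          # assumed: the RAG app's Dockerfile lives at the repo root
    ports:
      - "8000:8000"        # assumed application port
    depends_on:
      - ollama
      - qdrant
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"      # Ollama's default API port
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"        # Qdrant's default HTTP port
```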
Start by containerizing an existing RAG application.
{{< button text="Containerize a RAG app" url="containerize.md" >}}