---
description: Containerize RAG application using Ollama and Docker
keywords: python, generative ai, genai, llm, ollama, rag, qdrant
title: Build a RAG application using Ollama and Docker
linkTitle: RAG Ollama application
summary: This guide demonstrates how to use Docker to deploy Retrieval-Augmented Generation (RAG) models with Ollama.
subjects: [ai]
levels: [beginner]
aliases:
  - /guides/use-case/rag-ollama/
params:
  time: 20 minutes
---

The Retrieval-Augmented Generation (RAG) guide teaches you how to containerize an existing RAG application using Docker. The example application is a RAG that acts like a sommelier, giving you the best pairings between wines and food. In this guide, you'll learn how to:

  • Containerize and run a RAG application
  • Set up a local environment to run the complete RAG stack locally for development
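A RAG stack like this one typically combines an application container with Ollama for model inference and Qdrant for vector search (both appear in this guide's keywords). As a rough illustration of what "running the complete RAG stack locally" can look like, here is a minimal Compose sketch; the service names, images, and ports are assumptions for illustration, not the guide's actual configuration:

```yaml
# Hypothetical compose.yaml sketch -- not the guide's actual file.
services:
  app:
    build: .             # the containerized RAG application
    ports:
      - "8000:8000"      # assumed application port
    depends_on:
      - ollama
      - qdrant
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"    # default Ollama API port
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"      # default Qdrant REST port
```

With a file along these lines in place, `docker compose up --build` would build the application image and start all three services locally.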

Start by containerizing an existing RAG application.