---
description: Containerize a RAG application using Ollama and Docker
keywords: python, generative ai, genai, llm, ollama, rag, qdrant
title: Build a RAG application using Ollama and Docker
linkTitle: RAG Ollama application
toc_min: 1
toc_max: 2
---

The Retrieval Augmented Generation (RAG) guide teaches you how to containerize an existing RAG application using Docker. The example application is a RAG application that acts as a sommelier, recommending the best pairings of wine and food. In this guide, you'll learn how to:

* Containerize and run a RAG application
* Set up a local environment to run the complete RAG stack for development

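Based on the keywords for this guide, the complete stack typically combines three pieces: the Python RAG application, an Ollama model server, and a Qdrant vector database. As a rough illustration only (the service names, build context, ports, and environment variables below are assumptions, not the guide's actual files), a Compose file wiring such a stack together might look like this:

```yaml
# compose.yaml — hypothetical sketch of a RAG stack; names and ports are assumptions
services:
  app:
    build: .                  # the Python RAG application (assumed Dockerfile in this directory)
    ports:
      - "8000:8000"           # assumed application port
    depends_on:
      - ollama
      - qdrant
    environment:
      - OLLAMA_HOST=http://ollama:11434   # Ollama's default API port
      - QDRANT_URL=http://qdrant:6333     # Qdrant's default HTTP port

  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"

  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
```

With a file like this, `docker compose up` would start all three services on one network, letting the application reach the model server and vector database by service name. The guide's own configuration may differ.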
Start by containerizing an existing RAG application.

{{< button text="Containerize a RAG app" url="containerize.md" >}}