---
description: Containerize a RAG application using Ollama and Docker
keywords: python, generative ai, genai, llm, ollama, rag, qdrant
title: Build a RAG application using Ollama and Docker
linkTitle: RAG Ollama application
summary: |
  This guide demonstrates how to use Docker to deploy Retrieval-Augmented
  Generation (RAG) models with Ollama.
tags: [ai]
aliases:
  - /guides/use-case/rag-ollama/
params:
  time: 20 minutes
---
The Retrieval Augmented Generation (RAG) guide teaches you how to containerize an existing RAG application using Docker. The example application is a RAG that acts like a sommelier, giving you the best pairings between wines and food. In this guide, you'll learn how to:

- Containerize and run a RAG application
- Set up a local environment to run the complete RAG stack locally for development (see the sketch after this list)
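
To give a rough picture of the pieces involved, here is a minimal Compose sketch of how the complete stack could be wired together locally: the application container, an Ollama service for the language model, and a Qdrant service for the vector store. The service names, ports, and build context shown here are illustrative assumptions rather than the guide's actual configuration.

```yaml
# compose.yaml (sketch) -- illustrative only; the app service's build context,
# UI port, and service wiring are assumptions, not the guide's actual setup.
services:
  app:
    build: .                # assumption: the RAG application lives in this directory
    ports:
      - "8501:8501"         # assumption: the app exposes a web UI on this port
    depends_on:
      - ollama
      - qdrant
  ollama:
    image: ollama/ollama    # official Ollama image; serves the LLM API
    ports:
      - "11434:11434"       # Ollama's default API port
  qdrant:
    image: qdrant/qdrant    # official Qdrant vector database image
    ports:
      - "6333:6333"         # Qdrant's default REST API port
```
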
Start by containerizing an existing RAG application.