mirror of https://github.com/docker/docs.git
Fix the GenAI use case guide

Attempt to make it clearer where users can expect to be able to use GPU acceleration. Currently that is (1) docker-ce on Linux; (2) Docker Desktop on Windows with the WSL2 backend.

Signed-off-by: Piotr Stankiewicz <piotr.stankiewicz@docker.com>

parent 27645332ad
commit 728f70cc85
@@ -6,7 +6,11 @@ description: Learn how to containerize a generative AI (GenAI) application.
 ## Prerequisites
 
-* You have installed the latest version of [Docker Desktop](../../../get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
+> **Note**
+>
+> GenAI applications can often benefit from GPU acceleration. Currently Docker Desktop supports GPU acceleration only on [Windows with the WSL2 backend](../../../desktop/gpu.md#using-nvidia-gpus-with-wsl2). Linux users can also access GPU acceleration using a native installation of the [Docker Engine](../../../engine/install/_index.md).
+
+* You have installed the latest version of [Docker Desktop](../../../get-docker.md) or, if you are a Linux user and are planning to use GPU acceleration, [Docker Engine](../../../engine/install/_index.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
 * You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
 
 ## Overview
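As an aside on the GPU prerequisite above: on a host where GPU access is set up (NVIDIA Container Toolkit on Linux, or Docker Desktop with the WSL2 backend on Windows), you can confirm that containers can see the GPU before following the guide. This command is an illustration, not part of the diff; the CUDA image tag is an assumption and any available `nvidia/cuda` base tag works.

```shell
# Run nvidia-smi inside a CUDA base image. On a correctly configured host this
# prints the driver version and the detected GPUs; otherwise `docker run` fails
# with a runtime error. (Image tag is an example.)
docker run --rm --gpus=all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```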
@@ -130,4 +134,4 @@ Related information:
 In the next section, you'll learn how you can run your application, database, and LLM service all locally using Docker.
 
-{{< button text="Develop your application" url="develop.md" >}}
+{{< button text="Develop your application" url="develop.md" >}}
@@ -84,7 +84,7 @@ The sample application supports both [Ollama](https://ollama.ai/) and [OpenAI](h
 While all platforms can use any of the previous scenarios, the performance and
 GPU support may vary. You can use the following guidelines to help you choose the appropriate option:
-- Run Ollama in a container if you're on Linux or Windows 11, you
+- Run Ollama in a container if you're on Linux, and using a native installation of the Docker Engine, or Windows 10/11, and using Docker Desktop, you
   have a CUDA-supported GPU, and your system has at least 8 GB of RAM.
 - Run Ollama outside of a container if you're on an Apple silicon Mac.
 - Use OpenAI if the previous two scenarios don't apply to you.
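For the first scenario in that list, the shape of the command matches what Ollama documents for its official image; the container name and volume name here are just examples, not taken from the guide.

```shell
# Start Ollama in a container with GPU access, persisting downloaded models in a
# named volume and exposing Ollama's default API port 11434.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```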
@@ -98,8 +98,8 @@ When running Ollama in a container, you should have a CUDA-supported GPU. While
 To run Ollama in a container and provide GPU access:
 1. Install the prerequisites.
-   - For Linux, install the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-container-toolkit).
-   - For Windows 11, install the latest [NVIDIA driver](https://www.nvidia.com/Download/index.aspx).
+   - For Docker Engine on Linux, install the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-container-toolkit).
+   - For Docker Desktop on Windows 10/11, install the latest [NVIDIA driver](https://www.nvidia.com/Download/index.aspx) and make sure you are using the [WSL2 backend](../../../desktop/wsl/index.md/#turn-on-docker-desktop-wsl-2).
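On the Docker Engine on Linux path, installing the toolkit package alone is not enough; Docker's runtime also has to be configured to use it. A sketch of the usual post-install steps from the NVIDIA Container Toolkit documentation:

```shell
# Register the NVIDIA runtime with the Docker daemon, then restart it so the
# new runtime configuration takes effect.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```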
 2. Add the Ollama service and a volume in your `compose.yaml`. The following is
    the updated `compose.yaml`:
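The updated `compose.yaml` content itself is truncated in this view. Purely as an illustration (the service and volume names are assumptions, not taken from the diff), a GPU-enabled Ollama service in Compose generally follows this shape, using the Compose `deploy.resources.reservations.devices` syntax for GPU requests:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  ollama:
```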
@@ -244,4 +244,4 @@ Related information:
 ## Next steps
 
-See samples of more GenAI applications in the [GenAI Stack demo applications](https://github.com/docker/genai-stack).
+See samples of more GenAI applications in the [GenAI Stack demo applications](https://github.com/docker/genai-stack).