add Compose how-to page for Docker Model Runner support with Compose (#22392)


## Description
Add how-to page explaining how to use Docker Model Runner with Compose

## Related issues or tickets

https://docker.atlassian.net/browse/APCLI-1068

## Reviews


- [x] Technical review
- [x] Editorial review
- [ ] Product review

---------

Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
Co-authored-by: aevesdocker <allie.sadler@docker.com>

@@ -0,0 +1,66 @@
---
title: Use Docker Model Runner
description: Learn how to integrate Docker Model Runner with Docker Compose to build AI-powered applications
keywords: compose, docker compose, model runner, ai, llm, artificial intelligence, machine learning
weight: 111
params:
  sidebar:
    badge:
      color: green
      text: New
---
{{< summary-bar feature_name="Compose model runner" >}}

Docker Model Runner can be integrated with Docker Compose to run AI models as part of your multi-container applications.
This lets you define and run AI-powered applications alongside your other services.

## Prerequisites

- Docker Compose v2.35 or later
- Docker Desktop 4.41 or later
- Docker Desktop for Mac with Apple Silicon or Docker Desktop for Windows with NVIDIA GPU
- [Docker Model Runner enabled in Docker Desktop](/manuals/desktop/features/model-runner.md#enable-docker-model-runner)

## Provider services

Compose introduces a new service type called `provider` that allows you to declare platform capabilities required by your application. For AI models, you can use the `model` type to declare model dependencies.

Here's an example of how to define a model provider:

```yaml
services:
chat:
image: my-chat-app
depends_on:
- ai-runner
ai-runner:
provider:
type: model
options:
model: ai/smollm2
```

Notice the dedicated `provider` attribute in the `ai-runner` service.
This attribute specifies that the service is a model provider and lets you define options such as the name of the model to use.

There is also a `depends_on` attribute in the `chat` service.
This attribute specifies that the `chat` service depends on the `ai-runner` service, which means the `ai-runner` service starts before the `chat` service so that model information can be injected into it.

## How it works

During the `docker compose up` process, Docker Model Runner automatically pulls and runs the specified model.
It also sends Compose the model tag name and the URL to access the model runner.
This information is then passed to services that declare a dependency on the model provider.
In the example above, the `chat` service receives two environment variables prefixed by the service name:

- `AI-RUNNER_URL`, with the URL to access the model runner
- `AI-RUNNER_MODEL`, with the model name, which can be passed with the URL to request the model

This lets the `chat` service interact with the model and use it for its own purposes.
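
To see what consuming these variables looks like, here is a minimal sketch of a `chat` service written in Go. It assumes the model runner exposes an OpenAI-compatible `chat/completions` endpoint under the injected base URL; the endpoint path, request shape, and prompt are illustrative assumptions, not part of the Compose contract.

```go
// main.go: a minimal sketch of the chat service. Assumes the model
// runner speaks an OpenAI-compatible API at the injected base URL;
// the exact path and payload shape are assumptions for illustration.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Both variables are injected by Compose from the ai-runner provider.
	baseURL := os.Getenv("AI-RUNNER_URL")
	model := os.Getenv("AI-RUNNER_MODEL")

	// Hypothetical OpenAI-style chat-completions request body.
	payload, err := json.Marshal(map[string]any{
		"model": model,
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	})
	if err != nil {
		panic(err)
	}

	endpoint := strings.TrimRight(baseURL, "/") + "/chat/completions"
	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

Note that an environment variable name containing a hyphen can't be referenced directly from a POSIX shell, but most language runtimes, like Go here, can still read it from the process environment.
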
## Reference

- [Docker Model Runner documentation](/manuals/desktop/features/model-runner.md)

@@ -105,6 +105,8 @@ Compose mac address:
  requires: Docker Compose [2.23.2](/manuals/compose/releases/release-notes.md#2232) and later
Compose menu:
  requires: Docker Compose [2.26.0](/manuals/compose/releases/release-notes.md#2260) and later
Compose model runner:
  requires: Docker Compose [2.35.0](/manuals/compose/releases/release-notes.md#2350) and later, and Docker Desktop 4.41 and later
Compose OCI artifact:
  requires: Docker Compose [2.34.0](/manuals/compose/releases/release-notes.md#2340) and later
Compose replace file: