docs freshness for compose and desktop (#17003)

* docs freshness for compose and desktop

* fix

* more pages

* fix build
Allie Sadler 2023-03-30 10:46:21 +01:00 committed by GitHub
parent 2bb5109710
commit deedbe185c
6 changed files with 45 additions and 769 deletions


@ -256,3 +256,9 @@ fetch-remote:
- dest: "compose/compose-file/12-interpolation.md"
src:
- "12-interpolation.md"
- dest: "compose/compose-file/build.md"
src:
- "build.md"
- dest: "compose/compose-file/deploy.md"
src:
- "deploy.md"


@ -1,435 +1,7 @@
---
description: Compose file build reference
keywords: fig, composition, compose, docker
title: Compose file build reference
toc_max: 4
toc_min: 2
fetch_remote:
line_start: 8
line_end: -1
---
The Compose specification is a platform-neutral way to define multi-container applications. A Compose implementation
focusing on the development use case, running applications on a local machine, will naturally also support (re)building
the application from source. The Compose Build specification allows the build process to be defined within a Compose file
in a portable way.
## Definitions
The Compose Specification is extended to support an OPTIONAL `build` subsection on services. This section defines the
build requirements for the service's container image. Only a subset of Compose file services MAY define such a build
subsection, the others being created from their `image` attribute. When a build subsection is present for a service, it
is *valid* for the Compose file to omit the `image` attribute for the corresponding service, as the Compose implementation
can build the image from source.
Build can be specified either as a single string defining a context path, or as a detailed build definition.
In the former case, the whole path is used as the Docker context to execute a docker build, looking for a canonical
`Dockerfile` at the context root. The context path can be absolute or relative; a relative path MUST be resolved
from the Compose file's parent folder. As an absolute path prevents the Compose file from being portable, the Compose
implementation SHOULD warn the user accordingly.
In the latter case, build arguments can be specified, including an alternate `Dockerfile` location, which can be an
absolute or relative path. If the Dockerfile path is relative, it MUST be resolved from the context path. As an absolute
path prevents the Compose file from being portable, the Compose implementation SHOULD warn the user if an absolute
alternate Dockerfile path is used.
## Consistency with Image
When a service definition includes both an `image` attribute and a `build` section, the Compose implementation can't
guarantee that a pulled image is strictly equivalent to building the same image from source. Without any explicit
user directives, a Compose implementation with build support MUST first try to pull the image, then build from source
if the image was not found on the registry. The Compose implementation MAY offer options to customize this behaviour by user
request.
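For illustration, a non-normative sketch of a service declaring both attributes (the names are only examples):
```yml
services:
  webapp:
    image: awesome/webapp   # pulled first, if available on the registry
    build: ./webapp         # built from source if the image is not found
```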
## Publishing built images
A Compose implementation with build support SHOULD offer an option to push built images to a registry. In doing so, it
MUST NOT try to push service images without an `image` attribute. The Compose implementation SHOULD warn the user about a
missing `image` attribute, which prevents the image from being pushed.
A Compose implementation MAY offer a mechanism to compute an `image` attribute for a service when it is not explicitly
declared in the YAML file. In such a case, the resulting Compose configuration is considered to have a valid `image`
attribute, even though the actual raw YAML file doesn't explicitly declare one.
## Illustrative sample
The following example illustrates Compose Build specification concepts with a concrete application. The example is non-normative.
```yaml
services:
frontend:
image: awesome/webapp
build: ./webapp
backend:
image: awesome/database
build:
context: backend
dockerfile: ../backend.Dockerfile
custom:
build: ~/custom
```
When used to build service images from source, such a Compose file creates three Docker images:
* The `awesome/webapp` image is built using the `webapp` sub-directory within the Compose file's parent folder as the Docker build context. Lack of a `Dockerfile` within this folder throws an error.
* The `awesome/database` image is built using the `backend` sub-directory within the Compose file's parent folder. The `backend.Dockerfile` file is used to define the build steps; this file is searched for relative to the context path, which means that for this sample `..` resolves to the Compose file's parent folder, so `backend.Dockerfile` is a sibling file.
* A Docker image is built using the `custom` directory within the user's HOME as the Docker context. The Compose implementation warns the user about the non-portable path used to build the image.
On push, both the `awesome/webapp` and `awesome/database` images are pushed to the (default) registry. The `custom` service image is skipped, as no `image` attribute is set, and the user is warned about this missing attribute.
## Build definition
The `build` element defines configuration options that are applied by Compose implementations to build a Docker image from source.
`build` can be specified either as a string containing a path to the build context, or as a detailed structure:
```yml
services:
webapp:
build: ./dir
```
Using this string syntax, only the build context can be configured as a relative path to the Compose file's parent folder.
This path MUST be a directory and contain a `Dockerfile`.
Alternatively, `build` can be an object with fields defined as follows:
### context (REQUIRED)
`context` defines either a path to a directory containing a Dockerfile, or a URL to a Git repository.
When the value supplied is a relative path, it MUST be interpreted as relative to the location of the Compose file.
Compose implementations MUST warn the user when an absolute path is used to define the build context, as it prevents the Compose file
from being portable.
```yml
build:
context: ./dir
```
See [Build context](../../build/building/context.md) page for more information.
### dockerfile
`dockerfile` sets an alternate Dockerfile. A relative path MUST be resolved from the build context.
Compose implementations MUST warn the user when an absolute path is used to define the Dockerfile, as it prevents the Compose file
from being portable.
```yml
build:
context: .
dockerfile: webapp.Dockerfile
```
### args
`args` defines build arguments, i.e. Dockerfile `ARG` values.
Using the following Dockerfile:
```Dockerfile
ARG GIT_COMMIT
RUN echo "Based on commit: $GIT_COMMIT"
```
`args` can be set in the Compose file under the `build` key to define `GIT_COMMIT`. `args` can be set as a mapping or a list:
```yml
build:
context: .
args:
GIT_COMMIT: cdc3b19
```
```yml
build:
context: .
args:
- GIT_COMMIT=cdc3b19
```
The value can be omitted when specifying a build argument, in which case its value at build time MUST be obtained by user interaction;
otherwise, the build argument won't be set when building the Docker image.
```yml
args:
- GIT_COMMIT
```
### ssh
`ssh` defines SSH authentications that the image builder SHOULD use during the image build (e.g., cloning a private repository).
The `ssh` property syntax can be either:
* `default` - let the builder connect to the ssh-agent.
* `ID=path` - a key/value definition of an ID and the associated path. The path can be either a [PEM](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail) file, or a path to an ssh-agent socket.
A simple `default` sample:
```yaml
build:
context: .
ssh:
- default # mount the default ssh agent
```
or
```yaml
build:
context: .
ssh: ["default"] # mount the default ssh agent
```
Using a custom ID `myproject` with a path to a local SSH key:
```yaml
build:
context: .
ssh:
- myproject=~/.ssh/myproject.pem
```
The image builder can then rely on this to mount the SSH key during the build.
For illustration, [BuildKit extended syntax](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#run---mounttypessh) can be used to mount the SSH key set by ID and access a secured resource:
`RUN --mount=type=ssh,id=myproject git clone ...`
### cache_from
`cache_from` defines a list of sources the image builder SHOULD use for cache resolution.
The cache location syntax MUST follow the global format `[NAME|type=TYPE[,KEY=VALUE]]`. A simple `NAME` is in fact a shortcut notation for `type=registry,ref=NAME`.
Compose Builder implementations MAY support custom types; the Compose Specification defines canonical types which MUST be supported:
- `registry` to retrieve build cache from an OCI image set by key `ref`
```yml
build:
context: .
cache_from:
- alpine:latest
- type=local,src=path/to/cache
- type=gha
```
Unsupported caches MUST be ignored and not prevent the user from building the image.
### cache_to
`cache_to` defines a list of export locations to be used to share build cache with future builds.
```yml
build:
context: .
cache_to:
- user/app:cache
- type=local,dest=path/to/cache
```
The cache target is defined using the same `type=TYPE[,KEY=VALUE]` syntax defined by [`cache_from`](#cache_from).
Unsupported cache targets MUST be ignored and not prevent the user from building the image.
### extra_hosts
`extra_hosts` adds hostname mappings at build-time. Use the same syntax as [extra_hosts](05-services.md#extra_hosts).
```yml
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
```
Compose implementations MUST create a matching entry with the IP address and hostname in the container's network
configuration, which means that for Linux `/etc/hosts` gets extra lines:
```
162.242.195.82 somehost
50.31.209.229 otherhost
```
### isolation
`isolation` specifies a build's container isolation technology. As with [isolation](05-services.md#isolation), supported values
are platform-specific.
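For illustration, a non-normative sketch (supported values depend on the platform; `default` is only an example):
```yml
build:
  context: .
  isolation: default
```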
### labels
`labels` adds metadata to the resulting image. `labels` can be set either as an array or a map.
Reverse-DNS notation SHOULD be used to prevent labels from conflicting with those used by other software.
```yml
build:
context: .
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
```
```yml
build:
context: .
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
```
### no_cache
`no_cache` disables the image builder cache and enforces a full rebuild from source for all image layers. This only
applies to layers declared in the Dockerfile; referenced images may still be retrieved from the local image store even if the tag
has been updated on the registry (see [pull](#pull)).
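For example, a minimal non-normative sketch forcing a full rebuild:
```yml
build:
  context: .
  no_cache: true
```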
### pull
`pull` requires the image builder to pull referenced images (`FROM` Dockerfile directive), even if those are already
available in the local image store.
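For example, a minimal non-normative sketch forcing referenced images to be re-pulled:
```yml
build:
  context: .
  pull: true
```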
### shm_size
`shm_size` sets the size of the shared memory (the `/dev/shm` partition on Linux) allocated for building the Docker image. Specify
it as an integer value representing the number of bytes or as a string expressing a [byte value](11-extension.md#specifying-byte-values).
```yml
build:
context: .
shm_size: '2gb'
```
```yaml
build:
context: .
shm_size: 10000000
```
### target
`target` defines the stage to build as defined inside a multi-stage `Dockerfile`.
```yml
build:
context: .
target: prod
```
### secrets
`secrets` grants access to sensitive data defined by [secrets](05-services.md#secrets) on a per-service build basis. Two
different syntax variants are supported: the short syntax and the long syntax.
Compose implementations MUST report an error if the secret isn't defined in the
[`secrets`](09-secrets.md) section of the Compose file.
#### Short syntax
The short syntax variant only specifies the secret name. This grants the
container access to the secret and mounts it as read-only to `/run/secrets/<secret_name>`
within the container. The source name and destination mountpoint are both set
to the secret name.
The following example uses the short syntax to grant the build of the `frontend` service
access to the `server-certificate` secret. The value of `server-certificate` is set
to the contents of the file `./server.cert`.
```yml
services:
frontend:
build:
context: .
secrets:
- server-certificate
secrets:
server-certificate:
file: ./server.cert
```
#### Long syntax
The long syntax provides more granularity in how the secret is created within
the service's containers.
- `source`: The name of the secret as it exists on the platform.
- `target`: The name of the file to be mounted in `/run/secrets/` in the
service's task containers. Defaults to `source` if not specified.
- `uid` and `gid`: The numeric UID or GID that owns the file within
`/run/secrets/` in the service's task containers. The default value is the USER running the container.
- `mode`: The [permissions](https://chmod-calculator.com/) for the file to be mounted in `/run/secrets/`
in the service's task containers, in octal notation.
Default value is world-readable permissions (mode `0444`).
The writable bit MUST be ignored if set. The executable bit MAY be set.
The following example sets the name of the `server-certificate` secret file to `server.cert`
within the container, sets the mode to `0440` (group-readable), and sets the user and group
to `103`. The value of the `server-certificate` secret is provided by the platform through a lookup and
the secret lifecycle is not directly managed by the Compose implementation.
```yml
services:
frontend:
build:
context: .
secrets:
- source: server-certificate
target: server.cert
uid: "103"
gid: "103"
mode: 0440
secrets:
server-certificate:
external: true
```
Service builds MAY be granted access to multiple secrets. Long and short syntax for secrets MAY be used in the
same Compose file. Defining a secret in the top-level `secrets` MUST NOT imply granting any service build access to it.
Such a grant must be explicit within the service specification as a [secrets](05-services.md#secrets) service element.
### tags
`tags` defines a list of tag mappings that MUST be associated with the built image. This list comes in addition to
the `image` [property defined in the service section](05-services.md#image).
```yml
tags:
- "myimage:mytag"
- "registry/username/myrepos:my-other-tag"
```
### platforms
`platforms` defines a list of target [platforms](05-services.md#platform).
```yml
build:
context: "."
platforms:
- "linux/amd64"
- "linux/arm64"
```
When the `platforms` attribute is omitted, Compose implementations MUST include the service's platform
in the list of the default build target platforms.
Compose implementations SHOULD report an error in the following cases:
* when the list contains multiple platforms but the implementation is incapable of storing multi-platform images
* when the list contains an unsupported platform
```yml
build:
context: "."
platforms:
- "linux/amd64"
- "unsupported/unsupported"
```
* when the list is non-empty and does not contain the service's platform
```yml
services:
frontend:
platform: "linux/amd64"
build:
context: "."
platforms:
- "linux/arm64"
```
## Implementations
* [docker-compose](../../compose/index.md)
* [buildx bake](../../build/bake/index.md)


@ -1,298 +1,7 @@
---
description: Compose file deploy reference
keywords: fig, composition, compose, docker
title: Compose file deploy reference
toc_max: 4
toc_min: 2
fetch_remote:
line_start: 8
line_end: -1
---
The Compose specification is a platform-neutral way to define multi-container applications. A Compose implementation supporting
deployment of the application model MAY require some additional metadata, as the Compose application model is too abstract
to reflect actual per-service infrastructure needs or lifecycle constraints.
The Compose Specification Deployment section allows users to declare additional metadata on services so Compose implementations get
relevant data to allocate adequate resources on the platform and configure them to match the user's needs.
## Definitions
The Compose Specification is extended to support an OPTIONAL `deploy` subsection on services. This section defines runtime requirements
for a service.
### endpoint_mode
`endpoint_mode` specifies a service discovery method for external clients connecting to a service. Default and available values
are platform-specific; however, the Compose specification defines two canonical values:
* `endpoint_mode: vip`: Assigns the service a virtual IP (VIP) that acts as the front end for clients to reach the service
on a network. The platform routes requests between the client and the nodes running the service, without the client knowing how
many nodes are participating in the service, or their IP addresses or ports.
* `endpoint_mode: dnsrr`: The platform sets up DNS entries for the service such that a DNS query for the service name returns a
list of IP addresses (DNS round-robin), and the client connects directly to one of these.
```yml
services:
frontend:
image: awesome/webapp
ports:
- "8080:80"
deploy:
mode: replicated
replicas: 2
endpoint_mode: vip
```
### labels
`labels` specifies metadata for the service. These labels MUST *only* be set on the service and *not* on any containers for the service.
This assumes the platform has some native concept of "service" that can match the Compose application model.
```yml
services:
frontend:
image: awesome/webapp
deploy:
labels:
com.example.description: "This label will appear on the web service"
```
### mode
`mode` defines the replication model used to run the service on the platform: either `global` (exactly one container per physical node) or `replicated` (a specified number of containers). The default is `replicated`.
```yml
services:
frontend:
image: awesome/webapp
deploy:
mode: global
```
### placement
`placement` specifies constraints and preferences for the platform to select a physical node to run service containers.
#### constraints
`constraints` defines a REQUIRED property the platform's node MUST fulfill to run the service container. It can be set either
as a list or a map with string values.
```yml
deploy:
placement:
constraints:
- disktype=ssd
```
```yml
deploy:
placement:
constraints:
disktype: ssd
```
#### preferences
`preferences` defines a property the platform's node SHOULD fulfill to run the service container. It can be set either
as a list or a map with string values.
```yml
deploy:
placement:
preferences:
- datacenter=us-east
```
```yml
deploy:
placement:
preferences:
datacenter: us-east
```
### replicas
If the service is `replicated` (which is the default), `replicas` specifies the number of containers that SHOULD be
running at any given time.
```yml
services:
frontend:
image: awesome/webapp
deploy:
mode: replicated
replicas: 6
```
### resources
`resources` configures physical resource constraints for the container to run on the platform. These constraints can be configured
as:
- `limits`: The platform MUST prevent the container from allocating more than the configured amount.
- `reservations`: The platform MUST guarantee the container can allocate at least the configured amount.
```yml
services:
frontend:
image: awesome/webapp
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
pids: 1
reservations:
cpus: '0.25'
memory: 20M
```
#### cpus
`cpus` configures a limit or reservation for how much of the available CPU resources (as number of cores) a container can use.
#### memory
`memory` configures a limit or reservation on the amount of memory a container
can allocate, set as a string expressing a
[byte value](11-extension.md#specifying-byte-values).
#### pids
`pids` tunes a container's PIDs limit, set as an integer.
#### devices
`devices` configures reservations of the devices a container can use. It contains a list of reservations, each set as an object with the following parameters: `capabilities`, `driver`, `count`, `device_ids` and `options`.
Devices are reserved using a list of capabilities, making `capabilities` the only required field. A device MUST satisfy all the requested capabilities for a successful reservation.
##### capabilities
`capabilities` are set as a list of strings, expressing both generic and driver specific capabilities.
The following generic capabilities are recognized today:
- `gpu`: Graphics accelerator
- `tpu`: AI accelerator
To avoid name clashes, driver-specific capabilities MUST be prefixed with the driver name.
For example, reserving an NVIDIA CUDA-enabled accelerator might look like this:
```yml
deploy:
resources:
reservations:
devices:
- capabilities: ["nvidia-compute"]
```
##### driver
A different driver for the reserved device(s) can be requested using the `driver` field. The value is specified as a string.
```yml
deploy:
resources:
reservations:
devices:
- capabilities: ["nvidia-compute"]
driver: nvidia
```
##### count
If `count` is set to `all` or not specified, Compose implementations MUST reserve all devices that satisfy the requested capabilities. Otherwise, Compose implementations MUST reserve at least the number of devices specified. The value is specified as an integer.
```yml
deploy:
resources:
reservations:
devices:
- capabilities: ["tpu"]
count: 2
```
`count` and `device_ids` fields are exclusive. Compose implementations MUST return an error if both are specified.
##### device_ids
If `device_ids` is set, Compose implementations MUST reserve devices with the specified IDs, provided they satisfy the requested capabilities. The value is specified as a list of strings.
```yml
deploy:
resources:
reservations:
devices:
- capabilities: ["gpu"]
device_ids: ["GPU-f123d1c9-26bb-df9b-1c23-4a731f61d8c7"]
```
`count` and `device_ids` fields are exclusive. Compose implementations MUST return an error if both are specified.
##### options
Driver specific options can be set with `options` as key-value pairs.
```yml
deploy:
resources:
reservations:
devices:
- capabilities: ["gpu"]
driver: gpuvendor
options:
virtualization: false
```
### restart_policy
`restart_policy` configures if and how to restart containers when they exit. If `restart_policy` is not set, Compose implementations MUST consider the `restart` field set by the service configuration.
- `condition`: One of `none`, `on-failure` or `any` (default: `any`).
- `delay`: How long to wait between restart attempts, specified as a [duration](11-extension.md#specifying-durations) (default: 0).
- `max_attempts`: How many times to attempt to restart a container before giving up (default: never give up). If the restart does not
succeed within the configured `window`, this attempt doesn't count toward the configured `max_attempts` value.
For example, if `max_attempts` is set to '2', and the restart fails on the first attempt, more than two restarts MUST be attempted.
- `window`: How long to wait before deciding if a restart has succeeded, specified as a [duration](11-extension.md#specifying-durations) (default:
decide immediately).
```yml
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
```
### rollback_config
`rollback_config` configures how the service should be rolled back in case of a failing update.
- `parallelism`: The number of containers to rollback at a time. If set to 0, all containers rollback simultaneously.
- `delay`: The time to wait between each container group's rollback (default 0s).
- `failure_action`: What to do if a rollback fails. One of `continue` or `pause` (default `pause`)
- `monitor`: Duration after each task update to monitor for failure `(ns|us|ms|s|m|h)` (default 0s).
- `max_failure_ratio`: Failure rate to tolerate during a rollback (default 0).
- `order`: Order of operations during rollbacks. One of `stop-first` (old task is stopped before starting new one),
or `start-first` (new task is started first, and the running tasks briefly overlap) (default `stop-first`).
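An illustrative, non-normative sketch using the fields above (the values are only examples):
```yml
deploy:
  rollback_config:
    parallelism: 2
    delay: 10s
    failure_action: pause
    monitor: 5s
    max_failure_ratio: 0
    order: stop-first
```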
### update_config
`update_config` configures how the service should be updated. Useful for configuring rolling updates.
- `parallelism`: The number of containers to update at a time.
- `delay`: The time to wait between updating a group of containers.
- `failure_action`: What to do if an update fails. One of `continue`, `rollback`, or `pause` (default: `pause`).
- `monitor`: Duration after each task update to monitor for failure `(ns|us|ms|s|m|h)` (default 0s).
- `max_failure_ratio`: Failure rate to tolerate during an update.
- `order`: Order of operations during updates. One of `stop-first` (old task is stopped before starting new one),
or `start-first` (new task is started first, and the running tasks briefly overlap) (default `stop-first`).
```yml
deploy:
update_config:
parallelism: 2
delay: 10s
order: stop-first
```


@ -35,31 +35,17 @@ redirect_from:
To run the Quick Start Guide on demand, select the Docker menu ![whale menu](images/whale-x.svg){: .inline} and then choose **Quick Start Guide**.
For a more detailed guide, see [Get started](../get-started/index.md).
For a more detailed guide, see [Get started](../get-started/index.md), or the [Docker Desktop hands-on guides](../get-started/hands-on-overview.md).
## Sign in to Docker Desktop
We recommend that you authenticate using the **Sign in/Create ID** option in the top-right corner of Docker Desktop.
We recommend that you authenticate using the **Sign in** option in the top-right corner of the Docker Dashboard.
Once logged in, you can access your Docker Hub repositories directly from Docker Desktop.
Authenticated users get a higher pull rate limit compared to anonymous users. For example, if you are authenticated, you get 200 pulls per 6 hour period, compared to 100 pulls per 6 hour period per IP address for anonymous users. For more information, see [Download rate limit](../docker-hub/download-rate-limit.md).
In large enterprises where admin access is restricted, administrators can create a registry.json file and deploy it to developers' machines using device management software as part of the Docker Desktop installation process. Requiring developers to authenticate through Docker Desktop also allows administrators to set up guardrails using features such as [Image Access Management](../docker-hub/image-access-management.md), which allows team members to access only Trusted Content on Docker Hub and pull only from the specified categories of images. For more information, see [Configure registry.json to enforce sign-in](../docker-hub/configure-sign-in.md).
### Two-factor authentication
Docker Desktop lets you sign in to Docker Hub using two-factor authentication. Two-factor authentication provides an extra layer of security when accessing your Docker Hub account.
You must turn on two-factor authentication in Docker Hub before signing into your Docker Hub account through Docker Desktop. For instructions, see [Enable two-factor authentication for Docker Hub](/docker-hub/2fa/).
After two-factor authentication is turned on:
1. Go to the Docker Desktop menu and then select **Sign in / Create Docker ID**.
2. Enter your Docker ID and password and select **Sign in**.
3. After you have successfully signed in, Docker Desktop prompts you to enter the authentication code. Enter the six-digit code from your phone and then select **Verify**.
In large enterprises where admin access is restricted, administrators can [Configure registry.json to enforce sign-in](../docker-hub/configure-sign-in.md). Requiring developers to authenticate through Docker Desktop also allows administrators to improve their organization's security posture for containerized development by taking advantage of [Hardened Desktop](hardened-desktop/index.md).
### Credentials management for Linux users


@ -17,25 +17,6 @@ This page contains information about general system requirements, supported plat
>For more information see [What is the difference between Docker Desktop for Linux and Docker Engine](../faqs/linuxfaqs.md#what-is-the-difference-between-docker-desktop-for-linux-and-docker-engine).
{: .important}
## System requirements
To install Docker Desktop successfully, your Linux host must meet the following general requirements:
- 64-bit kernel and CPU support for virtualization.
- KVM virtualization support. Follow the [KVM virtualization support instructions](#kvm-virtualization-support) to check if the KVM kernel modules are enabled and how to provide access to the kvm device.
- **QEMU must be version 5.2 or newer**. We recommend upgrading to the latest version.
- systemd init system.
- Gnome, KDE, or MATE Desktop environment.
- For many Linux distros, the Gnome environment does not support tray icons. To add support for tray icons, you need to install a Gnome extension. For example, [AppIndicator](https://extensions.gnome.org/extension/615/appindicator-support/){:target="_blank" rel="noopener" class="_"}.
- At least 4 GB of RAM.
- Enable configuring ID mapping in user namespaces, see [File sharing](../faqs/linuxfaqs.md#how-do-i-enable-file-sharing).
Docker Desktop for Linux runs a Virtual Machine (VM). For more information on why, see [Why Docker Desktop for Linux runs a VM](../faqs/linuxfaqs.md#why-does-docker-desktop-for-linux-run-a-vm).
> **Note:**
>
> Docker does not provide support for running Docker Desktop in nested virtualization scenarios. We recommend that you run Docker Desktop for Linux natively on supported distributions.
## Supported platforms
Docker provides `.deb` and `.rpm` packages from the following Linux distributions:
@ -56,6 +37,24 @@ An experimental package is available for [Arch](archlinux.md)-based distribution
Docker supports Docker Desktop on the current LTS release of the aforementioned distributions and the most recent version. As new versions are made available, Docker stops supporting the oldest version and supports the newest version.
## System requirements
To install Docker Desktop successfully, your Linux host must meet the following general requirements:
- 64-bit kernel and CPU support for virtualization.
- KVM virtualization support. Follow the [KVM virtualization support instructions](#kvm-virtualization-support) to check if the KVM kernel modules are enabled and how to provide access to the kvm device.
- **QEMU must be version 5.2 or newer**. We recommend upgrading to the latest version.
- systemd init system.
- Gnome, KDE, or MATE Desktop environment.
- For many Linux distros, the Gnome environment does not support tray icons. To add support for tray icons, you need to install a Gnome extension. For example, [AppIndicator](https://extensions.gnome.org/extension/615/appindicator-support/){:target="_blank" rel="noopener" class="_"}.
- At least 4 GB of RAM.
- Enable configuring ID mapping in user namespaces, see [File sharing](../faqs/linuxfaqs.md#how-do-i-enable-file-sharing).
Docker Desktop for Linux runs a Virtual Machine (VM). For more information on why, see [Why Docker Desktop for Linux runs a VM](../faqs/linuxfaqs.md#why-does-docker-desktop-for-linux-run-a-vm).
> **Note:**
>
> Docker does not provide support for running Docker Desktop in nested virtualization scenarios. We recommend that you run Docker Desktop for Linux natively on supported distributions.
### KVM virtualization support
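As a quick, illustrative sketch of how you might verify KVM availability on the host (output varies by distribution and CPU vendor):
```console
$ lsmod | grep kvm    # kvm plus kvm_intel or kvm_amd should be listed
$ ls -al /dev/kvm     # check that your user can access the kvm device
```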


@ -31,8 +31,8 @@ When you run a container with the `-p` argument, for example:
$ docker run -p 80:80 -d nginx
```
Docker Desktop makes whatever is running on port 80 in the container (in
this case, `nginx`) available on port 80 of `localhost`. In this example, the
Docker Desktop makes whatever is running on port 80 in the container, in
this case, `nginx`, available on port 80 of `localhost`. In this example, the
host and container ports are the same. If, for example, you already have something running on port 80 of
your host machine, you can connect the container to a different port:
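For instance, a minimal sketch mapping host port 8000 to container port 80 (the port number is illustrative):
```console
$ docker run -p 8000:80 -d nginx
```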
@ -58,11 +58,15 @@ Docker Desktop on Mac and Linux allows you to use the host's SSH agent inside
1. Bind mount the SSH agent socket by adding the following parameter to your `docker run` command:
`--mount type=bind,src=/run/host-services/ssh-auth.sock,target=/run/host-services/ssh-auth.sock`
```console
$ --mount type=bind,src=/run/host-services/ssh-auth.sock,target=/run/host-services/ssh-auth.sock
```
2. Add the `SSH_AUTH_SOCK` environment variable in your container:
`-e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock"`
```console
$ -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock"
```
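Put together, a complete `docker run` invocation might look like this sketch (`my-ssh-image` is a hypothetical image that needs SSH access):
```console
$ docker run --rm \
    --mount type=bind,src=/run/host-services/ssh-auth.sock,target=/run/host-services/ssh-auth.sock \
    -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" \
    my-ssh-image
```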
To enable the SSH agent in Docker Compose, add the following flags to your service:
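A minimal sketch of what such a service definition could look like (the service name and image are illustrative):
```yaml
services:
  web:
    image: nginx:alpine
    volumes:
      - type: bind
        source: /run/host-services/ssh-auth.sock
        target: /run/host-services/ssh-auth.sock
    environment:
      - SSH_AUTH_SOCK=/run/host-services/ssh-auth.sock
```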
@ -104,7 +108,7 @@ However if you are a Windows user, it works with Windows containers.
### I want to connect from a container to a service on the host
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name
The host has a changing IP address, or none if you have no network access. We recommend that you connect to the special DNS name
`host.docker.internal`, which resolves to the internal IP address used by the
host. This is for development purposes and does not work in a production environment outside of Docker Desktop.
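For example, a sketch of reaching a service that listens on host port 8000 from inside a container (the port is illustrative):
```console
$ curl http://host.docker.internal:8000
```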
@ -129,7 +133,7 @@ If you have installed Python on your machine, use the following instructions as
#### I want to connect to a container from the host
Port forwarding works for `localhost`; `--publish`, `-p`, or `-P` all work.
Port forwarding works for `localhost`. `--publish`, `-p`, or `-P` all work.
Ports exposed from Linux are forwarded to the host.
Our current recommendation is to publish a port, or to connect from another