From f3c4a2513315850252a3c3e2c427d43439fd3a2c Mon Sep 17 00:00:00 2001 From: CrazyMax Date: Wed, 25 May 2022 07:54:27 +0200 Subject: [PATCH] fix files with CRLF control characters Signed-off-by: CrazyMax --- cloud/aci-integration.md | 942 ++++++++--------- cloud/ecs-integration.md | 1248 +++++++++++------------ engine/context/working-with-contexts.md | 544 +++++----- engine/scan/index.md | 940 ++++++++--------- images/logo-docker-main2.svg | 218 ++-- images/resources_file_icon.svg | 26 +- images/resources_lap_icon.svg | 32 +- 7 files changed, 1975 insertions(+), 1975 deletions(-) diff --git a/cloud/aci-integration.md b/cloud/aci-integration.md index 056daf112c..7738d2d84e 100644 --- a/cloud/aci-integration.md +++ b/cloud/aci-integration.md @@ -1,471 +1,471 @@ ---- -title: Deploying Docker containers on Azure -description: Deploying Docker containers on Azure -keywords: Docker, Azure, Integration, ACI, context, Compose, cli, deploy, containers, cloud -redirect_from: - - /engine/context/aci-integration/ -toc_min: 1 -toc_max: 2 ---- - -## Overview - -The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure allowing developers to quickly run applications using the Docker CLI or VS Code extension, to switch seamlessly from local development to cloud deployment. - -In addition, the integration between Docker and Microsoft developer technologies allow developers to use the Docker CLI to: - -- Easily log into Azure -- Set up an ACI context in one Docker command allowing you to switch from a local context to a cloud context and run applications quickly and easily -- Simplify single container and multi-container application development using the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service - -Also see the [full list of container features supported by ACI](aci-container-features.md) and [full list of compose features supported by ACI](aci-compose-features.md). - -## Prerequisites - -To deploy Docker containers on Azure, you must meet the following requirements: - -1. Download and install the latest version of Docker Desktop. - - - [Download for Mac](../desktop/mac/install.md) - - [Download for Windows](../desktop/windows/install.md) - - Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux). - -2. Ensure you have an Azure subscription. You can get started with an [Azure free account](https://aka.ms/AA8r2pj){: target="_blank" rel="noopener" class="_"}. - -## Run Docker containers on ACI - -Docker not only runs containers locally, but also enables developers to seamlessly deploy Docker containers on ACI using `docker run` or deploy multi-container applications defined in a Compose file using the `docker compose up` command. - -The following sections contain instructions on how to deploy your Docker containers on ACI. -Also see the [full list of container features supported by ACI](aci-container-features.md). - -### Log into Azure - -Run the following commands to log into Azure: - -```console -$ docker login azure -``` - -This opens your web browser and prompts you to enter your Azure login credentials. 
-If the Docker CLI cannot open a browser, it will fall back to the [Azure device code flow](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code){:target="_blank" rel="noopener" class="_"} and lets you connect manually. -Note that the [Azure command line](https://docs.microsoft.com/en-us/cli/azure/){:target="_blank" rel="noopener" class="_"} login is separated from the Docker CLI Azure login. - -Alternatively, you can log in without interaction (typically in -scripts or continuous integration scenarios), using an Azure Service -Principal, with `docker login azure --client-id xx --client-secret yy --tenant-id zz` - ->**Note** -> -> Logging in through the Azure Service Provider obtains an access token valid -for a short period (typically 1h), but it does not allow you to automatically -and transparently refresh this token. You must manually re-login -when the access token has expired when logging in with a Service Provider. - -You can also use the `--tenant-id` option alone to specify a tenant, if -you have several ones available in Azure. - -### Create an ACI context - -After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI. -Creating an ACI context requires an Azure subscription, a [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal), and a region. -For example, let us create a new context called `myacicontext`: - -```console -$ docker context create aci myacicontext -``` - -This command automatically uses your Azure login credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: `--subscription-id`, -`--resource-group`, and `--location`. - -If you don't have any existing resource groups in your Azure account, the `docker context create aci myacicontext` command creates one for you. You don’t have to specify any additional options to do this. - -After you have created an ACI context, you can list your Docker contexts by running the `docker context ls` command: - -```console -NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR -myacicontext aci myResourceGroupGTA@eastus -default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm -``` - -### Run a container - -Now that you've logged in and created an ACI context, you can start using Docker commands to deploy containers on ACI. - -There are two ways to use your new ACI context. You can use the `--context` flag with the Docker command to specify that you would like to run the command using your newly created ACI context. - -```console -$ docker --context myacicontext run -p 80:80 nginx -``` - -Or, you can change context using `docker context use` to select the ACI context to be your focus for running Docker commands. For example, we can use the `docker context use` command to deploy an Nginx container: - -```console -$ docker context use myacicontext -$ docker run -p 80:80 nginx -``` - -After you've switched to the `myacicontext` context, you can use `docker ps` to list your containers running on ACI. - -In the case of the demonstration Nginx container started above, the result of the ps command will display in column "PORTS" the IP address and port on which the container is running. 
For example, it may show `52.154.202.35:80->80/tcp`, and you can view the Nginx welcome page by browsing `http://52.154.202.35`. - -To view logs from your container, run: - -```console -$ docker logs -``` - -To execute a command in a running container, run: - -```console -$ docker exec -t COMMAND -``` - -To stop and remove a container from ACI, run: - -```console -$ docker stop -$ docker rm -``` - -You can remove containers using `docker rm`. To remove a running container, you must use the `--force` flag, or stop the container using `docker stop` before removing it. - -> **Note** -> -> The semantics of restarting a container on ACI are different to those when using a local Docker context for local development. On ACI, the container will be reset to its initial state and started on a new node. This includes the container's filesystem so all state that is not stored in a volume will be lost on restart. - -## Running Compose applications - -You can also deploy and manage multi-container applications defined in Compose files to ACI using the `docker compose` command. -All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file. -Name resolution between containers is achieved by writing service names in the `/etc/hosts` file that is shared automatically by all containers in the container group. - -Also see the [full list of compose features supported by ACI](aci-compose-features.md). - -1. Ensure you are using your ACI context. You can do this either by specifying the `--context myacicontext` flag or by setting the default context using the command `docker context use myacicontext`. - -2. Run `docker compose up` and `docker compose down` to start and then stop a full Compose application. - - By default, `docker compose up` uses the `docker-compose.yaml` file in the current folder. You can specify the working directory using the --workdir flag or specify the Compose file directly using `docker compose --file mycomposefile.yaml up`. - - You can also specify a name for the Compose application using the `--project-name` flag during deployment. If no name is specified, a name will be derived from the working directory. - - Containers started as part of Compose applications will be displayed along with single containers when using `docker ps`. Their container ID will be of the format: `_`. - These containers cannot be stopped, started, or removed independently since they are all part of the same ACI container group. - You can view each container's logs with `docker logs`. You can list deployed Compose applications with `docker compose ls`. This will list only compose applications, not single containers started with `docker run`. You can remove a Compose application with `docker compose down`. - -> **Note** -> -> The current Docker Azure integration does not allow fetching a combined log stream from all the containers that make up the Compose application. - -## Updating applications - -From a deployed Compose application, you can update the application by re-deploying it with the same project name: `docker compose --project-name PROJECT up`. - -Updating an application means the ACI node will be reused, and the application will keep the same IP address that was previously allocated to expose ports, if any. 
ACI has some limitations on what can be updated in an existing application (you will not be able to change CPU/memory reservation for example), in these cases, you need to deploy a new application from scratch. - -Updating is the default behavior if you invoke `docker compose up` on an already deployed Compose file, as the Compose project name is derived from the directory where the Compose file is located by default. You need to explicitly execute `docker compose down` before running `docker compose up` again in order to totally reset a Compose application. - -## Releasing resources - -Single containers and Compose applications can be removed from ACI with -the `docker prune` command. The `docker prune` command removes deployments -that are not currently running. To remove running depoyments, you can specify -`--force`. The `--dry-run` option lists deployments that are planned for -removal, but it doesn't actually remove them. - -```console -$ ./bin/docker --context acicontext prune --dry-run --force -Resources that would be deleted: -my-application -Total CPUs reclaimed: 2.01, total memory reclaimed: 2.30 GB -``` - -## Exposing ports - -Single containers and Compose applications can optionally expose ports. -For single containers, this is done using the `--publish` (`-p`) flag of the `docker run` command : `docker run -p 80:80 nginx`. - -For Compose applications, you must specify exposed ports in the Compose file service definition: - -```yaml -services: - nginx: - image: nginx - ports: - - "80:80" -``` - - -> **Note** -> -> ACI does not allow port mapping (that is, changing port number while exposing port). Therefore, the source and target ports must be the same when deploying to ACI. -> -> All containers in the same Compose application are deployed in the same ACI container group. Different containers in the same Compose application cannot expose the same port when deployed to ACI. - -By default, when exposing ports for your application, a random public IP address is associated with the container group supporting the deployed application (single container or Compose application). -This IP address can be obtained when listing containers with `docker ps` or using `docker inspect`. - -### DNS label name - -In addition to exposing ports on a random IP address, you can specify a DNS label name to expose your application on an FQDN of the form: `.region.azurecontainer.io`. - -You can set this name with the `--domainname` flag when performing a `docker run`, or by using the `domainname` field in the Compose file when performing a `docker compose up`: - -```yaml -services: - nginx: - image: nginx - domainname: "myapp" - ports: - - "80:80" -``` - -> **Note** -> -> The domain of a Compose application can only be set once, if you specify the -> `domainname` for several services, the value must be identical. -> -> The FQDN `.region.azurecontainer.io` must be available. - -## Using Azure file share as volumes in ACI containers - -You can deploy containers or Compose applications that use persistent data -stored in volumes. Azure File Share can be used to support volumes for ACI -containers. - -Using an existing Azure File Share with storage account name `mystorageaccount` -and file share name `myfileshare`, you can specify a volume in your deployment `run` -command as follows: - -```console -$ docker run -v mystorageaccount/myfileshare:/target/path myimage -``` - -The runtime container will see the file share content in `/target/path`. 
- -In a Compose application, the volume specification must use the following syntax -in the Compose file: - -```yaml -myservice: - image: nginx - volumes: - - mydata:/mount/testvolumes - -volumes: - mydata: - driver: azure_file - driver_opts: - share_name: myfileshare - storage_account_name: mystorageaccount -``` - -> **Note** -> -> The volume short syntax in Compose files cannot be used as it is aimed at volume definition for local bind mounts. Using the volume driver and driver option syntax in Compose files makes the volume definition a lot more clear. - -In single or multi-container deployments, the Docker CLI will use your Azure login to fetch the key to the storage account, and provide this key with the container deployment information, so that the container can access the volume. -Volumes can be used from any file share in any storage account you have access to with your Azure login. You can specify `rw` (read/write) or `ro` (read only) when mounting the volume (`rw` is the default). - -### Managing Azure volumes - -To create a volume that you can use in containers or Compose applications when -using your ACI Docker context, you can use the `docker volume create` command, -and specify an Azure storage account name and the file share name: - -```console -$ docker --context aci volume create test-volume --storage-account mystorageaccount -[+] Running 2/2 - ⠿ mystorageaccount Created 26.2s - ⠿ test-volume Created 0.9s -mystorageaccount/test-volume -``` - -By default, if the storage account does not already exist, this command -creates a new storage account using the Standard LRS as a default SKU, and the -resource group and location associated with your Docker ACI context. - -If you specify an existing storage account, the command creates a new -file share in the existing account: - -```console -$ docker --context aci volume create test-volume2 --storage-account mystorageaccount -[+] Running 2/2 - ⠿ mystorageaccount Use existing 0.7s - ⠿ test-volume2 Created 0.7s -mystorageaccount/test-volume2 -``` - -Alternatively, you can create an Azure storage account or a file share using the Azure -portal, or the `az` [command line](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-cli). - -You can also list volumes that are available for use in containers or Compose applications: - -```console -$ docker --context aci volume ls -ID DESCRIPTION -mystorageaccount/test-volume Fileshare test-volume in mystorageaccount storage account -mystorageaccount/test-volume2 Fileshare test-volume2 in mystorageaccount storage account -``` - -To delete a volume and the corresponding Azure file share, use the `volume rm` command: - -```console -$ docker --context aci volume rm mystorageaccount/test-volume -mystorageaccount/test-volume -``` - -This permanently deletes the Azure file share and all its data. - -When deleting a volume in Azure, the command checks whether the specified file share -is the only file share available in the storage account. If the storage account is -created with the `docker volume create` command, `docker volume rm` also -deletes the storage account when it does not have any file shares. -If you are using a storage account created without the `docker volume create` command -(through Azure portal or with the `az` command line for example), `docker volume rm` -does not delete the storage account, even when it has zero remaining file shares. 
- -## Environment variables - -When using `docker run`, you can pass the environment variables to ACI containers using the `--env` flag. -For Compose applications, you can specify the environment variables in the Compose file with the `environment` or `env-file` service field, or with the `--environment` command line flag. - -## Health checks - -You can specify a container health checks using either the `--healthcheck-` prefixed flags with `docker run`, or in a Compose file with the `healthcheck` section of the service. - -Health checks are converted to ACI `LivenessProbe`s. ACI runs the health check command periodically, and if it fails, the container will be terminated. - -Health checks must be used in addition to restart policies to ensure the container is then restarted on termination. The default restart policy for `docker run` is `no` which will not restart the container. The default restart policy for Compose is `any` which will always try restarting the service containers. - -Example using `docker run`: - -```console -$ docker --context acicontext run -p 80:80 --restart always --health-cmd "curl http://localhost:80" --health-interval 3s nginx -``` - -Example using Compose files: - -```yaml -services: - web: - image: nginx - deploy: - restart_policy: - condition: on-failure - healthcheck: - test: ["CMD", "curl", "-f", "http://localhost:80"] - interval: 10s -``` - -## Private Docker Hub images and using the Azure Container Registry - -You can deploy private images to ACI that are hosted by any container registry. You need to log into the relevant registry using `docker login` before running `docker run` or `docker compose up`. The Docker CLI will fetch your registry login for the deployed images and send the credentials along with the image deployment information to ACI. -In the case of the Azure Container Registry, the command line will try to automatically log you into ACR from your Azure login. You don't need to manually login to the ACR registry first, if your Azure login has access to the ACR. - -## Using ACI resource groups as namespaces - -You can create several Docker contexts associated with ACI. Each context must be associated with a unique Azure resource group. This allows you to use Docker contexts as namespaces. You can switch between namespaces using `docker context use `. - -When you run the `docker ps` command, it only lists containers in your current Docker context. There won’t be any contention in container names or Compose application names between two Docker contexts. - -## Install the Docker Compose CLI on Linux - -The Docker Compose CLI adds support for running and managing containers on Azure Container Instances (ACI). - -### Install Prerequisites - -- [Docker 19.03 or later](../get-docker.md) - -### Install script - -You can install the new CLI using the install script: - -```console -$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh -``` - -### Manual install - -You can download the Docker ACI Integration CLI from the -[latest release](https://github.com/docker/compose-cli/releases/latest){: target="_blank" rel="noopener" class="_"} page. - -You will then need to make it executable: - -```console -$ chmod +x docker-aci -``` - -To enable using the local Docker Engine and to use existing Docker contexts, you -must have the existing Docker CLI as `com.docker.cli` somewhere in your -`PATH`. 
You can do this by creating a symbolic link from the existing Docker -CLI: - -```console -$ ln -s /path/to/existing/docker /directory/in/PATH/com.docker.cli -``` - -> **Note** -> -> The `PATH` environment variable is a colon-separated list of -> directories with priority from left to right. You can view it using -> `echo $PATH`. You can find the path to the existing Docker CLI using -> `which docker`. You may need root permissions to make this link. - -On a fresh install of Ubuntu 20.04 with Docker Engine -[already installed](../engine/install/ubuntu.md): - -```console -$ echo $PATH -/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin -$ which docker -/usr/bin/docker -$ sudo ln -s /usr/bin/docker /usr/local/bin/com.docker.cli -``` - -You can verify that this is working by checking that the new CLI works with the -default context: - -```console -$ ./docker-aci --context default ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -$ echo $? -0 -``` - -To make this CLI with ACI integration your default Docker CLI, you must move it -to a directory in your `PATH` with higher priority than the existing Docker CLI. - -Again, on a fresh Ubuntu 20.04: - -```console -$ which docker -/usr/bin/docker -$ echo $PATH -/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin -$ sudo mv docker-aci /usr/local/bin/docker -$ which docker -/usr/local/bin/docker -$ docker version -... - Azure integration 0.1.4 -... -``` - -### Supported commands - -After you have installed the Docker ACI Integration CLI, run `--help` to see the current list of commands. - -### Uninstall - -To remove the Docker Azure Integration CLI, you need to remove the binary you downloaded and `com.docker.cli` from your `PATH`. If you installed using the script, this can be done as follows: - -```console -$ sudo rm /usr/local/bin/docker /usr/local/bin/com.docker.cli -``` - -## Feedback - -Thank you for trying out Docker Azure Integration. Your feedback is very important to us. Let us know your feedback by creating an issue in the [compose-cli](https://github.com/docker/compose-cli){: target="_blank" rel="noopener" class="_"} GitHub repository. +--- +title: Deploying Docker containers on Azure +description: Deploying Docker containers on Azure +keywords: Docker, Azure, Integration, ACI, context, Compose, cli, deploy, containers, cloud +redirect_from: + - /engine/context/aci-integration/ +toc_min: 1 +toc_max: 2 +--- + +## Overview + +The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure allowing developers to quickly run applications using the Docker CLI or VS Code extension, to switch seamlessly from local development to cloud deployment. 
+ +In addition, the integration between Docker and Microsoft developer technologies allow developers to use the Docker CLI to: + +- Easily log into Azure +- Set up an ACI context in one Docker command allowing you to switch from a local context to a cloud context and run applications quickly and easily +- Simplify single container and multi-container application development using the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service + +Also see the [full list of container features supported by ACI](aci-container-features.md) and [full list of compose features supported by ACI](aci-compose-features.md). + +## Prerequisites + +To deploy Docker containers on Azure, you must meet the following requirements: + +1. Download and install the latest version of Docker Desktop. + + - [Download for Mac](../desktop/mac/install.md) + - [Download for Windows](../desktop/windows/install.md) + + Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux). + +2. Ensure you have an Azure subscription. You can get started with an [Azure free account](https://aka.ms/AA8r2pj){: target="_blank" rel="noopener" class="_"}. + +## Run Docker containers on ACI + +Docker not only runs containers locally, but also enables developers to seamlessly deploy Docker containers on ACI using `docker run` or deploy multi-container applications defined in a Compose file using the `docker compose up` command. + +The following sections contain instructions on how to deploy your Docker containers on ACI. +Also see the [full list of container features supported by ACI](aci-container-features.md). + +### Log into Azure + +Run the following commands to log into Azure: + +```console +$ docker login azure +``` + +This opens your web browser and prompts you to enter your Azure login credentials. +If the Docker CLI cannot open a browser, it will fall back to the [Azure device code flow](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code){:target="_blank" rel="noopener" class="_"} and lets you connect manually. +Note that the [Azure command line](https://docs.microsoft.com/en-us/cli/azure/){:target="_blank" rel="noopener" class="_"} login is separated from the Docker CLI Azure login. + +Alternatively, you can log in without interaction (typically in +scripts or continuous integration scenarios), using an Azure Service +Principal, with `docker login azure --client-id xx --client-secret yy --tenant-id zz` + +>**Note** +> +> Logging in through the Azure Service Provider obtains an access token valid +for a short period (typically 1h), but it does not allow you to automatically +and transparently refresh this token. You must manually re-login +when the access token has expired when logging in with a Service Provider. + +You can also use the `--tenant-id` option alone to specify a tenant, if +you have several ones available in Azure. + +### Create an ACI context + +After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI. +Creating an ACI context requires an Azure subscription, a [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal), and a region. 
+For example, let us create a new context called `myacicontext`: + +```console +$ docker context create aci myacicontext +``` + +This command automatically uses your Azure login credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: `--subscription-id`, +`--resource-group`, and `--location`. + +If you don't have any existing resource groups in your Azure account, the `docker context create aci myacicontext` command creates one for you. You don’t have to specify any additional options to do this. + +After you have created an ACI context, you can list your Docker contexts by running the `docker context ls` command: + +```console +NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR +myacicontext aci myResourceGroupGTA@eastus +default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm +``` + +### Run a container + +Now that you've logged in and created an ACI context, you can start using Docker commands to deploy containers on ACI. + +There are two ways to use your new ACI context. You can use the `--context` flag with the Docker command to specify that you would like to run the command using your newly created ACI context. + +```console +$ docker --context myacicontext run -p 80:80 nginx +``` + +Or, you can change context using `docker context use` to select the ACI context to be your focus for running Docker commands. For example, we can use the `docker context use` command to deploy an Nginx container: + +```console +$ docker context use myacicontext +$ docker run -p 80:80 nginx +``` + +After you've switched to the `myacicontext` context, you can use `docker ps` to list your containers running on ACI. + +In the case of the demonstration Nginx container started above, the result of the ps command will display in column "PORTS" the IP address and port on which the container is running. For example, it may show `52.154.202.35:80->80/tcp`, and you can view the Nginx welcome page by browsing `http://52.154.202.35`. + +To view logs from your container, run: + +```console +$ docker logs +``` + +To execute a command in a running container, run: + +```console +$ docker exec -t COMMAND +``` + +To stop and remove a container from ACI, run: + +```console +$ docker stop +$ docker rm +``` + +You can remove containers using `docker rm`. To remove a running container, you must use the `--force` flag, or stop the container using `docker stop` before removing it. + +> **Note** +> +> The semantics of restarting a container on ACI are different to those when using a local Docker context for local development. On ACI, the container will be reset to its initial state and started on a new node. This includes the container's filesystem so all state that is not stored in a volume will be lost on restart. + +## Running Compose applications + +You can also deploy and manage multi-container applications defined in Compose files to ACI using the `docker compose` command. +All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file. +Name resolution between containers is achieved by writing service names in the `/etc/hosts` file that is shared automatically by all containers in the container group. 
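+
+As an illustration, a minimal two-service Compose file might look like the sketch below. The `web` and `backend` service names and the `mycompany/backend` image are placeholders; because both containers are deployed in the same container group, `web` can reach `backend` simply by using the service name `backend` as a hostname:
+
+```yaml
+services:
+  web:
+    image: nginx
+    ports:
+      - "80:80"
+  backend:
+    image: mycompany/backend   # placeholder image; reachable from `web` as http://backend
+```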
+ +Also see the [full list of compose features supported by ACI](aci-compose-features.md). + +1. Ensure you are using your ACI context. You can do this either by specifying the `--context myacicontext` flag or by setting the default context using the command `docker context use myacicontext`. + +2. Run `docker compose up` and `docker compose down` to start and then stop a full Compose application. + + By default, `docker compose up` uses the `docker-compose.yaml` file in the current folder. You can specify the working directory using the --workdir flag or specify the Compose file directly using `docker compose --file mycomposefile.yaml up`. + + You can also specify a name for the Compose application using the `--project-name` flag during deployment. If no name is specified, a name will be derived from the working directory. + + Containers started as part of Compose applications will be displayed along with single containers when using `docker ps`. Their container ID will be of the format: `_`. + These containers cannot be stopped, started, or removed independently since they are all part of the same ACI container group. + You can view each container's logs with `docker logs`. You can list deployed Compose applications with `docker compose ls`. This will list only compose applications, not single containers started with `docker run`. You can remove a Compose application with `docker compose down`. + +> **Note** +> +> The current Docker Azure integration does not allow fetching a combined log stream from all the containers that make up the Compose application. + +## Updating applications + +From a deployed Compose application, you can update the application by re-deploying it with the same project name: `docker compose --project-name PROJECT up`. + +Updating an application means the ACI node will be reused, and the application will keep the same IP address that was previously allocated to expose ports, if any. ACI has some limitations on what can be updated in an existing application (you will not be able to change CPU/memory reservation for example), in these cases, you need to deploy a new application from scratch. + +Updating is the default behavior if you invoke `docker compose up` on an already deployed Compose file, as the Compose project name is derived from the directory where the Compose file is located by default. You need to explicitly execute `docker compose down` before running `docker compose up` again in order to totally reset a Compose application. + +## Releasing resources + +Single containers and Compose applications can be removed from ACI with +the `docker prune` command. The `docker prune` command removes deployments +that are not currently running. To remove running depoyments, you can specify +`--force`. The `--dry-run` option lists deployments that are planned for +removal, but it doesn't actually remove them. + +```console +$ ./bin/docker --context acicontext prune --dry-run --force +Resources that would be deleted: +my-application +Total CPUs reclaimed: 2.01, total memory reclaimed: 2.30 GB +``` + +## Exposing ports + +Single containers and Compose applications can optionally expose ports. +For single containers, this is done using the `--publish` (`-p`) flag of the `docker run` command : `docker run -p 80:80 nginx`. 
+ +For Compose applications, you must specify exposed ports in the Compose file service definition: + +```yaml +services: + nginx: + image: nginx + ports: + - "80:80" +``` + + +> **Note** +> +> ACI does not allow port mapping (that is, changing port number while exposing port). Therefore, the source and target ports must be the same when deploying to ACI. +> +> All containers in the same Compose application are deployed in the same ACI container group. Different containers in the same Compose application cannot expose the same port when deployed to ACI. + +By default, when exposing ports for your application, a random public IP address is associated with the container group supporting the deployed application (single container or Compose application). +This IP address can be obtained when listing containers with `docker ps` or using `docker inspect`. + +### DNS label name + +In addition to exposing ports on a random IP address, you can specify a DNS label name to expose your application on an FQDN of the form: `.region.azurecontainer.io`. + +You can set this name with the `--domainname` flag when performing a `docker run`, or by using the `domainname` field in the Compose file when performing a `docker compose up`: + +```yaml +services: + nginx: + image: nginx + domainname: "myapp" + ports: + - "80:80" +``` + +> **Note** +> +> The domain of a Compose application can only be set once, if you specify the +> `domainname` for several services, the value must be identical. +> +> The FQDN `.region.azurecontainer.io` must be available. + +## Using Azure file share as volumes in ACI containers + +You can deploy containers or Compose applications that use persistent data +stored in volumes. Azure File Share can be used to support volumes for ACI +containers. + +Using an existing Azure File Share with storage account name `mystorageaccount` +and file share name `myfileshare`, you can specify a volume in your deployment `run` +command as follows: + +```console +$ docker run -v mystorageaccount/myfileshare:/target/path myimage +``` + +The runtime container will see the file share content in `/target/path`. + +In a Compose application, the volume specification must use the following syntax +in the Compose file: + +```yaml +myservice: + image: nginx + volumes: + - mydata:/mount/testvolumes + +volumes: + mydata: + driver: azure_file + driver_opts: + share_name: myfileshare + storage_account_name: mystorageaccount +``` + +> **Note** +> +> The volume short syntax in Compose files cannot be used as it is aimed at volume definition for local bind mounts. Using the volume driver and driver option syntax in Compose files makes the volume definition a lot more clear. + +In single or multi-container deployments, the Docker CLI will use your Azure login to fetch the key to the storage account, and provide this key with the container deployment information, so that the container can access the volume. +Volumes can be used from any file share in any storage account you have access to with your Azure login. You can specify `rw` (read/write) or `ro` (read only) when mounting the volume (`rw` is the default). 
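+
+For example, to mount the same illustrative file share read-only, append the `ro` option to the volume specification (a sketch assuming the standard `source:target:ro` syntax of `docker run -v`):
+
+```console
+$ docker run -v mystorageaccount/myfileshare:/target/path:ro myimage
+```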
+ +### Managing Azure volumes + +To create a volume that you can use in containers or Compose applications when +using your ACI Docker context, you can use the `docker volume create` command, +and specify an Azure storage account name and the file share name: + +```console +$ docker --context aci volume create test-volume --storage-account mystorageaccount +[+] Running 2/2 + ⠿ mystorageaccount Created 26.2s + ⠿ test-volume Created 0.9s +mystorageaccount/test-volume +``` + +By default, if the storage account does not already exist, this command +creates a new storage account using the Standard LRS as a default SKU, and the +resource group and location associated with your Docker ACI context. + +If you specify an existing storage account, the command creates a new +file share in the existing account: + +```console +$ docker --context aci volume create test-volume2 --storage-account mystorageaccount +[+] Running 2/2 + ⠿ mystorageaccount Use existing 0.7s + ⠿ test-volume2 Created 0.7s +mystorageaccount/test-volume2 +``` + +Alternatively, you can create an Azure storage account or a file share using the Azure +portal, or the `az` [command line](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-cli). + +You can also list volumes that are available for use in containers or Compose applications: + +```console +$ docker --context aci volume ls +ID DESCRIPTION +mystorageaccount/test-volume Fileshare test-volume in mystorageaccount storage account +mystorageaccount/test-volume2 Fileshare test-volume2 in mystorageaccount storage account +``` + +To delete a volume and the corresponding Azure file share, use the `volume rm` command: + +```console +$ docker --context aci volume rm mystorageaccount/test-volume +mystorageaccount/test-volume +``` + +This permanently deletes the Azure file share and all its data. + +When deleting a volume in Azure, the command checks whether the specified file share +is the only file share available in the storage account. If the storage account is +created with the `docker volume create` command, `docker volume rm` also +deletes the storage account when it does not have any file shares. +If you are using a storage account created without the `docker volume create` command +(through Azure portal or with the `az` command line for example), `docker volume rm` +does not delete the storage account, even when it has zero remaining file shares. + +## Environment variables + +When using `docker run`, you can pass the environment variables to ACI containers using the `--env` flag. +For Compose applications, you can specify the environment variables in the Compose file with the `environment` or `env-file` service field, or with the `--environment` command line flag. + +## Health checks + +You can specify a container health checks using either the `--healthcheck-` prefixed flags with `docker run`, or in a Compose file with the `healthcheck` section of the service. + +Health checks are converted to ACI `LivenessProbe`s. ACI runs the health check command periodically, and if it fails, the container will be terminated. + +Health checks must be used in addition to restart policies to ensure the container is then restarted on termination. The default restart policy for `docker run` is `no` which will not restart the container. The default restart policy for Compose is `any` which will always try restarting the service containers. 
+ +Example using `docker run`: + +```console +$ docker --context acicontext run -p 80:80 --restart always --health-cmd "curl http://localhost:80" --health-interval 3s nginx +``` + +Example using Compose files: + +```yaml +services: + web: + image: nginx + deploy: + restart_policy: + condition: on-failure + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:80"] + interval: 10s +``` + +## Private Docker Hub images and using the Azure Container Registry + +You can deploy private images to ACI that are hosted by any container registry. You need to log into the relevant registry using `docker login` before running `docker run` or `docker compose up`. The Docker CLI will fetch your registry login for the deployed images and send the credentials along with the image deployment information to ACI. +In the case of the Azure Container Registry, the command line will try to automatically log you into ACR from your Azure login. You don't need to manually login to the ACR registry first, if your Azure login has access to the ACR. + +## Using ACI resource groups as namespaces + +You can create several Docker contexts associated with ACI. Each context must be associated with a unique Azure resource group. This allows you to use Docker contexts as namespaces. You can switch between namespaces using `docker context use `. + +When you run the `docker ps` command, it only lists containers in your current Docker context. There won’t be any contention in container names or Compose application names between two Docker contexts. + +## Install the Docker Compose CLI on Linux + +The Docker Compose CLI adds support for running and managing containers on Azure Container Instances (ACI). + +### Install Prerequisites + +- [Docker 19.03 or later](../get-docker.md) + +### Install script + +You can install the new CLI using the install script: + +```console +$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh +``` + +### Manual install + +You can download the Docker ACI Integration CLI from the +[latest release](https://github.com/docker/compose-cli/releases/latest){: target="_blank" rel="noopener" class="_"} page. + +You will then need to make it executable: + +```console +$ chmod +x docker-aci +``` + +To enable using the local Docker Engine and to use existing Docker contexts, you +must have the existing Docker CLI as `com.docker.cli` somewhere in your +`PATH`. You can do this by creating a symbolic link from the existing Docker +CLI: + +```console +$ ln -s /path/to/existing/docker /directory/in/PATH/com.docker.cli +``` + +> **Note** +> +> The `PATH` environment variable is a colon-separated list of +> directories with priority from left to right. You can view it using +> `echo $PATH`. You can find the path to the existing Docker CLI using +> `which docker`. You may need root permissions to make this link. + +On a fresh install of Ubuntu 20.04 with Docker Engine +[already installed](../engine/install/ubuntu.md): + +```console +$ echo $PATH +/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin +$ which docker +/usr/bin/docker +$ sudo ln -s /usr/bin/docker /usr/local/bin/com.docker.cli +``` + +You can verify that this is working by checking that the new CLI works with the +default context: + +```console +$ ./docker-aci --context default ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +$ echo $? 
+0 +``` + +To make this CLI with ACI integration your default Docker CLI, you must move it +to a directory in your `PATH` with higher priority than the existing Docker CLI. + +Again, on a fresh Ubuntu 20.04: + +```console +$ which docker +/usr/bin/docker +$ echo $PATH +/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin +$ sudo mv docker-aci /usr/local/bin/docker +$ which docker +/usr/local/bin/docker +$ docker version +... + Azure integration 0.1.4 +... +``` + +### Supported commands + +After you have installed the Docker ACI Integration CLI, run `--help` to see the current list of commands. + +### Uninstall + +To remove the Docker Azure Integration CLI, you need to remove the binary you downloaded and `com.docker.cli` from your `PATH`. If you installed using the script, this can be done as follows: + +```console +$ sudo rm /usr/local/bin/docker /usr/local/bin/com.docker.cli +``` + +## Feedback + +Thank you for trying out Docker Azure Integration. Your feedback is very important to us. Let us know your feedback by creating an issue in the [compose-cli](https://github.com/docker/compose-cli){: target="_blank" rel="noopener" class="_"} GitHub repository. diff --git a/cloud/ecs-integration.md b/cloud/ecs-integration.md index 1d694b7478..040270f88d 100644 --- a/cloud/ecs-integration.md +++ b/cloud/ecs-integration.md @@ -1,624 +1,624 @@ ---- -title: Deploying Docker containers on ECS -description: Deploying Docker containers on ECS -keywords: Docker, AWS, ECS, Integration, context, Compose, cli, deploy, containers, cloud -redirect_from: - - /engine/context/ecs-integration/ -toc_min: 1 -toc_max: 2 ---- - -## Overview - -The Docker Compose CLI enables developers to use native Docker commands to run applications in Amazon Elastic Container Service (ECS) when building cloud-native applications. - -The integration between Docker and Amazon ECS allows developers to use the Docker Compose CLI to: - -* Set up an AWS context in one Docker command, allowing you to switch from a local context to a cloud context and run applications quickly and easily -* Simplify multi-container application development on Amazon ECS using Compose files - -Also see the [ECS integration architecture](ecs-architecture.md), [full list of compose features](ecs-compose-features.md) and [Compose examples for ECS integration](ecs-compose-examples.md). - -## Prerequisites - -To deploy Docker containers on ECS, you must meet the following requirements: - -1. Download and install the latest version of Docker Desktop. - - - [Download for Mac](../desktop/mac/install.md) - - [Download for Windows](../desktop/windows/install.md) - - Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux). - -2. Ensure you have an AWS account. - -Docker not only runs multi-container applications locally, but also enables -developers to seamlessly deploy Docker containers on Amazon ECS using a -Compose file with the `docker compose up` command. The following sections -contain instructions on how to deploy your Compose application on Amazon ECS. - -## Run an application on ECS - -### Requirements - -AWS uses a fine-grained permission model, with specific role for each resource type and operation. 
- -To ensure that Docker ECS integration is allowed to manage resources for your Compose application, you have to ensure your AWS credentials [grant access to following AWS IAM permissions](https://aws.amazon.com/iam/features/manage-permissions/): - -* application-autoscaling:* -* cloudformation:* -* ec2:AuthorizeSecurityGroupIngress -* ec2:CreateSecurityGroup -* ec2:CreateTags -* ec2:DeleteSecurityGroup -* ec2:DescribeRouteTables -* ec2:DescribeSecurityGroups -* ec2:DescribeSubnets -* ec2:DescribeVpcs -* ec2:RevokeSecurityGroupIngress -* ecs:CreateCluster -* ecs:CreateService -* ecs:DeleteCluster -* ecs:DeleteService -* ecs:DeregisterTaskDefinition -* ecs:DescribeClusters -* ecs:DescribeServices -* ecs:DescribeTasks -* ecs:ListAccountSettings -* ecs:ListTasks -* ecs:RegisterTaskDefinition -* ecs:UpdateService -* elasticloadbalancing:* -* iam:AttachRolePolicy -* iam:CreateRole -* iam:DeleteRole -* iam:DetachRolePolicy -* iam:PassRole -* logs:CreateLogGroup -* logs:DeleteLogGroup -* logs:DescribeLogGroups -* logs:FilterLogEvents -* route53:CreateHostedZone -* route53:DeleteHostedZone -* route53:GetHealthCheck -* route53:GetHostedZone -* route53:ListHostedZonesByName -* servicediscovery:* - -GPU support, which relies on EC2 instances to run containers with attached GPU devices, -require a few additional permissions: - -* ec2:DescribeVpcs -* autoscaling:* -* iam:CreateInstanceProfile -* iam:AddRoleToInstanceProfile -* iam:RemoveRoleFromInstanceProfile -* iam:DeleteInstanceProfile - -### Create AWS context - -Run the `docker context create ecs myecscontext` command to create an Amazon ECS Docker -context named `myecscontext`. If you have already installed and configured the AWS CLI, -the setup command lets you select an existing AWS profile to connect to Amazon. -Otherwise, you can create a new profile by passing an -[AWS access key ID and a secret access key](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys){: target="_blank" rel="noopener" class="_"}. -Finally, you can configure your ECS context to retrieve AWS credentials by `AWS_*` environment variables, which is a common way to integrate with -third-party tools and single-sign-on providers. - -```console -? Create a Docker context using: [Use arrows to move, type to filter] - An existing AWS profile - AWS secret and token credentials -> AWS environment variables -``` - -After you have created an AWS context, you can list your Docker contexts by running the `docker context ls` command: - -```console -NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR -myecscontext ecs credentials read from environment -default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm -``` - -### Run a Compose application - -You can deploy and manage multi-container applications defined in Compose files -to Amazon ECS using the `docker compose` command. To do this: - -- Ensure you are using your ECS context. You can do this either by specifying -the `--context myecscontext` flag with your command, or by setting the -current context using the command `docker context use myecscontext`. - -- Run `docker compose up` and `docker compose down` to start and then -stop a full Compose application. - - By default, `docker compose up` uses the `compose.yaml` or `docker-compose.yaml` file in - the current folder. You can specify the working directory using the --workdir flag or - specify the Compose file directly using `docker compose --file mycomposefile.yaml up`. 
- - You can also specify a name for the Compose application using the `--project-name` flag during deployment. If no name is specified, a name will be derived from the working directory. - -Docker ECS integration converts the Compose application model into a set of AWS resources, described as a [CloudFormation](https://aws.amazon.com/cloudformation/){: target="_blank" rel="noopener" class="_"} template. The actual mapping is described in [technical documentation](https://github.com/docker/compose-cli/blob/main/docs/ecs-architecture.md){: target="_blank" rel="noopener" class="_"}. -You can review the generated template using `docker compose convert` command, and follow CloudFormation applying this model within -[AWS web console](https://console.aws.amazon.com/cloudformation/home){: target="_blank" rel="noopener" class="_"} when you run `docker compose up`, in addition to CloudFormation events being displayed -in your terminal. - -- You can view services created for the Compose application on Amazon ECS and -their state using the `docker compose ps` command. - -- You can view logs from containers that are part of the Compose application -using the `docker compose logs` command. - -Also see the [full list of compose features](ecs-compose-features.md). - -## Rolling update - -To update your application without interrupting production flow you can simply -use `docker compose up` on the updated Compose project. -Your ECS services are created with rolling update configuration. As you run -`docker compose up` with a modified Compose file, the stack will be -updated to reflect changes, and if required, some services will be replaced. -This replacement process will follow the rolling-update configuration set by -your services [`deploy.update_config`](../compose/compose-file/compose-file-v3.md#update_config) -configuration. - -AWS ECS uses a percent-based model to define the number of containers to be -run or shut down during a rolling update. The Docker Compose CLI computes -rolling update configuration according to the `parallelism` and `replicas` -fields. However, you might prefer to directly configure a rolling update -using the extension fields `x-aws-min_percent` and `x-aws-max_percent`. -The former sets the minimum percent of containers to run for service, and the -latter sets the maximum percent of additional containers to start before -previous versions are removed. - -By default, the ECS rolling update is set to run twice the number of -containers for a service (200%), and has the ability to shut down 100% -containers during the update. - -## View application logs - -The Docker Compose CLI configures AWS CloudWatch Logs service for your -containers. -By default you can see logs of your compose application the same way you check logs of local deployments: - -```console -# fetch logs for application in current working directory -$ docker compose logs - -# specify compose project name -$ docker compose --project-name PROJECT logs - -# specify compose file -$ docker compose --file /path/to/docker-compose.yaml logs -``` - -A log group is created for the application as `docker-compose/`, -and log streams are created for each service and container in your application -as `//`. - -You can fine tune AWS CloudWatch Logs using extension field `x-aws-logs_retention` -in your Compose file to set the number of retention days for log events. The -default behavior is to keep logs forever. - -You can also pass `awslogs` -parameters to your container as standard -Compose file `logging.driver_opts` elements. 
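-For instance, a sketch combining a retention period with one of the `awslogs` driver options might look like this (the service name and the `awslogs-datetime-pattern` value are illustrative, and `x-aws-logs_retention` is assumed to sit at the top level of the Compose file):
-
-```yaml
-services:
-  foo:
-    image: nginx
-    logging:
-      driver_opts:
-        awslogs-datetime-pattern: "%Y-%m-%d"   # illustrative awslogs option
-
-x-aws-logs_retention: 30   # keep log events for 30 days
-```
-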
See [AWS documentation](https://docs.amazonaws.cn/en_us/AmazonECS/latest/developerguide/using_awslogs.html){:target="_blank" rel="noopener" class="_"} for details on available log driver options. - -## Private Docker images - -The Docker Compose CLI automatically configures authorization so you can pull private images from the Amazon ECR registry on the same AWS account. To pull private images from another registry, including Docker Hub, you’ll have to create a Username + Password (or a Username + Token) secret on the [AWS Secrets Manager service](https://docs.aws.amazon.com/secretsmanager/){: target="_blank" rel="noopener" class="_"}. - -For your convenience, the Docker Compose CLI offers the `docker secret` command, so you can manage secrets created on AWS SMS without having to install the AWS CLI. - -First, create a `token.json` file to define your DockerHub username and access token. - -For instructions on how to generate access tokens, see [Managing access tokens](../docker-hub/access-tokens.md). - -```json -{ - "username":"DockerHubUserName", - "password":"DockerHubAccessToken" -} -``` - -You can then create a secret from this file using `docker secret`: - -```console -$ docker secret create dockerhubAccessToken token.json -arn:aws:secretsmanager:eu-west-3:12345:secret:DockerHubAccessToken -``` - -Once created, you can use this ARN in you Compose file using using `x-aws-pull_credentials` custom extension with the Docker image URI for your service. - -```yaml -services: - worker: - image: mycompany/privateimage - x-aws-pull_credentials: "arn:aws:secretsmanager:eu-west-3:12345:secret:DockerHubAccessToken" -``` - -> **Note** -> -> If you set the Compose file version to 3.8 or later, you can use the same Compose file for local deployment using `docker-compose`. Custom ECS extensions will be ignored in this case. - -## Service discovery - -Service-to-service communication is implemented transparently by default, so you can deploy your Compose applications with multiple interconnected services without changing the compose file between local and ECS deployment. Individual services can run with distinct constraints (memory, cpu) and replication rules. - -### Service names - -Services are registered automatically by the Docker Compose CLI on [AWS Cloud Map](https://docs.aws.amazon.com/cloud-map/latest/dg/what-is-cloud-map.html){: target="_blank" rel="noopener" class="_"} during application deployment. They are declared as fully qualified domain names of the form: `..local`. - -Services can retrieve their dependencies using Compose service names (as they do when deploying locally with docker-compose), or optionally use the fully qualified names. - -> **Note** -> -> Short service names, nor the fully qualified service names, will resolve unless you enable public dns names in your VPC. - -### Dependent service startup time and DNS resolution - -Services get concurrently scheduled on ECS when a Compose file is deployed. AWS Cloud Map introduces an initial delay for DNS service to be able to resolve your services domain names. Your code needs to support this delay by waiting for dependent services to be ready, or by adding a wait-script as the entrypoint to your Docker image, as documented in [Control startup order](../compose/startup-order.md). -Note this need to wait for dependent services in your Compose application also exists when deploying locally with docker-compose, but the delay is typically shorter. 
Issues might become more visible when deploying to ECS if services do not wait for their dependencies to be available. - -Alternatively, you can use the [depends_on](https://github.com/compose-spec/compose-spec/blob/master/spec.md#depends_on){: target="_blank" rel="noopener" class="_"} feature of the Compose file format. By doing this, dependent service will be created first, and application deployment will wait for it to be up and running before starting the creation of the dependent services. - -### Service isolation - -Service isolation is implemented by the [Security Groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html){: target="_blank" rel="noopener" class="_"} rules, allowing services sharing a common Compose file “network” to communicate together using their Compose service names. - -## Volumes - -ECS integration supports volume management based on Amazon Elastic File System (Amazon EFS). -For a Compose file to declare a `volume`, ECS integration will define creation of an EFS -file system within the CloudFormation template, with `Retain` policy so data won't -be deleted on application shut-down. If the same application (same project name) is -deployed again, the file system will be re-attached to offer the same user experience -developers are used to with docker-compose. - -A basic compose service using a volume can be declared like this: - -```yaml -services: - nginx: - image: nginx - volumes: - - mydata:/some/container/path -volumes: - mydata: -``` - -With no specific volume options, the volume still must be declared in the `volumes`section for -the compose file to be valid (in the above example the empty `mydata:` entry) -If required, the initial file system can be customized using `driver-opts`: - -```yaml -volumes: - my-data: - driver_opts: - # Filesystem configuration - backup_policy: ENABLED - lifecycle_policy: AFTER_14_DAYS - performance_mode: maxIO - throughput_mode: provisioned - provisioned_throughput: 1 -``` - -File systems created by executing `docker compose up` on AWS can be listed using -`docker volume ls` and removed with `docker volume rm `. - -An existing file system can also be used for users who already have data stored on EFS -or want to use a file system created by another Compose stack. - -```yaml -volumes: - my-data: - external: true - name: fs-123abcd -``` - -Accessing a volume from a container can introduce POSIX user ID -permission issues, as Docker images can define arbitrary user ID / group ID for the -process to run inside a container. However, the same `uid:gid` will have to match -POSIX permissions on the file system. To work around the possible conflict, you can set the volume -`uid` and `gid` to be used when accessing a volume: - -```yaml -volumes: - my-data: - driver_opts: - # Access point configuration - uid: 0 - gid: 0 -``` - -## Secrets - -You can pass secrets to your ECS services using Docker model to bind sensitive -data as files under `/run/secrets`. If your Compose file declares a secret as -file, such a secret will be created as part of your application deployment on -ECS. If you use an existing secret as `external: true` reference in your -Compose file, use the ECS Secrets Manager full ARN as the secret name: - -```yaml -services: - webapp: - image: ... - secrets: - - foo - -secrets: - foo: - name: "arn:aws:secretsmanager:eu-west-3:1234:secret:foo-ABC123" - external: true -``` - -Secrets will be available at runtime for your service as a plain text file `/run/secrets/foo`. 
- -The AWS Secrets Manager allows you to store sensitive data either as a plain -text (like Docker secret does), or as a hierarchical JSON document. You can -use the latter with Docker Compose CLI by using custom field `x-aws-keys` to -define which entries in the JSON document to bind as a secret in your service -container. - -```yaml -services: - webapp: - image: ... - secrets: - - foo - -secrets: - foo: - name: "arn:aws:secretsmanager:eu-west-3:1234:secret:foo-ABC123" - x-aws-keys: - - "bar" -``` - -By doing this, the secret for `bar` key will be available at runtime for your -service as a plain text file `/run/secrets/foo/bar`. You can use the special -value `*` to get all keys bound in your container. - -## Auto scaling - -Scaling service static information (non auto-scaling) can be specified using the normal Compose syntax: - -```yaml -services: - foo: - deploy: - replicas: 3 -``` - -The Compose file model does not define any attributes to declare auto-scaling conditions. -Therefore, we rely on `x-aws-autoscaling` custom extension to define the auto-scaling range, as -well as cpu _or_ memory to define target metric, expressed as resource usage percent. - -```yaml -services: - foo: - deploy: - x-aws-autoscaling: - min: 1 - max: 10 #required - cpu: 75 - # mem: - mutualy exlusive with cpu -``` - -## IAM roles - -Your ECS Tasks are executed with a dedicated IAM role, granting access -to AWS Managed policies[`AmazonECSTaskExecutionRolePolicy`](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) -and [`AmazonEC2ContainerRegistryReadOnly`](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecr_managed_policies.html). -In addition, if your service uses secrets, IAM Role gets additional -permissions to read and decrypt secrets from the AWS Secret Manager. - -You can grant additional managed policies to your service execution -by using `x-aws-policies` inside a service definition: - -```yaml -services: - foo: - x-aws-policies: - - "arn:aws:iam::aws:policy/AmazonS3FullAccess" -``` - -You can also write your own [IAM Policy Document](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) -to fine tune the IAM role to be applied to your ECS service, and use -`x-aws-role` inside a service definition to pass the -yaml-formatted policy document. - -```yaml -services: - foo: - x-aws-role: - Version: "2012-10-17" - Statement: - - Effect: "Allow" - Action: - - "some_aws_service" - Resource: - - "*" -``` - -## Tuning the CloudFormation template - -The Docker Compose CLI relies on [Amazon CloudFormation](https://docs.aws.amazon.com/cloudformation/){: target="_blank" rel="noopener" class="_"} to manage the application deployment. To get more control on the created resources, you can use `docker compose convert` to generate a CloudFormation stack file from your Compose file. This allows you to inspect resources it defines, or customize the template for your needs, and then apply the template to AWS using the AWS CLI, or the AWS web console. - -Once you have identified the changes required to your CloudFormation template, you can include _overlays_ in your -Compose file that will be automatically applied on `compose up`. An _overlay_ is a yaml object that uses the same CloudFormation template data structure as the one generated by ECS integration, but only contains attributes to -be updated or added. It will be merged with the generated template before being applied on the AWS infrastructure. 
- -### Adjusting Load Balancer http HealthCheck configuration - -While ECS cluster uses the `HealthCheck` command on container to get service health, Application Load Balancers define -their own URL-based HealthCheck mechanism so traffic gets routed. As the Compose model does not offer such an -abstraction (yet), the default one is applied, which queries your service under `/` expecting HTTP status code -`200`. - -You can tweak this behavior using a cloudformation overlay by following the [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancingv2-targetgroup.html){:target="_blank" rel="noopener" class="_"} for -configuration reference: - -```yaml -services: - webapp: - image: acme/webapp - ports: - - "80:80" - -x-aws-cloudformation: - Resources: - WebappTCP80TargetGroup: - Properties: - HealthCheckPath: /health - Matcher: - HttpCode: 200-499 -``` - -### Setting SSL termination by Load Balancer - -You can use Application Load Balancer to handle the SSL termination for HTTPS services, so that your code, which ran inside -a container, doesn't have to. This is currently not supported by the ECS integration due to the lack of an equivalent abstraction in the Compose specification. However, you can rely on overlays to enable this feature on generated Listeners configuration: - -```yaml -services: - webapp: - image: acme/webapp - ports: - - "80:80" - -x-aws-cloudformation: - Resources: - WebappTCP80Listener: - Properties: - Certificates: - - CertificateArn: "arn:aws:acm:certificate/123abc" - Protocol: HTTPS - Port: 443 -``` - -## Using existing AWS network resources - -By default, the Docker Compose CLI creates an ECS cluster for your Compose application, a Security Group per network in your Compose file on your AWS account’s default VPC, and a LoadBalancer to route traffic to your services. - -With the following basic compose file, the Docker Compose CLI will automatically create these ECS constructs including the load balancer to route traffic to the exposed port 80. - -```yaml -services: - nginx: - image: nginx - ports: - - "80:80" -``` - -If your AWS account does not have [permissions](https://github.com/docker/ecs-plugin/blob/master/docs/requirements.md#permissions){: target="_blank" rel="noopener" class="_"} to create such resources, or if you want to manage these yourself, you can use the following custom Compose extensions: - -- Use `x-aws-cluster` as a top-level element in your Compose file to set the ID -of an ECS cluster when deploying a Compose application. Otherwise, a -cluster will be created for the Compose project. - -- Use `x-aws-vpc` as a top-level element in your Compose file to set the ARN -of a VPC when deploying a Compose application. - -- Use `x-aws-loadbalancer` as a top-level element in your Compose file to set -the ARN of an existing LoadBalancer. - -The latter can be used for those who want to customize application exposure, typically to -use an existing domain name for your application: - -1. Use the AWS web console or CLI to get your VPC and Subnets IDs. You can retrieve the default VPC ID and attached subnets using this AWS CLI commands: - - ```console - $ aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId' - - "vpc-123456" - $ aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-123456 --query 'Subnets[*].SubnetId' - - [ - "subnet-1234abcd", - "subnet-6789ef00", - ] - ``` - -2. Use the AWS CLI to create your load balancer. 
The AWS Web Console can also be used but will require adding at least one listener, which we don't need here. - - ```console - $ aws elbv2 create-load-balancer --name myloadbalancer --type application --subnets "subnet-1234abcd" "subnet-6789ef00" - - { - "LoadBalancers": [ - { - "IpAddressType": "ipv4", - "VpcId": "vpc-123456", - "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456", - "DNSName": "myloadbalancer-123456.us-east-1.elb.amazonaws.com", - <...> - ``` - -3. To assign your application an existing domain name, you can configure your DNS with a - CNAME entry pointing to just-created loadbalancer's `DNSName` reported as you created the loadbalancer. - -4. Use Loadbalancer ARN to set `x-aws-loadbalancer` in your compose file, and deploy your application using `docker compose up` command. - -Please note Docker ECS integration won't be aware of this domain name, so `docker compose ps` command will report URLs with loadbalancer DNSName, not your own domain. - -You also can use `external: true` inside a network definition in your Compose file for -Docker Compose CLI to _not_ create a Security Group, and set `name` with the -ID of an existing SecurityGroup you want to use for network connectivity between -services: - -```yaml -networks: - back_tier: - external: true - name: "sg-1234acbd" -``` - -## Local simulation - -When you deploy your application on ECS, you may also rely on the additional AWS services. -In such cases, your code must embed the AWS SDK and retrieve API credentials at runtime. -AWS offers a credentials discovery mechanism which is fully implemented by the SDK, and relies -on accessing a metadata service on a fixed IP address. - -Once you adopt this approach, running your application locally for testing or debug purposes -can be difficult. Therefore, we have introduced an option on context creation to set the -`ecs-local` context to maintain application portability between local workstation and the -AWS cloud provider. - -```console -$ docker context create ecs --local-simulation ecsLocal -Successfully created ecs-local context "ecsLocal" -``` - -When you select a local simulation context, running the `docker compose up` command doesn't -deploy your application on ECS. Therefore, you must run it locally, automatically adjusting your Compose -application so it includes the [ECS local endpoints](https://github.com/awslabs/amazon-ecs-local-container-endpoints/). -This allows the AWS SDK used by application code to -access a local mock container as "AWS metadata API" and retrieve credentials from your own -local `.aws/credentials` config file. - -## Install the Docker Compose CLI on Linux - -The Docker Compose CLI adds support for running and managing containers on ECS. - -### Install Prerequisites - -[Docker 19.03 or later](../get-docker.md) - -### Install script - -You can install the new CLI using the install script: - -```console -$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh -``` - -## FAQ - -**What does the error `this tool requires the "new ARN resource ID format"` mean?** - -This error message means that your account requires the new ARN resource ID format for ECS. To learn more, see [Migrating your Amazon ECS deployment to the new ARN and resource ID format](https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-deployment-to-the-new-arn-and-resource-id-format-2/){: target="_blank" rel="noopener" class="_"}. 
- -## Feedback - -Thank you for trying out the Docker Compose CLI. Your feedback is very important to us. Let us know your feedback by creating an issue in the [Compose CLI](https://github.com/docker/compose-cli){: target="_blank" rel="noopener" class="_"} GitHub repository. +--- +title: Deploying Docker containers on ECS +description: Deploying Docker containers on ECS +keywords: Docker, AWS, ECS, Integration, context, Compose, cli, deploy, containers, cloud +redirect_from: + - /engine/context/ecs-integration/ +toc_min: 1 +toc_max: 2 +--- + +## Overview + +The Docker Compose CLI enables developers to use native Docker commands to run applications in Amazon Elastic Container Service (ECS) when building cloud-native applications. + +The integration between Docker and Amazon ECS allows developers to use the Docker Compose CLI to: + +* Set up an AWS context in one Docker command, allowing you to switch from a local context to a cloud context and run applications quickly and easily +* Simplify multi-container application development on Amazon ECS using Compose files + +Also see the [ECS integration architecture](ecs-architecture.md), [full list of compose features](ecs-compose-features.md) and [Compose examples for ECS integration](ecs-compose-examples.md). + +## Prerequisites + +To deploy Docker containers on ECS, you must meet the following requirements: + +1. Download and install the latest version of Docker Desktop. + + - [Download for Mac](../desktop/mac/install.md) + - [Download for Windows](../desktop/windows/install.md) + + Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux). + +2. Ensure you have an AWS account. + +Docker not only runs multi-container applications locally, but also enables +developers to seamlessly deploy Docker containers on Amazon ECS using a +Compose file with the `docker compose up` command. The following sections +contain instructions on how to deploy your Compose application on Amazon ECS. + +## Run an application on ECS + +### Requirements + +AWS uses a fine-grained permission model, with specific role for each resource type and operation. 
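It can therefore help to confirm which IAM identity your local credentials resolve to before going further, and to grant the permissions listed below to that identity, for example through a customer-managed policy. A minimal sketch with the AWS CLI, assuming it is installed and configured (the user name and policy ARN are placeholders, not values from this guide):

```console
# Check which IAM identity the current credentials map to
$ aws sts get-caller-identity

# Attach a customer-managed policy containing the permissions listed below
$ aws iam attach-user-policy \
    --user-name developer \
    --policy-arn arn:aws:iam::123456789012:policy/ComposeEcsPermissions
```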
+ +To ensure that Docker ECS integration is allowed to manage resources for your Compose application, you have to ensure your AWS credentials [grant access to following AWS IAM permissions](https://aws.amazon.com/iam/features/manage-permissions/): + +* application-autoscaling:* +* cloudformation:* +* ec2:AuthorizeSecurityGroupIngress +* ec2:CreateSecurityGroup +* ec2:CreateTags +* ec2:DeleteSecurityGroup +* ec2:DescribeRouteTables +* ec2:DescribeSecurityGroups +* ec2:DescribeSubnets +* ec2:DescribeVpcs +* ec2:RevokeSecurityGroupIngress +* ecs:CreateCluster +* ecs:CreateService +* ecs:DeleteCluster +* ecs:DeleteService +* ecs:DeregisterTaskDefinition +* ecs:DescribeClusters +* ecs:DescribeServices +* ecs:DescribeTasks +* ecs:ListAccountSettings +* ecs:ListTasks +* ecs:RegisterTaskDefinition +* ecs:UpdateService +* elasticloadbalancing:* +* iam:AttachRolePolicy +* iam:CreateRole +* iam:DeleteRole +* iam:DetachRolePolicy +* iam:PassRole +* logs:CreateLogGroup +* logs:DeleteLogGroup +* logs:DescribeLogGroups +* logs:FilterLogEvents +* route53:CreateHostedZone +* route53:DeleteHostedZone +* route53:GetHealthCheck +* route53:GetHostedZone +* route53:ListHostedZonesByName +* servicediscovery:* + +GPU support, which relies on EC2 instances to run containers with attached GPU devices, +require a few additional permissions: + +* ec2:DescribeVpcs +* autoscaling:* +* iam:CreateInstanceProfile +* iam:AddRoleToInstanceProfile +* iam:RemoveRoleFromInstanceProfile +* iam:DeleteInstanceProfile + +### Create AWS context + +Run the `docker context create ecs myecscontext` command to create an Amazon ECS Docker +context named `myecscontext`. If you have already installed and configured the AWS CLI, +the setup command lets you select an existing AWS profile to connect to Amazon. +Otherwise, you can create a new profile by passing an +[AWS access key ID and a secret access key](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys){: target="_blank" rel="noopener" class="_"}. +Finally, you can configure your ECS context to retrieve AWS credentials by `AWS_*` environment variables, which is a common way to integrate with +third-party tools and single-sign-on providers. + +```console +? Create a Docker context using: [Use arrows to move, type to filter] + An existing AWS profile + AWS secret and token credentials +> AWS environment variables +``` + +After you have created an AWS context, you can list your Docker contexts by running the `docker context ls` command: + +```console +NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR +myecscontext ecs credentials read from environment +default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm +``` + +### Run a Compose application + +You can deploy and manage multi-container applications defined in Compose files +to Amazon ECS using the `docker compose` command. To do this: + +- Ensure you are using your ECS context. You can do this either by specifying +the `--context myecscontext` flag with your command, or by setting the +current context using the command `docker context use myecscontext`. + +- Run `docker compose up` and `docker compose down` to start and then +stop a full Compose application. + + By default, `docker compose up` uses the `compose.yaml` or `docker-compose.yaml` file in + the current folder. You can specify the working directory using the --workdir flag or + specify the Compose file directly using `docker compose --file mycomposefile.yaml up`. 
+ + You can also specify a name for the Compose application using the `--project-name` flag during deployment. If no name is specified, a name will be derived from the working directory. + +Docker ECS integration converts the Compose application model into a set of AWS resources, described as a [CloudFormation](https://aws.amazon.com/cloudformation/){: target="_blank" rel="noopener" class="_"} template. The actual mapping is described in [technical documentation](https://github.com/docker/compose-cli/blob/main/docs/ecs-architecture.md){: target="_blank" rel="noopener" class="_"}. +You can review the generated template using `docker compose convert` command, and follow CloudFormation applying this model within +[AWS web console](https://console.aws.amazon.com/cloudformation/home){: target="_blank" rel="noopener" class="_"} when you run `docker compose up`, in addition to CloudFormation events being displayed +in your terminal. + +- You can view services created for the Compose application on Amazon ECS and +their state using the `docker compose ps` command. + +- You can view logs from containers that are part of the Compose application +using the `docker compose logs` command. + +Also see the [full list of compose features](ecs-compose-features.md). + +## Rolling update + +To update your application without interrupting production flow you can simply +use `docker compose up` on the updated Compose project. +Your ECS services are created with rolling update configuration. As you run +`docker compose up` with a modified Compose file, the stack will be +updated to reflect changes, and if required, some services will be replaced. +This replacement process will follow the rolling-update configuration set by +your services [`deploy.update_config`](../compose/compose-file/compose-file-v3.md#update_config) +configuration. + +AWS ECS uses a percent-based model to define the number of containers to be +run or shut down during a rolling update. The Docker Compose CLI computes +rolling update configuration according to the `parallelism` and `replicas` +fields. However, you might prefer to directly configure a rolling update +using the extension fields `x-aws-min_percent` and `x-aws-max_percent`. +The former sets the minimum percent of containers to run for service, and the +latter sets the maximum percent of additional containers to start before +previous versions are removed. + +By default, the ECS rolling update is set to run twice the number of +containers for a service (200%), and has the ability to shut down 100% +containers during the update. + +## View application logs + +The Docker Compose CLI configures AWS CloudWatch Logs service for your +containers. +By default you can see logs of your compose application the same way you check logs of local deployments: + +```console +# fetch logs for application in current working directory +$ docker compose logs + +# specify compose project name +$ docker compose --project-name PROJECT logs + +# specify compose file +$ docker compose --file /path/to/docker-compose.yaml logs +``` + +A log group is created for the application as `docker-compose/`, +and log streams are created for each service and container in your application +as `//`. + +You can fine tune AWS CloudWatch Logs using extension field `x-aws-logs_retention` +in your Compose file to set the number of retention days for log events. The +default behavior is to keep logs forever. + +You can also pass `awslogs` +parameters to your container as standard +Compose file `logging.driver_opts` elements. 
See [AWS documentation](https://docs.amazonaws.cn/en_us/AmazonECS/latest/developerguide/using_awslogs.html){:target="_blank" rel="noopener" class="_"} for details on available log driver options. + +## Private Docker images + +The Docker Compose CLI automatically configures authorization so you can pull private images from the Amazon ECR registry on the same AWS account. To pull private images from another registry, including Docker Hub, you’ll have to create a Username + Password (or a Username + Token) secret on the [AWS Secrets Manager service](https://docs.aws.amazon.com/secretsmanager/){: target="_blank" rel="noopener" class="_"}. + +For your convenience, the Docker Compose CLI offers the `docker secret` command, so you can manage secrets created on AWS SMS without having to install the AWS CLI. + +First, create a `token.json` file to define your DockerHub username and access token. + +For instructions on how to generate access tokens, see [Managing access tokens](../docker-hub/access-tokens.md). + +```json +{ + "username":"DockerHubUserName", + "password":"DockerHubAccessToken" +} +``` + +You can then create a secret from this file using `docker secret`: + +```console +$ docker secret create dockerhubAccessToken token.json +arn:aws:secretsmanager:eu-west-3:12345:secret:DockerHubAccessToken +``` + +Once created, you can use this ARN in you Compose file using using `x-aws-pull_credentials` custom extension with the Docker image URI for your service. + +```yaml +services: + worker: + image: mycompany/privateimage + x-aws-pull_credentials: "arn:aws:secretsmanager:eu-west-3:12345:secret:DockerHubAccessToken" +``` + +> **Note** +> +> If you set the Compose file version to 3.8 or later, you can use the same Compose file for local deployment using `docker-compose`. Custom ECS extensions will be ignored in this case. + +## Service discovery + +Service-to-service communication is implemented transparently by default, so you can deploy your Compose applications with multiple interconnected services without changing the compose file between local and ECS deployment. Individual services can run with distinct constraints (memory, cpu) and replication rules. + +### Service names + +Services are registered automatically by the Docker Compose CLI on [AWS Cloud Map](https://docs.aws.amazon.com/cloud-map/latest/dg/what-is-cloud-map.html){: target="_blank" rel="noopener" class="_"} during application deployment. They are declared as fully qualified domain names of the form: `..local`. + +Services can retrieve their dependencies using Compose service names (as they do when deploying locally with docker-compose), or optionally use the fully qualified names. + +> **Note** +> +> Short service names, nor the fully qualified service names, will resolve unless you enable public dns names in your VPC. + +### Dependent service startup time and DNS resolution + +Services get concurrently scheduled on ECS when a Compose file is deployed. AWS Cloud Map introduces an initial delay for DNS service to be able to resolve your services domain names. Your code needs to support this delay by waiting for dependent services to be ready, or by adding a wait-script as the entrypoint to your Docker image, as documented in [Control startup order](../compose/startup-order.md). +Note this need to wait for dependent services in your Compose application also exists when deploying locally with docker-compose, but the delay is typically shorter. 
Issues might become more visible when deploying to ECS if services do not wait for their dependencies to be available. + +Alternatively, you can use the [depends_on](https://github.com/compose-spec/compose-spec/blob/master/spec.md#depends_on){: target="_blank" rel="noopener" class="_"} feature of the Compose file format. By doing this, dependent service will be created first, and application deployment will wait for it to be up and running before starting the creation of the dependent services. + +### Service isolation + +Service isolation is implemented by the [Security Groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html){: target="_blank" rel="noopener" class="_"} rules, allowing services sharing a common Compose file “network” to communicate together using their Compose service names. + +## Volumes + +ECS integration supports volume management based on Amazon Elastic File System (Amazon EFS). +For a Compose file to declare a `volume`, ECS integration will define creation of an EFS +file system within the CloudFormation template, with `Retain` policy so data won't +be deleted on application shut-down. If the same application (same project name) is +deployed again, the file system will be re-attached to offer the same user experience +developers are used to with docker-compose. + +A basic compose service using a volume can be declared like this: + +```yaml +services: + nginx: + image: nginx + volumes: + - mydata:/some/container/path +volumes: + mydata: +``` + +With no specific volume options, the volume still must be declared in the `volumes`section for +the compose file to be valid (in the above example the empty `mydata:` entry) +If required, the initial file system can be customized using `driver-opts`: + +```yaml +volumes: + my-data: + driver_opts: + # Filesystem configuration + backup_policy: ENABLED + lifecycle_policy: AFTER_14_DAYS + performance_mode: maxIO + throughput_mode: provisioned + provisioned_throughput: 1 +``` + +File systems created by executing `docker compose up` on AWS can be listed using +`docker volume ls` and removed with `docker volume rm `. + +An existing file system can also be used for users who already have data stored on EFS +or want to use a file system created by another Compose stack. + +```yaml +volumes: + my-data: + external: true + name: fs-123abcd +``` + +Accessing a volume from a container can introduce POSIX user ID +permission issues, as Docker images can define arbitrary user ID / group ID for the +process to run inside a container. However, the same `uid:gid` will have to match +POSIX permissions on the file system. To work around the possible conflict, you can set the volume +`uid` and `gid` to be used when accessing a volume: + +```yaml +volumes: + my-data: + driver_opts: + # Access point configuration + uid: 0 + gid: 0 +``` + +## Secrets + +You can pass secrets to your ECS services using Docker model to bind sensitive +data as files under `/run/secrets`. If your Compose file declares a secret as +file, such a secret will be created as part of your application deployment on +ECS. If you use an existing secret as `external: true` reference in your +Compose file, use the ECS Secrets Manager full ARN as the secret name: + +```yaml +services: + webapp: + image: ... + secrets: + - foo + +secrets: + foo: + name: "arn:aws:secretsmanager:eu-west-3:1234:secret:foo-ABC123" + external: true +``` + +Secrets will be available at runtime for your service as a plain text file `/run/secrets/foo`. 
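The example above references an existing secret by its full ARN. For comparison, here is a minimal sketch of the file-based form mentioned earlier, where the secret content is read from a local file at deployment time and created for you (the image name and file path are placeholders):

```yaml
services:
  webapp:
    image: myorg/webapp   # placeholder image
    secrets:
      - foo

secrets:
  foo:
    file: ./secrets/foo.txt   # local file whose content becomes the secret value
```

The external form suits secrets whose lifecycle is managed outside the Compose application, while the file form keeps everything inside the project.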
+ +The AWS Secrets Manager allows you to store sensitive data either as a plain +text (like Docker secret does), or as a hierarchical JSON document. You can +use the latter with Docker Compose CLI by using custom field `x-aws-keys` to +define which entries in the JSON document to bind as a secret in your service +container. + +```yaml +services: + webapp: + image: ... + secrets: + - foo + +secrets: + foo: + name: "arn:aws:secretsmanager:eu-west-3:1234:secret:foo-ABC123" + x-aws-keys: + - "bar" +``` + +By doing this, the secret for `bar` key will be available at runtime for your +service as a plain text file `/run/secrets/foo/bar`. You can use the special +value `*` to get all keys bound in your container. + +## Auto scaling + +Scaling service static information (non auto-scaling) can be specified using the normal Compose syntax: + +```yaml +services: + foo: + deploy: + replicas: 3 +``` + +The Compose file model does not define any attributes to declare auto-scaling conditions. +Therefore, we rely on `x-aws-autoscaling` custom extension to define the auto-scaling range, as +well as cpu _or_ memory to define target metric, expressed as resource usage percent. + +```yaml +services: + foo: + deploy: + x-aws-autoscaling: + min: 1 + max: 10 #required + cpu: 75 + # mem: - mutualy exlusive with cpu +``` + +## IAM roles + +Your ECS Tasks are executed with a dedicated IAM role, granting access +to AWS Managed policies[`AmazonECSTaskExecutionRolePolicy`](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) +and [`AmazonEC2ContainerRegistryReadOnly`](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecr_managed_policies.html). +In addition, if your service uses secrets, IAM Role gets additional +permissions to read and decrypt secrets from the AWS Secret Manager. + +You can grant additional managed policies to your service execution +by using `x-aws-policies` inside a service definition: + +```yaml +services: + foo: + x-aws-policies: + - "arn:aws:iam::aws:policy/AmazonS3FullAccess" +``` + +You can also write your own [IAM Policy Document](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) +to fine tune the IAM role to be applied to your ECS service, and use +`x-aws-role` inside a service definition to pass the +yaml-formatted policy document. + +```yaml +services: + foo: + x-aws-role: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Action: + - "some_aws_service" + Resource: + - "*" +``` + +## Tuning the CloudFormation template + +The Docker Compose CLI relies on [Amazon CloudFormation](https://docs.aws.amazon.com/cloudformation/){: target="_blank" rel="noopener" class="_"} to manage the application deployment. To get more control on the created resources, you can use `docker compose convert` to generate a CloudFormation stack file from your Compose file. This allows you to inspect resources it defines, or customize the template for your needs, and then apply the template to AWS using the AWS CLI, or the AWS web console. + +Once you have identified the changes required to your CloudFormation template, you can include _overlays_ in your +Compose file that will be automatically applied on `compose up`. An _overlay_ is a yaml object that uses the same CloudFormation template data structure as the one generated by ECS integration, but only contains attributes to +be updated or added. It will be merged with the generated template before being applied on the AWS infrastructure. 
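If you prefer the manual route described above rather than overlays, the workflow looks roughly like this, assuming your version of the CLI writes the generated template to standard output (the stack name is a placeholder, and `CAPABILITY_IAM` is assumed because the generated template creates IAM roles):

```console
# Generate the CloudFormation template without deploying anything
$ docker compose convert > stack.yaml

# Review or edit stack.yaml, then apply it yourself with the AWS CLI
$ aws cloudformation deploy \
    --template-file stack.yaml \
    --stack-name my-compose-app \
    --capabilities CAPABILITY_IAM
```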
+ +### Adjusting Load Balancer http HealthCheck configuration + +While ECS cluster uses the `HealthCheck` command on container to get service health, Application Load Balancers define +their own URL-based HealthCheck mechanism so traffic gets routed. As the Compose model does not offer such an +abstraction (yet), the default one is applied, which queries your service under `/` expecting HTTP status code +`200`. + +You can tweak this behavior using a cloudformation overlay by following the [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancingv2-targetgroup.html){:target="_blank" rel="noopener" class="_"} for +configuration reference: + +```yaml +services: + webapp: + image: acme/webapp + ports: + - "80:80" + +x-aws-cloudformation: + Resources: + WebappTCP80TargetGroup: + Properties: + HealthCheckPath: /health + Matcher: + HttpCode: 200-499 +``` + +### Setting SSL termination by Load Balancer + +You can use Application Load Balancer to handle the SSL termination for HTTPS services, so that your code, which ran inside +a container, doesn't have to. This is currently not supported by the ECS integration due to the lack of an equivalent abstraction in the Compose specification. However, you can rely on overlays to enable this feature on generated Listeners configuration: + +```yaml +services: + webapp: + image: acme/webapp + ports: + - "80:80" + +x-aws-cloudformation: + Resources: + WebappTCP80Listener: + Properties: + Certificates: + - CertificateArn: "arn:aws:acm:certificate/123abc" + Protocol: HTTPS + Port: 443 +``` + +## Using existing AWS network resources + +By default, the Docker Compose CLI creates an ECS cluster for your Compose application, a Security Group per network in your Compose file on your AWS account’s default VPC, and a LoadBalancer to route traffic to your services. + +With the following basic compose file, the Docker Compose CLI will automatically create these ECS constructs including the load balancer to route traffic to the exposed port 80. + +```yaml +services: + nginx: + image: nginx + ports: + - "80:80" +``` + +If your AWS account does not have [permissions](https://github.com/docker/ecs-plugin/blob/master/docs/requirements.md#permissions){: target="_blank" rel="noopener" class="_"} to create such resources, or if you want to manage these yourself, you can use the following custom Compose extensions: + +- Use `x-aws-cluster` as a top-level element in your Compose file to set the ID +of an ECS cluster when deploying a Compose application. Otherwise, a +cluster will be created for the Compose project. + +- Use `x-aws-vpc` as a top-level element in your Compose file to set the ARN +of a VPC when deploying a Compose application. + +- Use `x-aws-loadbalancer` as a top-level element in your Compose file to set +the ARN of an existing LoadBalancer. + +The latter can be used for those who want to customize application exposure, typically to +use an existing domain name for your application: + +1. Use the AWS web console or CLI to get your VPC and Subnets IDs. You can retrieve the default VPC ID and attached subnets using this AWS CLI commands: + + ```console + $ aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId' + + "vpc-123456" + $ aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-123456 --query 'Subnets[*].SubnetId' + + [ + "subnet-1234abcd", + "subnet-6789ef00", + ] + ``` + +2. Use the AWS CLI to create your load balancer. 
The AWS Web Console can also be used but will require adding at least one listener, which we don't need here. + + ```console + $ aws elbv2 create-load-balancer --name myloadbalancer --type application --subnets "subnet-1234abcd" "subnet-6789ef00" + + { + "LoadBalancers": [ + { + "IpAddressType": "ipv4", + "VpcId": "vpc-123456", + "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456", + "DNSName": "myloadbalancer-123456.us-east-1.elb.amazonaws.com", + <...> + ``` + +3. To assign your application an existing domain name, you can configure your DNS with a + CNAME entry pointing to just-created loadbalancer's `DNSName` reported as you created the loadbalancer. + +4. Use Loadbalancer ARN to set `x-aws-loadbalancer` in your compose file, and deploy your application using `docker compose up` command. + +Please note Docker ECS integration won't be aware of this domain name, so `docker compose ps` command will report URLs with loadbalancer DNSName, not your own domain. + +You also can use `external: true` inside a network definition in your Compose file for +Docker Compose CLI to _not_ create a Security Group, and set `name` with the +ID of an existing SecurityGroup you want to use for network connectivity between +services: + +```yaml +networks: + back_tier: + external: true + name: "sg-1234acbd" +``` + +## Local simulation + +When you deploy your application on ECS, you may also rely on the additional AWS services. +In such cases, your code must embed the AWS SDK and retrieve API credentials at runtime. +AWS offers a credentials discovery mechanism which is fully implemented by the SDK, and relies +on accessing a metadata service on a fixed IP address. + +Once you adopt this approach, running your application locally for testing or debug purposes +can be difficult. Therefore, we have introduced an option on context creation to set the +`ecs-local` context to maintain application portability between local workstation and the +AWS cloud provider. + +```console +$ docker context create ecs --local-simulation ecsLocal +Successfully created ecs-local context "ecsLocal" +``` + +When you select a local simulation context, running the `docker compose up` command doesn't +deploy your application on ECS. Therefore, you must run it locally, automatically adjusting your Compose +application so it includes the [ECS local endpoints](https://github.com/awslabs/amazon-ecs-local-container-endpoints/). +This allows the AWS SDK used by application code to +access a local mock container as "AWS metadata API" and retrieve credentials from your own +local `.aws/credentials` config file. + +## Install the Docker Compose CLI on Linux + +The Docker Compose CLI adds support for running and managing containers on ECS. + +### Install Prerequisites + +[Docker 19.03 or later](../get-docker.md) + +### Install script + +You can install the new CLI using the install script: + +```console +$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh +``` + +## FAQ + +**What does the error `this tool requires the "new ARN resource ID format"` mean?** + +This error message means that your account requires the new ARN resource ID format for ECS. To learn more, see [Migrating your Amazon ECS deployment to the new ARN and resource ID format](https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-deployment-to-the-new-arn-and-resource-id-format-2/){: target="_blank" rel="noopener" class="_"}. 
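One way to opt in to the new format for the identity you are using is through the `aws ecs put-account-setting` command, sketched below; defaults have changed over time, so treat the linked migration article as the authoritative reference:

```console
$ aws ecs put-account-setting --name serviceLongArnFormat --value enabled
$ aws ecs put-account-setting --name taskLongArnFormat --value enabled
$ aws ecs put-account-setting --name containerInstanceLongArnFormat --value enabled
```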
+ +## Feedback + +Thank you for trying out the Docker Compose CLI. Your feedback is very important to us. Let us know your feedback by creating an issue in the [Compose CLI](https://github.com/docker/compose-cli){: target="_blank" rel="noopener" class="_"} GitHub repository. diff --git a/engine/context/working-with-contexts.md b/engine/context/working-with-contexts.md index d986a1ad9f..5fcdb74b3c 100644 --- a/engine/context/working-with-contexts.md +++ b/engine/context/working-with-contexts.md @@ -1,272 +1,272 @@ ---- -title: Docker Context -description: Learn about Docker Context -keywords: engine, context, cli, kubernetes ---- - - -## Introduction - -This guide shows how _contexts_ make it easy for a **single Docker CLI** to manage multiple Swarm clusters, multiple Kubernetes clusters, and multiple individual Docker nodes. - -A single Docker CLI can have multiple contexts. Each context contains all of the endpoint and security information required to manage a different cluster or node. The `docker context` command makes it easy to configure these contexts and switch between them. - -As an example, a single Docker client on your company laptop might be configured with two contexts; **dev-k8s** and **prod-swarm**. **dev-k8s** contains the endpoint data and security credentials to configure and manage a Kubernetes cluster in a development environment. **prod-swarm** contains everything required to manage a Swarm cluster in a production environment. Once these contexts are configured, you can use the top-level `docker context use ` to easily switch between them. - -For information on using Docker Context to deploy your apps to the cloud, see [Deploying Docker containers on Azure](../../cloud/aci-integration.md) and [Deploying Docker containers on ECS](../../cloud/ecs-integration.md). - -## Prerequisites - -To follow the examples in this guide, you'll need: - -- A Docker client that supports the top-level `context` command - -Run `docker context` to verify that your Docker client supports contexts. - -You will also need one of the following: - -- Docker Swarm cluster -- Single-engine Docker node -- Kubernetes cluster - -## The anatomy of a context - -A context is a combination of several properties. These include: - -- Name -- Endpoint configuration -- TLS info -- Orchestrator - -The easiest way to see what a context looks like is to view the **default** context. - -```console -$ docker context ls -NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR -default * Current... unix:///var/run/docker.sock swarm -``` - -This shows a single context called "default". It's configured to talk to a Swarm cluster through the local `/var/run/docker.sock` Unix socket. It has no Kubernetes endpoint configured. - -The asterisk in the `NAME` column indicates that this is the active context. This means all `docker` commands will be executed against the "default" context unless overridden with environment variables such as `DOCKER_HOST` and `DOCKER_CONTEXT`, or on the command-line with the `--context` and `--host` flags. - -Dig a bit deeper with `docker context inspect`. In this example, we're inspecting the context called `default`. 
- -```console -$ docker context inspect default -[ - { - "Name": "default", - "Metadata": { - "StackOrchestrator": "swarm" - }, - "Endpoints": { - "docker": { - "Host": "unix:///var/run/docker.sock", - "SkipTLSVerify": false - } - }, - "TLSMaterial": {}, - "Storage": { - "MetadataPath": "\u003cIN MEMORY\u003e", - "TLSPath": "\u003cIN MEMORY\u003e" - } - } -] -``` - -This context is using "swarm" as the orchestrator (`metadata.stackOrchestrator`). It is configured to talk to an endpoint exposed on a local Unix socket at `/var/run/docker.sock` (`Endpoints.docker.Host`), and requires TLS verification (`Endpoints.docker.SkipTLSVerify`). - -### Create a new context - -You can create new contexts with the `docker context create` command. - -The following example creates a new context called "docker-test" and specifies the following: - -- Default orchestrator = Swarm -- Issue commands to the local Unix socket `/var/run/docker.sock` - -```console -$ docker context create docker-test \ - --default-stack-orchestrator=swarm \ - --docker host=unix:///var/run/docker.sock - -Successfully created context "docker-test" -``` - -The new context is stored in a `meta.json` file below `~/.docker/contexts/`. Each new context you create gets its own `meta.json` stored in a dedicated sub-directory of `~/.docker/contexts/`. - -> **Note:** The default context behaves differently than manually created contexts. It does not have a `meta.json` configuration file, and it dynamically updates based on the current configuration. For example, if you switch your current Kubernetes config using `kubectl config use-context`, the default Docker context will dynamically update itself to the new Kubernetes endpoint. - -You can view the new context with `docker context ls` and `docker context inspect `. - -The following can be used to create a config with Kubernetes as the default orchestrator using the existing kubeconfig stored in `/home/ubuntu/.kube/config`. For this to work, you will need a valid kubeconfig file in `/home/ubuntu/.kube/config`. If your kubeconfig has more than one context, the current context (`kubectl config current-context`) will be used. - -```console -$ docker context create k8s-test \ - --default-stack-orchestrator=kubernetes \ - --kubernetes config-file=/home/ubuntu/.kube/config \ - --docker host=unix:///var/run/docker.sock - -Successfully created context "k8s-test" -``` - -You can view all contexts on the system with `docker context ls`. - -```console -$ docker context ls -NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR -default * Current unix:///var/run/docker.sock https://35.226.99.100 (default) swarm -k8s-test unix:///var/run/docker.sock https://35.226.99.100 (default) kubernetes -docker-test unix:///var/run/docker.sock swarm -``` - -The current context is indicated with an asterisk ("\*"). - -## Use a different context - -You can use `docker context use` to quickly switch between contexts. - -The following command will switch the `docker` CLI to use the "k8s-test" context. - -```console -$ docker context use k8s-test - -k8s-test -Current context is now "k8s-test" -``` - -Verify the operation by listing all contexts and ensuring the asterisk ("\*") is against the "k8s-test" context. 
- -```console -$ docker context ls -NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR -default Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://35.226.99.100 (default) swarm -docker-test unix:///var/run/docker.sock swarm -k8s-test * unix:///var/run/docker.sock https://35.226.99.100 (default) kubernetes -``` - -`docker` commands will now target endpoints defined in the "k8s-test" context. - -You can also set the current context using the `DOCKER_CONTEXT` environment variable. This overrides the context set with `docker context use`. - -Use the appropriate command below to set the context to `docker-test` using an environment variable. - -Windows PowerShell: - -```console -> $Env:DOCKER_CONTEXT=docker-test -``` - -Linux: - -```console -$ export DOCKER_CONTEXT=docker-test -``` - -Run a `docker context ls` to verify that the "docker-test" context is now the active context. - -You can also use the global `--context` flag to override the context specified by the `DOCKER_CONTEXT` environment variable. For example, the following will send the command to a context called "production". - -```console -$ docker --context production container ls -``` - -## Exporting and importing Docker contexts - -The `docker context` command makes it easy to export and import contexts on different machines with the Docker client installed. - -You can use the `docker context export` command to export an existing context to a file. This file can later be imported on another machine that has the `docker` client installed. - -By default, contexts will be exported as a _native Docker contexts_. You can export and import these using the `docker context` command. If the context you are exporting includes a Kubernetes endpoint, the Kubernetes part of the context will be included in the `export` and `import` operations. - -There is also an option to export just the Kubernetes part of a context. This will produce a native kubeconfig file that can be manually merged with an existing `~/.kube/config` file on another host that has `kubectl` installed. You cannot export just the Kubernetes portion of a context and then import it with `docker context import`. The only way to import the exported Kubernetes config is to manually merge it into an existing kubeconfig file. - -Let's look at exporting and importing a native Docker context. - -### Exporting and importing a native Docker context - -The following example exports an existing context called "docker-test". It will be written to a file called `docker-test.dockercontext`. - -```console -$ docker context export docker-test -Written file "docker-test.dockercontext" -``` - -Check the contents of the export file. - -```console -$ cat docker-test.dockercontext -meta.json0000644000000000000000000000022300000000000011023 0ustar0000000000000000{"Name":"docker-test","Metadata":{"StackOrchestrator":"swarm"},"Endpoints":{"docker":{"Host":"unix:///var/run/docker.sock","SkipTLSVerify":false}}}tls0000700000000000000000000000000000000000000007716 5ustar0000000000000000 -``` - -This file can be imported on another host using `docker context import`. The target host must have the Docker client installed. - -```console -$ docker context import docker-test docker-test.dockercontext -docker-test -Successfully imported context "docker-test" -``` - -You can verify that the context was imported with `docker context ls`. - -The format of the import command is `docker context import `. - -Now, let's look at exporting just the Kubernetes parts of a context. 
- -### Exporting a Kubernetes context - -You can export a Kubernetes context only if the context you are exporting has a Kubernetes endpoint configured. You cannot import a Kubernetes context using `docker context import`. - -These steps will use the `--kubeconfig` flag to export **only** the Kubernetes elements of the existing `k8s-test` context to a file called "k8s-test.kubeconfig". The `cat` command will then show that it's exported as a valid kubeconfig file. - -```console -$ docker context export k8s-test --kubeconfig -Written file "k8s-test.kubeconfig" -``` - -Verify that the exported file contains a valid kubectl config. - -```console -$ cat k8s-test.kubeconfig -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: - - server: https://35.226.99.100 - name: cluster -contexts: -- context: - cluster: cluster - namespace: default - user: authInfo - name: context -current-context: context -kind: Config -preferences: {} -users: -- name: authInfo - user: - auth-provider: - config: - cmd-args: config config-helper --format=json - cmd-path: /snap/google-cloud-sdk/77/bin/gcloud - expiry-key: '{.credential.token_expiry}' - token-key: '{.credential.access_token}' - name: gcp -``` - -You can merge this with an existing `~/.kube/config` file on another machine. - -## Updating a context - -You can use `docker context update` to update fields in an existing context. - -The following example updates the "Description" field in the existing `k8s-test` context. - -```console -$ docker context update k8s-test --description "Test Kubernetes cluster" -k8s-test -Successfully updated context "k8s-test" -``` +--- +title: Docker Context +description: Learn about Docker Context +keywords: engine, context, cli, kubernetes +--- + + +## Introduction + +This guide shows how _contexts_ make it easy for a **single Docker CLI** to manage multiple Swarm clusters, multiple Kubernetes clusters, and multiple individual Docker nodes. + +A single Docker CLI can have multiple contexts. Each context contains all of the endpoint and security information required to manage a different cluster or node. The `docker context` command makes it easy to configure these contexts and switch between them. + +As an example, a single Docker client on your company laptop might be configured with two contexts; **dev-k8s** and **prod-swarm**. **dev-k8s** contains the endpoint data and security credentials to configure and manage a Kubernetes cluster in a development environment. **prod-swarm** contains everything required to manage a Swarm cluster in a production environment. Once these contexts are configured, you can use the top-level `docker context use ` to easily switch between them. + +For information on using Docker Context to deploy your apps to the cloud, see [Deploying Docker containers on Azure](../../cloud/aci-integration.md) and [Deploying Docker containers on ECS](../../cloud/ecs-integration.md). + +## Prerequisites + +To follow the examples in this guide, you'll need: + +- A Docker client that supports the top-level `context` command + +Run `docker context` to verify that your Docker client supports contexts. + +You will also need one of the following: + +- Docker Swarm cluster +- Single-engine Docker node +- Kubernetes cluster + +## The anatomy of a context + +A context is a combination of several properties. These include: + +- Name +- Endpoint configuration +- TLS info +- Orchestrator + +The easiest way to see what a context looks like is to view the **default** context. 
+ +```console +$ docker context ls +NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR +default * Current... unix:///var/run/docker.sock swarm +``` + +This shows a single context called "default". It's configured to talk to a Swarm cluster through the local `/var/run/docker.sock` Unix socket. It has no Kubernetes endpoint configured. + +The asterisk in the `NAME` column indicates that this is the active context. This means all `docker` commands will be executed against the "default" context unless overridden with environment variables such as `DOCKER_HOST` and `DOCKER_CONTEXT`, or on the command-line with the `--context` and `--host` flags. + +Dig a bit deeper with `docker context inspect`. In this example, we're inspecting the context called `default`. + +```console +$ docker context inspect default +[ + { + "Name": "default", + "Metadata": { + "StackOrchestrator": "swarm" + }, + "Endpoints": { + "docker": { + "Host": "unix:///var/run/docker.sock", + "SkipTLSVerify": false + } + }, + "TLSMaterial": {}, + "Storage": { + "MetadataPath": "\u003cIN MEMORY\u003e", + "TLSPath": "\u003cIN MEMORY\u003e" + } + } +] +``` + +This context is using "swarm" as the orchestrator (`metadata.stackOrchestrator`). It is configured to talk to an endpoint exposed on a local Unix socket at `/var/run/docker.sock` (`Endpoints.docker.Host`), and requires TLS verification (`Endpoints.docker.SkipTLSVerify`). + +### Create a new context + +You can create new contexts with the `docker context create` command. + +The following example creates a new context called "docker-test" and specifies the following: + +- Default orchestrator = Swarm +- Issue commands to the local Unix socket `/var/run/docker.sock` + +```console +$ docker context create docker-test \ + --default-stack-orchestrator=swarm \ + --docker host=unix:///var/run/docker.sock + +Successfully created context "docker-test" +``` + +The new context is stored in a `meta.json` file below `~/.docker/contexts/`. Each new context you create gets its own `meta.json` stored in a dedicated sub-directory of `~/.docker/contexts/`. + +> **Note:** The default context behaves differently than manually created contexts. It does not have a `meta.json` configuration file, and it dynamically updates based on the current configuration. For example, if you switch your current Kubernetes config using `kubectl config use-context`, the default Docker context will dynamically update itself to the new Kubernetes endpoint. + +You can view the new context with `docker context ls` and `docker context inspect `. + +The following can be used to create a config with Kubernetes as the default orchestrator using the existing kubeconfig stored in `/home/ubuntu/.kube/config`. For this to work, you will need a valid kubeconfig file in `/home/ubuntu/.kube/config`. If your kubeconfig has more than one context, the current context (`kubectl config current-context`) will be used. + +```console +$ docker context create k8s-test \ + --default-stack-orchestrator=kubernetes \ + --kubernetes config-file=/home/ubuntu/.kube/config \ + --docker host=unix:///var/run/docker.sock + +Successfully created context "k8s-test" +``` + +You can view all contexts on the system with `docker context ls`. 
+ +```console +$ docker context ls +NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR +default * Current unix:///var/run/docker.sock https://35.226.99.100 (default) swarm +k8s-test unix:///var/run/docker.sock https://35.226.99.100 (default) kubernetes +docker-test unix:///var/run/docker.sock swarm +``` + +The current context is indicated with an asterisk ("\*"). + +## Use a different context + +You can use `docker context use` to quickly switch between contexts. + +The following command will switch the `docker` CLI to use the "k8s-test" context. + +```console +$ docker context use k8s-test + +k8s-test +Current context is now "k8s-test" +``` + +Verify the operation by listing all contexts and ensuring the asterisk ("\*") is against the "k8s-test" context. + +```console +$ docker context ls +NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR +default Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://35.226.99.100 (default) swarm +docker-test unix:///var/run/docker.sock swarm +k8s-test * unix:///var/run/docker.sock https://35.226.99.100 (default) kubernetes +``` + +`docker` commands will now target endpoints defined in the "k8s-test" context. + +You can also set the current context using the `DOCKER_CONTEXT` environment variable. This overrides the context set with `docker context use`. + +Use the appropriate command below to set the context to `docker-test` using an environment variable. + +Windows PowerShell: + +```console +> $Env:DOCKER_CONTEXT=docker-test +``` + +Linux: + +```console +$ export DOCKER_CONTEXT=docker-test +``` + +Run a `docker context ls` to verify that the "docker-test" context is now the active context. + +You can also use the global `--context` flag to override the context specified by the `DOCKER_CONTEXT` environment variable. For example, the following will send the command to a context called "production". + +```console +$ docker --context production container ls +``` + +## Exporting and importing Docker contexts + +The `docker context` command makes it easy to export and import contexts on different machines with the Docker client installed. + +You can use the `docker context export` command to export an existing context to a file. This file can later be imported on another machine that has the `docker` client installed. + +By default, contexts will be exported as a _native Docker contexts_. You can export and import these using the `docker context` command. If the context you are exporting includes a Kubernetes endpoint, the Kubernetes part of the context will be included in the `export` and `import` operations. + +There is also an option to export just the Kubernetes part of a context. This will produce a native kubeconfig file that can be manually merged with an existing `~/.kube/config` file on another host that has `kubectl` installed. You cannot export just the Kubernetes portion of a context and then import it with `docker context import`. The only way to import the exported Kubernetes config is to manually merge it into an existing kubeconfig file. + +Let's look at exporting and importing a native Docker context. + +### Exporting and importing a native Docker context + +The following example exports an existing context called "docker-test". It will be written to a file called `docker-test.dockercontext`. + +```console +$ docker context export docker-test +Written file "docker-test.dockercontext" +``` + +Check the contents of the export file. 
+ +```console +$ cat docker-test.dockercontext +meta.json0000644000000000000000000000022300000000000011023 0ustar0000000000000000{"Name":"docker-test","Metadata":{"StackOrchestrator":"swarm"},"Endpoints":{"docker":{"Host":"unix:///var/run/docker.sock","SkipTLSVerify":false}}}tls0000700000000000000000000000000000000000000007716 5ustar0000000000000000 +``` + +This file can be imported on another host using `docker context import`. The target host must have the Docker client installed. + +```console +$ docker context import docker-test docker-test.dockercontext +docker-test +Successfully imported context "docker-test" +``` + +You can verify that the context was imported with `docker context ls`. + +The format of the import command is `docker context import `. + +Now, let's look at exporting just the Kubernetes parts of a context. + +### Exporting a Kubernetes context + +You can export a Kubernetes context only if the context you are exporting has a Kubernetes endpoint configured. You cannot import a Kubernetes context using `docker context import`. + +These steps will use the `--kubeconfig` flag to export **only** the Kubernetes elements of the existing `k8s-test` context to a file called "k8s-test.kubeconfig". The `cat` command will then show that it's exported as a valid kubeconfig file. + +```console +$ docker context export k8s-test --kubeconfig +Written file "k8s-test.kubeconfig" +``` + +Verify that the exported file contains a valid kubectl config. + +```console +$ cat k8s-test.kubeconfig +apiVersion: v1 +clusters: +- cluster: + certificate-authority-data: + + server: https://35.226.99.100 + name: cluster +contexts: +- context: + cluster: cluster + namespace: default + user: authInfo + name: context +current-context: context +kind: Config +preferences: {} +users: +- name: authInfo + user: + auth-provider: + config: + cmd-args: config config-helper --format=json + cmd-path: /snap/google-cloud-sdk/77/bin/gcloud + expiry-key: '{.credential.token_expiry}' + token-key: '{.credential.access_token}' + name: gcp +``` + +You can merge this with an existing `~/.kube/config` file on another machine. + +## Updating a context + +You can use `docker context update` to update fields in an existing context. + +The following example updates the "Description" field in the existing `k8s-test` context. + +```console +$ docker context update k8s-test --description "Test Kubernetes cluster" +k8s-test +Successfully updated context "k8s-test" +``` diff --git a/engine/scan/index.md b/engine/scan/index.md index 0e7b3cb97d..3e1f0604d1 100644 --- a/engine/scan/index.md +++ b/engine/scan/index.md @@ -1,470 +1,470 @@ ---- -title: Vulnerability scanning for Docker local images -description: Vulnerability scanning for Docker local images -keywords: Docker, scan, Snyk, images, local, CVE, vulnerability, security -toc_min: 1 -toc_max: 2 ---- - -{% include sign-up-cta.html - body="Did you know that you can now get 10 free scans per month? Sign in to Docker to start scanning your images for vulnerabilities." - header-text="Scan your images for free" - target-url="https://www.docker.com/pricing?utm_source=docker&utm_medium=webreferral&utm_campaign=docs_driven_upgrade_scan" -%} - -Looking to speed up your development cycles? Quickly detect and learn how to remediate CVEs in your images by running `docker scan IMAGE_NAME`. Check out [How to scan images](#how-to-scan-images) for details. 
- -Vulnerability scanning for Docker local images allows developers and development teams to review the security state of the container images and take actions to fix issues identified during the scan, resulting in more secure deployments. Docker Scan runs on Snyk engine, providing users with visibility into the security posture of their local Dockerfiles and local images. - -Users trigger vulnerability scans through the CLI, and use the CLI to view the -scan results. The scan results contain a list of Common Vulnerabilities and -Exposures (CVEs), the sources, such as OS packages and libraries, versions in -which they were introduced, and a recommended fixed version (if available) to -remediate the CVEs discovered. - -> **Log4j 2 CVE-2021-44228** -> -> Versions of `docker Scan` earlier than `v0.11.0` are not able to detect [Log4j 2 -> CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){: -> target="_blank" rel="noopener" class="_"}. You must update your Docker -> Desktop installation to 4.3.1 or higher to fix this issue. For more -> information, see [Scan images for Log4j 2 CVE](#scan-images-for-log4j-2-cve). -{: .important} - -For information about the system requirements to run vulnerability scanning, see [Prerequisites](#prerequisites). - -This page contains information about the `docker scan` CLI command. For -information about automatically scanning Docker images through Docker Hub, see -[Hub Vulnerability Scanning](/docker-hub/vulnerability-scanning/). - -## Scan images for Log4j 2 CVE - -Docker Scan versions earlier than `v0.11.0` do not detect [Log4j 2 -CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){: -target="_blank" rel="noopener" class="_"} when you scan your -images for vulnerabilities. You must update your Docker installation to the -latest version to fix this issue. - -If you are using the `docker scan` plugin shipped -with Docker Desktop, update Docker Desktop to version 4.3.1 or -higher. See the release notes for [Mac](../../desktop/mac/release-notes/index.md) and -[Windows](../../desktop/windows/release-notes/index.md) for download information. - -If you are using Linux, run the following command to manually install the latest -version of `docker scan`: - -On `.deb` based distros, such as Ubuntu and Debian: - -```console -$ apt-get update && apt-get install docker-scan-plugin -``` - -On rpm-based distros, such as CentOS or Fedora: - -```console -$ yum install docker-scan-plugin -``` - -Alternatively, you can manually download the `docker scan` binaries from the [Docker Scan](https://github.com/docker/scan-cli-plugin/releases/tag/v0.11.0){: -target="_blank" rel="noopener" class="_"} GitHub repository and -[install](https://github.com/docker/scan-cli-plugin){: -target="_blank" rel="noopener" class="_"} in the plugins directory. - -### Verify the `docker scan` version - -After upgrading `docker scan`, verify you are running the latest version by -running the following command: - -```console -$ docker scan --accept-license --version -Version: v0.12.0 -Git commit: 1074dd0 -Provider: Snyk (1.790.0 (standalone)) -``` - -If your code output contains `ORGAPACHELOGGINGLOG4J`, it is -likely that your code is affected by the Log4j 2 CVE-2021-44228 vulnerability. 
When you run the updated version of `docker scan`, you should also see a message -in the output log similar to: - -```console -Upgrade org.apache.logging.log4j:log4j-core@2.14.0 to org.apache.logging.log4j:log4j-core@2.15.0 to fix -✗ Arbitrary Code Execution (new) [Critical Severity][https://snyk.io/vuln/SNYK-JAVA-ORGAPACHELOGGINGLOG4J-2314720] in org.apache.logging.log4j:log4j-core@2.14.0 -introduced by org.apache.logging.log4j:log4j-core@2.14.0 -``` - -For more information, read our blog post [Apache Log4j 2 -CVE-2021-44228](https://www.docker.com/blog/apache-log4j-2-cve-2021-44228/){: -target="_blank" rel="noopener" class="_"}. - -## How to scan images - -The `docker scan` command allows you to scan existing Docker images using the image name or ID. For example, run the following command to scan the hello-world image: - -```console -$ docker scan hello-world - -Testing hello-world... - -Organization: docker-desktop-test -Package manager: linux -Project name: docker-image|hello-world -Docker image: hello-world -Licenses: enabled - -✓ Tested 0 dependencies for known issues, no vulnerable paths found. - -Note that we do not currently have vulnerability data for your image. -``` - -### Get a detailed scan report - -You can get a detailed scan report about a Docker image by providing the Dockerfile used to create the image. The syntax is `docker scan --file PATH_TO_DOCKERFILE DOCKER_IMAGE`. - -For example, if you apply the option to the `docker-scan` test image, it displays the following result: - -```console -$ docker scan --file Dockerfile docker-scan:e2e -Testing docker-scan:e2e -... -✗ High severity vulnerability found in perl - Description: Integer Overflow or Wraparound - Info: https://snyk.io/vuln/SNYK-DEBIAN10-PERL-570802 - Introduced through: git@1:2.20.1-2+deb10u3, meta-common-packages@meta - From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6 - From: git@1:2.20.1-2+deb10u3 > liberror-perl@0.17027-2 > perl@5.28.1-6 - From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6 > perl/perl-modules-5.28@5.28.1-6 - and 3 more... - Introduced by your base image (golang:1.14.6) - -Organization: docker-desktop-test -Package manager: deb -Target file: Dockerfile -Project name: docker-image|99138c65ebc7 -Docker image: 99138c65ebc7 -Base image: golang:1.14.6 -Licenses: enabled - -Tested 200 dependencies for known issues, found 157 issues. - -According to our scan, you are currently using the most secure version of the selected base image -``` - -### Excluding the base image - -When using docker scan with the `--file` flag, you can also add the `--exclude-base` tag. This excludes the base image (specified in the Dockerfile using the `FROM` directive) vulnerabilities from your report. For example: - -```console -$ docker scan --file Dockerfile --exclude-base docker-scan:e2e -Testing docker-scan:e2e -... -✗ Medium severity vulnerability found in libidn2/libidn2-0 - Description: Improper Input Validation - Info: https://snyk.io/vuln/SNYK-DEBIAN10-LIBIDN2-474100 - Introduced through: iputils/iputils-ping@3:20180629-2+deb10u1, wget@1.20.1-1.1, curl@7.64.0-4+deb10u1, git@1:2.20.1-2+deb10u3 - From: iputils/iputils-ping@3:20180629-2+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1 - From: wget@1.20.1-1.1 > libidn2/libidn2-0@2.0.5-1+deb10u1 - From: curl@7.64.0-4+deb10u1 > curl/libcurl4@7.64.0-4+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1 - and 3 more... 
- Introduced in your Dockerfile by 'RUN apk add -U --no-cache wget tar' - - - -Organization: docker-desktop-test -Package manager: deb -Target file: Dockerfile -Project name: docker-image|99138c65ebc7 -Docker image: 99138c65ebc7 -Base image: golang:1.14.6 -Licenses: enabled - -Tested 200 dependencies for known issues, found 16 issues. -``` - -### Viewing the JSON output - -You can also display the scan result as a JSON output by adding the `--json` flag to the command. For example: - -```console -$ docker scan --json hello-world -{ - "vulnerabilities": [], - "ok": true, - "dependencyCount": 0, - "org": "docker-desktop-test", - "policy": "# Snyk (https://snyk.io) policy file, patches or ignores known vulnerabilities.\nversion: v1.19.0\nignore: {}\npatch: {}\n", - "isPrivate": true, - "licensesPolicy": { - "severities": {}, - "orgLicenseRules": { - "AGPL-1.0": { - "licenseType": "AGPL-1.0", - "severity": "high", - "instructions": "" - }, - ... - "SimPL-2.0": { - "licenseType": "SimPL-2.0", - "severity": "high", - "instructions": "" - } - } - }, - "packageManager": "linux", - "ignoreSettings": null, - "docker": { - "baseImageRemediation": { - "code": "SCRATCH_BASE_IMAGE", - "advice": [ - { - "message": "Note that we do not currently have vulnerability data for your image.", - "bold": true, - "color": "yellow" - } - ] - }, - "binariesVulns": { - "issuesData": {}, - "affectedPkgs": {} - } - }, - "summary": "No known vulnerabilities", - "filesystemPolicy": false, - "uniqueCount": 0, - "projectName": "docker-image|hello-world", - "path": "hello-world" -} -``` - -In addition to the `--json` flag, you can also use the `--group-issues` flag to display a vulnerability only once in the scan report: - -```console -$ docker scan --json --group-issues docker-scan:e2e -{ - { - "title": "Improper Check for Dropped Privileges", - ... - "packageName": "bash", - "language": "linux", - "packageManager": "debian:10", - "description": "## Overview\nAn issue was discovered in disable_priv_mode in shell.c in GNU Bash through 5.0 patch 11. By default, if Bash is run with its effective UID not equal to its real UID, it will drop privileges by setting its effective UID to its real UID. However, it does so incorrectly. On Linux and other systems that support \"saved UID\" functionality, the saved UID is not dropped. An attacker with command execution in the shell can use \"enable -f\" for runtime loading of a new builtin, which can be a shared object that calls setuid() and therefore regains privileges. However, binaries running with an effective UID of 0 are unaffected.\n\n## References\n- [CONFIRM](https://security.netapp.com/advisory/ntap-20200430-0003/)\n- [Debian Security Tracker](https://security-tracker.debian.org/tracker/CVE-2019-18276)\n- [GitHub Commit](https://github.com/bminor/bash/commit/951bdaad7a18cc0dc1036bba86b18b90874d39ff)\n- [MISC](http://packetstormsecurity.com/files/155498/Bash-5.0-Patch-11-Privilege-Escalation.html)\n- [MISC](https://www.youtube.com/watch?v=-wGtxJ8opa8)\n- [Ubuntu CVE Tracker](http://people.ubuntu.com/~ubuntu-security/cve/CVE-2019-18276)\n", - "identifiers": { - "ALTERNATIVE": [], - "CVE": [ - "CVE-2019-18276" - ], - "CWE": [ - "CWE-273" - ] - }, - "severity": "low", - "severityWithCritical": "low", - "cvssScore": 7.8, - "CVSSv3": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:F", - ... - "from": [ - "docker-image|docker-scan@e2e", - "bash@5.0-4" - ], - "upgradePath": [], - "isUpgradable": false, - "isPatchable": false, - "name": "bash", - "version": "5.0-4" - }, - ... 
- "summary": "880 vulnerable dependency paths", - "filesystemPolicy": false, - "filtered": { - "ignore": [], - "patch": [] - }, - "uniqueCount": 158, - "projectName": "docker-image|docker-scan", - "platform": "linux/amd64", - "path": "docker-scan:e2e" -} -``` - -You can find all the sources of the vulnerability in the `from` section. - -### Checking the dependency tree - -To view the dependency tree of your image, use the --dependency-tree flag. This displays all the dependencies before the scan result. For example: - -```console -$ docker scan --dependency-tree debian:buster - -$ docker-image|99138c65ebc7 @ latest - ├─ ca-certificates @ 20200601~deb10u1 - │ └─ openssl @ 1.1.1d-0+deb10u3 - │ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3 - ├─ curl @ 7.64.0-4+deb10u1 - │ └─ curl/libcurl4 @ 7.64.0-4+deb10u1 - │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3 - │ ├─ krb5/libgssapi-krb5-2 @ 1.17-3 - │ │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3 - │ │ ├─ krb5/libk5crypto3 @ 1.17-3 - │ │ │ └─ krb5/libkrb5support0 @ 1.17-3 - │ │ ├─ krb5/libkrb5-3 @ 1.17-3 - │ │ │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3 - │ │ │ ├─ krb5/libk5crypto3 @ 1.17-3 - │ │ │ ├─ krb5/libkrb5support0 @ 1.17-3 - │ │ │ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3 - │ │ └─ krb5/libkrb5support0 @ 1.17-3 - │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1 - │ │ └─ libunistring/libunistring2 @ 0.9.10-1 - │ ├─ krb5/libk5crypto3 @ 1.17-3 - │ ├─ krb5/libkrb5-3 @ 1.17-3 - │ ├─ openldap/libldap-2.4-2 @ 2.4.47+dfsg-3+deb10u2 - │ │ ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4 - │ │ │ ├─ nettle/libhogweed4 @ 3.4.1-1 - │ │ │ │ └─ nettle/libnettle6 @ 3.4.1-1 - │ │ │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1 - │ │ │ ├─ nettle/libnettle6 @ 3.4.1-1 - │ │ │ ├─ p11-kit/libp11-kit0 @ 0.23.15-2 - │ │ │ │ └─ libffi/libffi6 @ 3.2.1-9 - │ │ │ ├─ libtasn1-6 @ 4.13-3 - │ │ │ └─ libunistring/libunistring2 @ 0.9.10-1 - │ │ ├─ cyrus-sasl2/libsasl2-2 @ 2.1.27+dfsg-1+deb10u1 - │ │ │ └─ cyrus-sasl2/libsasl2-modules-db @ 2.1.27+dfsg-1+deb10u1 - │ │ │ └─ db5.3/libdb5.3 @ 5.3.28+dfsg1-0.5 - │ │ └─ openldap/libldap-common @ 2.4.47+dfsg-3+deb10u2 - │ ├─ nghttp2/libnghttp2-14 @ 1.36.0-2+deb10u1 - │ ├─ libpsl/libpsl5 @ 0.20.2-2 - │ │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1 - │ │ └─ libunistring/libunistring2 @ 0.9.10-1 - │ ├─ rtmpdump/librtmp1 @ 2.4+20151223.gitfa8646d.1-2 - │ │ ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4 - │ │ ├─ nettle/libhogweed4 @ 3.4.1-1 - │ │ └─ nettle/libnettle6 @ 3.4.1-1 - │ ├─ libssh2/libssh2-1 @ 1.8.0-2.1 - │ │ └─ libgcrypt20 @ 1.8.4-5 - │ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3 - ├─ gnupg2/dirmngr @ 2.2.12-1+deb10u1 - ... - -Organization: docker-desktop-test -Package manager: deb -Project name: docker-image|99138c65ebc7 -Docker image: 99138c65ebc7 -Licenses: enabled - -Tested 200 dependencies for known issues, found 157 issues. - -For more free scans that keep your images secure, sign up to Snyk at https://dockr.ly/3ePqVcp. -``` - -For more information about the vulnerability data, see [Docker Vulnerability Scanning CLI Cheat Sheet](https://goto.docker.com/rs/929-FJL-178/images/cheat-sheet-docker-desktop-vulnerability-scanning-CLI.pdf){: target="_blank" rel="noopener" class="_"}. - -### Limiting the level of vulnerabilities displayed - -Docker scan allows you to choose the level of vulnerabilities displayed in your scan report using the `--severity` flag. -You can set the severity flag to `low`, `medium`, or` high` depending on the level of vulnerabilities you’d like to see in your report. 
-For example, if you set the severity level as `medium`, the scan report displays all vulnerabilities that are classified as medium and high. - - ```console -$ docker scan --severity=medium docker-scan:e2e -./bin/docker-scan_darwin_amd64 scan --severity=medium docker-scan:e2e - -Testing docker-scan:e2e... - -✗ Medium severity vulnerability found in sqlite3/libsqlite3-0 - Description: Divide By Zero - Info: https://snyk.io/vuln/SNYK-DEBIAN10-SQLITE3-466337 - Introduced through: gnupg2/gnupg@2.2.12-1+deb10u1, subversion@1.10.4-1+deb10u1, mercurial@4.8.2-1+deb10u1 - From: gnupg2/gnupg@2.2.12-1+deb10u1 > gnupg2/gpg@2.2.12-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3 - From: subversion@1.10.4-1+deb10u1 > subversion/libsvn1@1.10.4-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3 - From: mercurial@4.8.2-1+deb10u1 > python-defaults/python@2.7.16-1 > python2.7@2.7.16-2+deb10u1 > python2.7/libpython2.7-stdlib@2.7.16-2+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3 - -✗ Medium severity vulnerability found in sqlite3/libsqlite3-0 - Description: Uncontrolled Recursion -... -✗ High severity vulnerability found in binutils/binutils-common - Description: Missing Release of Resource after Effective Lifetime - Info: https://snyk.io/vuln/SNYK-DEBIAN10-BINUTILS-403318 - Introduced through: gcc-defaults/g++@4:8.3.0-1 - From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-common@2.31.1-16 - From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/libbinutils@2.31.1-16 > binutils/binutils-common@2.31.1-16 - From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-x86-64-linux-gnu@2.31.1-16 > binutils/binutils-common@2.31.1-16 - and 4 more... - -Organization: docker-desktop-test -Package manager: deb -Project name: docker-image|docker-scan -Docker image: docker-scan:e2e -Platform: linux/amd64 -Licenses: enabled - -Tested 200 dependencies for known issues, found 37 issues. -``` - -## Provider authentication - -If you have an existing Snyk account, you can directly use your Snyk [API token](https://app.snyk.io/account){: target="_blank" rel="noopener" class="_"}: - -```console -$ docker scan --login --token SNYK_AUTH_TOKEN - -Your account has been authenticated. Snyk is now ready to be used. -``` - -If you use the `--login` flag without any token, you will be redirected to the Snyk website to login. - -## Prerequisites - -To run vulnerability scanning on your Docker images, you must meet the following requirements: - -1. Download and install the latest version of Docker Desktop. - - - [Download for Mac with Intel chip](https://desktop.docker.com/mac/main/amd64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-amd64) - - [Download for Mac with Apple chip](https://desktop.docker.com/mac/main/arm64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-arm64) - - [Download for Windows](https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe) - -2. Sign into [Docker Hub](https://hub.docker.com){: target="_blank" rel="noopener" class="_"}. - -3. From the Docker Desktop menu, select **Sign in/ Create Docker ID**. Alternatively, open a terminal and run the command `docker login`. - -4. 
(Optional) You can create a [Snyk account](https://dockr.ly/3ePqVcp){: target="_blank" rel="noopener" class="_"} for scans, or use the additional monthly free scans provided by Snyk with your Docker Hub account. - -Check your installation by running `docker scan --version`, it should print the current version of docker scan and the Snyk engine version. For example: - -```console -$ docker scan --version -Version: v0.5.0 -Git commit: 5a09266 -Provider: Snyk (1.432.0) -``` - -> **Note:** -> -> Docker Scan uses the Snyk binary installed in your environment by default. If -this is not available, it uses the Snyk binary embedded in Docker Desktop. -> The minimum version required for Snyk is `1.385.0`. - -## Supported options - -The high-level `docker scan` command scans local images using the image name or the image ID. It supports the following options: - -| Option | Description | -|:------------------------------------------------------------------ :------------------------------------------------| -| `--accept-license` | Accept the license agreement of the third-party scanning provider | -| `--dependency-tree` | Display the dependency tree of the image along with scan results | -| `--exclude-base` | Exclude the base image during scanning. This option requires the --file option to be set | -| `-f`, `--file string` | Specify the location of the Dockerfile associated with the image. This option displays a detailed scan result | -| `--json` | Display the result of the scan in JSON format| -| `--login` | Log into Snyk using an optional token (using the flag --token), or by using a web-based token | -| `--reject-license` | Reject the license agreement of the third-party scanning provider | -| `--severity string` | Only report vulnerabilities of provided level or higher (low, medium, high) | -| `--token string` | Use the authentication token to log into the third-party scanning provider | -| `--version` | Display the Docker Scan plugin version | - -## Known issues - -**WSL 2** - -- The Vulnerability scanning feature doesn’t work with Alpine distributions. -- If you are using Debian and OpenSUSE distributions, the login process only works with the `--token` flag, you won’t be redirected to the Snyk website for authentication. - -## Feedback - -Your feedback is very important to us. Let us know your feedback by creating an issue in the [scan-cli-plugin](https://github.com/docker/scan-cli-plugin/issues/new){: target="_blank" rel="noopener" class="_"} GitHub repository. +--- +title: Vulnerability scanning for Docker local images +description: Vulnerability scanning for Docker local images +keywords: Docker, scan, Snyk, images, local, CVE, vulnerability, security +toc_min: 1 +toc_max: 2 +--- + +{% include sign-up-cta.html + body="Did you know that you can now get 10 free scans per month? Sign in to Docker to start scanning your images for vulnerabilities." + header-text="Scan your images for free" + target-url="https://www.docker.com/pricing?utm_source=docker&utm_medium=webreferral&utm_campaign=docs_driven_upgrade_scan" +%} + +Looking to speed up your development cycles? Quickly detect and learn how to remediate CVEs in your images by running `docker scan IMAGE_NAME`. Check out [How to scan images](#how-to-scan-images) for details. + +Vulnerability scanning for Docker local images allows developers and development teams to review the security state of the container images and take actions to fix issues identified during the scan, resulting in more secure deployments. 
Docker Scan runs on Snyk engine, providing users with visibility into the security posture of their local Dockerfiles and local images. + +Users trigger vulnerability scans through the CLI, and use the CLI to view the +scan results. The scan results contain a list of Common Vulnerabilities and +Exposures (CVEs), the sources, such as OS packages and libraries, versions in +which they were introduced, and a recommended fixed version (if available) to +remediate the CVEs discovered. + +> **Log4j 2 CVE-2021-44228** +> +> Versions of `docker Scan` earlier than `v0.11.0` are not able to detect [Log4j 2 +> CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){: +> target="_blank" rel="noopener" class="_"}. You must update your Docker +> Desktop installation to 4.3.1 or higher to fix this issue. For more +> information, see [Scan images for Log4j 2 CVE](#scan-images-for-log4j-2-cve). +{: .important} + +For information about the system requirements to run vulnerability scanning, see [Prerequisites](#prerequisites). + +This page contains information about the `docker scan` CLI command. For +information about automatically scanning Docker images through Docker Hub, see +[Hub Vulnerability Scanning](/docker-hub/vulnerability-scanning/). + +## Scan images for Log4j 2 CVE + +Docker Scan versions earlier than `v0.11.0` do not detect [Log4j 2 +CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){: +target="_blank" rel="noopener" class="_"} when you scan your +images for vulnerabilities. You must update your Docker installation to the +latest version to fix this issue. + +If you are using the `docker scan` plugin shipped +with Docker Desktop, update Docker Desktop to version 4.3.1 or +higher. See the release notes for [Mac](../../desktop/mac/release-notes/index.md) and +[Windows](../../desktop/windows/release-notes/index.md) for download information. + +If you are using Linux, run the following command to manually install the latest +version of `docker scan`: + +On `.deb` based distros, such as Ubuntu and Debian: + +```console +$ apt-get update && apt-get install docker-scan-plugin +``` + +On rpm-based distros, such as CentOS or Fedora: + +```console +$ yum install docker-scan-plugin +``` + +Alternatively, you can manually download the `docker scan` binaries from the [Docker Scan](https://github.com/docker/scan-cli-plugin/releases/tag/v0.11.0){: +target="_blank" rel="noopener" class="_"} GitHub repository and +[install](https://github.com/docker/scan-cli-plugin){: +target="_blank" rel="noopener" class="_"} in the plugins directory. + +### Verify the `docker scan` version + +After upgrading `docker scan`, verify you are running the latest version by +running the following command: + +```console +$ docker scan --accept-license --version +Version: v0.12.0 +Git commit: 1074dd0 +Provider: Snyk (1.790.0 (standalone)) +``` + +If your code output contains `ORGAPACHELOGGINGLOG4J`, it is +likely that your code is affected by the Log4j 2 CVE-2021-44228 vulnerability. 
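+
+If you just want a quick signal on whether a locally built image pulls in the affected Log4j 2 package, one option is to search the scan output for that identifier. This is a minimal sketch; the image name `myapp:latest` and the use of `grep` are illustrative assumptions:
+
+```console
+$ docker scan --json myapp:latest | grep -c "ORGAPACHELOGGINGLOG4J"
+```
+
+A count greater than zero means at least one line of the report references the affected package.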
When you run the updated version of `docker scan`, you should also see a message +in the output log similar to: + +```console +Upgrade org.apache.logging.log4j:log4j-core@2.14.0 to org.apache.logging.log4j:log4j-core@2.15.0 to fix +✗ Arbitrary Code Execution (new) [Critical Severity][https://snyk.io/vuln/SNYK-JAVA-ORGAPACHELOGGINGLOG4J-2314720] in org.apache.logging.log4j:log4j-core@2.14.0 +introduced by org.apache.logging.log4j:log4j-core@2.14.0 +``` + +For more information, read our blog post [Apache Log4j 2 +CVE-2021-44228](https://www.docker.com/blog/apache-log4j-2-cve-2021-44228/){: +target="_blank" rel="noopener" class="_"}. + +## How to scan images + +The `docker scan` command allows you to scan existing Docker images using the image name or ID. For example, run the following command to scan the hello-world image: + +```console +$ docker scan hello-world + +Testing hello-world... + +Organization: docker-desktop-test +Package manager: linux +Project name: docker-image|hello-world +Docker image: hello-world +Licenses: enabled + +✓ Tested 0 dependencies for known issues, no vulnerable paths found. + +Note that we do not currently have vulnerability data for your image. +``` + +### Get a detailed scan report + +You can get a detailed scan report about a Docker image by providing the Dockerfile used to create the image. The syntax is `docker scan --file PATH_TO_DOCKERFILE DOCKER_IMAGE`. + +For example, if you apply the option to the `docker-scan` test image, it displays the following result: + +```console +$ docker scan --file Dockerfile docker-scan:e2e +Testing docker-scan:e2e +... +✗ High severity vulnerability found in perl + Description: Integer Overflow or Wraparound + Info: https://snyk.io/vuln/SNYK-DEBIAN10-PERL-570802 + Introduced through: git@1:2.20.1-2+deb10u3, meta-common-packages@meta + From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6 + From: git@1:2.20.1-2+deb10u3 > liberror-perl@0.17027-2 > perl@5.28.1-6 + From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6 > perl/perl-modules-5.28@5.28.1-6 + and 3 more... + Introduced by your base image (golang:1.14.6) + +Organization: docker-desktop-test +Package manager: deb +Target file: Dockerfile +Project name: docker-image|99138c65ebc7 +Docker image: 99138c65ebc7 +Base image: golang:1.14.6 +Licenses: enabled + +Tested 200 dependencies for known issues, found 157 issues. + +According to our scan, you are currently using the most secure version of the selected base image +``` + +### Excluding the base image + +When using docker scan with the `--file` flag, you can also add the `--exclude-base` tag. This excludes the base image (specified in the Dockerfile using the `FROM` directive) vulnerabilities from your report. For example: + +```console +$ docker scan --file Dockerfile --exclude-base docker-scan:e2e +Testing docker-scan:e2e +... +✗ Medium severity vulnerability found in libidn2/libidn2-0 + Description: Improper Input Validation + Info: https://snyk.io/vuln/SNYK-DEBIAN10-LIBIDN2-474100 + Introduced through: iputils/iputils-ping@3:20180629-2+deb10u1, wget@1.20.1-1.1, curl@7.64.0-4+deb10u1, git@1:2.20.1-2+deb10u3 + From: iputils/iputils-ping@3:20180629-2+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1 + From: wget@1.20.1-1.1 > libidn2/libidn2-0@2.0.5-1+deb10u1 + From: curl@7.64.0-4+deb10u1 > curl/libcurl4@7.64.0-4+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1 + and 3 more... 
+ Introduced in your Dockerfile by 'RUN apk add -U --no-cache wget tar' + + + +Organization: docker-desktop-test +Package manager: deb +Target file: Dockerfile +Project name: docker-image|99138c65ebc7 +Docker image: 99138c65ebc7 +Base image: golang:1.14.6 +Licenses: enabled + +Tested 200 dependencies for known issues, found 16 issues. +``` + +### Viewing the JSON output + +You can also display the scan result as a JSON output by adding the `--json` flag to the command. For example: + +```console +$ docker scan --json hello-world +{ + "vulnerabilities": [], + "ok": true, + "dependencyCount": 0, + "org": "docker-desktop-test", + "policy": "# Snyk (https://snyk.io) policy file, patches or ignores known vulnerabilities.\nversion: v1.19.0\nignore: {}\npatch: {}\n", + "isPrivate": true, + "licensesPolicy": { + "severities": {}, + "orgLicenseRules": { + "AGPL-1.0": { + "licenseType": "AGPL-1.0", + "severity": "high", + "instructions": "" + }, + ... + "SimPL-2.0": { + "licenseType": "SimPL-2.0", + "severity": "high", + "instructions": "" + } + } + }, + "packageManager": "linux", + "ignoreSettings": null, + "docker": { + "baseImageRemediation": { + "code": "SCRATCH_BASE_IMAGE", + "advice": [ + { + "message": "Note that we do not currently have vulnerability data for your image.", + "bold": true, + "color": "yellow" + } + ] + }, + "binariesVulns": { + "issuesData": {}, + "affectedPkgs": {} + } + }, + "summary": "No known vulnerabilities", + "filesystemPolicy": false, + "uniqueCount": 0, + "projectName": "docker-image|hello-world", + "path": "hello-world" +} +``` + +In addition to the `--json` flag, you can also use the `--group-issues` flag to display a vulnerability only once in the scan report: + +```console +$ docker scan --json --group-issues docker-scan:e2e +{ + { + "title": "Improper Check for Dropped Privileges", + ... + "packageName": "bash", + "language": "linux", + "packageManager": "debian:10", + "description": "## Overview\nAn issue was discovered in disable_priv_mode in shell.c in GNU Bash through 5.0 patch 11. By default, if Bash is run with its effective UID not equal to its real UID, it will drop privileges by setting its effective UID to its real UID. However, it does so incorrectly. On Linux and other systems that support \"saved UID\" functionality, the saved UID is not dropped. An attacker with command execution in the shell can use \"enable -f\" for runtime loading of a new builtin, which can be a shared object that calls setuid() and therefore regains privileges. However, binaries running with an effective UID of 0 are unaffected.\n\n## References\n- [CONFIRM](https://security.netapp.com/advisory/ntap-20200430-0003/)\n- [Debian Security Tracker](https://security-tracker.debian.org/tracker/CVE-2019-18276)\n- [GitHub Commit](https://github.com/bminor/bash/commit/951bdaad7a18cc0dc1036bba86b18b90874d39ff)\n- [MISC](http://packetstormsecurity.com/files/155498/Bash-5.0-Patch-11-Privilege-Escalation.html)\n- [MISC](https://www.youtube.com/watch?v=-wGtxJ8opa8)\n- [Ubuntu CVE Tracker](http://people.ubuntu.com/~ubuntu-security/cve/CVE-2019-18276)\n", + "identifiers": { + "ALTERNATIVE": [], + "CVE": [ + "CVE-2019-18276" + ], + "CWE": [ + "CWE-273" + ] + }, + "severity": "low", + "severityWithCritical": "low", + "cvssScore": 7.8, + "CVSSv3": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:F", + ... + "from": [ + "docker-image|docker-scan@e2e", + "bash@5.0-4" + ], + "upgradePath": [], + "isUpgradable": false, + "isPatchable": false, + "name": "bash", + "version": "5.0-4" + }, + ... 
+ "summary": "880 vulnerable dependency paths", + "filesystemPolicy": false, + "filtered": { + "ignore": [], + "patch": [] + }, + "uniqueCount": 158, + "projectName": "docker-image|docker-scan", + "platform": "linux/amd64", + "path": "docker-scan:e2e" +} +``` + +You can find all the sources of the vulnerability in the `from` section. + +### Checking the dependency tree + +To view the dependency tree of your image, use the --dependency-tree flag. This displays all the dependencies before the scan result. For example: + +```console +$ docker scan --dependency-tree debian:buster + +$ docker-image|99138c65ebc7 @ latest + ├─ ca-certificates @ 20200601~deb10u1 + │ └─ openssl @ 1.1.1d-0+deb10u3 + │ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3 + ├─ curl @ 7.64.0-4+deb10u1 + │ └─ curl/libcurl4 @ 7.64.0-4+deb10u1 + │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3 + │ ├─ krb5/libgssapi-krb5-2 @ 1.17-3 + │ │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3 + │ │ ├─ krb5/libk5crypto3 @ 1.17-3 + │ │ │ └─ krb5/libkrb5support0 @ 1.17-3 + │ │ ├─ krb5/libkrb5-3 @ 1.17-3 + │ │ │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3 + │ │ │ ├─ krb5/libk5crypto3 @ 1.17-3 + │ │ │ ├─ krb5/libkrb5support0 @ 1.17-3 + │ │ │ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3 + │ │ └─ krb5/libkrb5support0 @ 1.17-3 + │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1 + │ │ └─ libunistring/libunistring2 @ 0.9.10-1 + │ ├─ krb5/libk5crypto3 @ 1.17-3 + │ ├─ krb5/libkrb5-3 @ 1.17-3 + │ ├─ openldap/libldap-2.4-2 @ 2.4.47+dfsg-3+deb10u2 + │ │ ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4 + │ │ │ ├─ nettle/libhogweed4 @ 3.4.1-1 + │ │ │ │ └─ nettle/libnettle6 @ 3.4.1-1 + │ │ │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1 + │ │ │ ├─ nettle/libnettle6 @ 3.4.1-1 + │ │ │ ├─ p11-kit/libp11-kit0 @ 0.23.15-2 + │ │ │ │ └─ libffi/libffi6 @ 3.2.1-9 + │ │ │ ├─ libtasn1-6 @ 4.13-3 + │ │ │ └─ libunistring/libunistring2 @ 0.9.10-1 + │ │ ├─ cyrus-sasl2/libsasl2-2 @ 2.1.27+dfsg-1+deb10u1 + │ │ │ └─ cyrus-sasl2/libsasl2-modules-db @ 2.1.27+dfsg-1+deb10u1 + │ │ │ └─ db5.3/libdb5.3 @ 5.3.28+dfsg1-0.5 + │ │ └─ openldap/libldap-common @ 2.4.47+dfsg-3+deb10u2 + │ ├─ nghttp2/libnghttp2-14 @ 1.36.0-2+deb10u1 + │ ├─ libpsl/libpsl5 @ 0.20.2-2 + │ │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1 + │ │ └─ libunistring/libunistring2 @ 0.9.10-1 + │ ├─ rtmpdump/librtmp1 @ 2.4+20151223.gitfa8646d.1-2 + │ │ ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4 + │ │ ├─ nettle/libhogweed4 @ 3.4.1-1 + │ │ └─ nettle/libnettle6 @ 3.4.1-1 + │ ├─ libssh2/libssh2-1 @ 1.8.0-2.1 + │ │ └─ libgcrypt20 @ 1.8.4-5 + │ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3 + ├─ gnupg2/dirmngr @ 2.2.12-1+deb10u1 + ... + +Organization: docker-desktop-test +Package manager: deb +Project name: docker-image|99138c65ebc7 +Docker image: 99138c65ebc7 +Licenses: enabled + +Tested 200 dependencies for known issues, found 157 issues. + +For more free scans that keep your images secure, sign up to Snyk at https://dockr.ly/3ePqVcp. +``` + +For more information about the vulnerability data, see [Docker Vulnerability Scanning CLI Cheat Sheet](https://goto.docker.com/rs/929-FJL-178/images/cheat-sheet-docker-desktop-vulnerability-scanning-CLI.pdf){: target="_blank" rel="noopener" class="_"}. + +### Limiting the level of vulnerabilities displayed + +Docker scan allows you to choose the level of vulnerabilities displayed in your scan report using the `--severity` flag. +You can set the severity flag to `low`, `medium`, or` high` depending on the level of vulnerabilities you’d like to see in your report. 
+For example, if you set the severity level as `medium`, the scan report displays all vulnerabilities that are classified as medium and high. + + ```console +$ docker scan --severity=medium docker-scan:e2e +./bin/docker-scan_darwin_amd64 scan --severity=medium docker-scan:e2e + +Testing docker-scan:e2e... + +✗ Medium severity vulnerability found in sqlite3/libsqlite3-0 + Description: Divide By Zero + Info: https://snyk.io/vuln/SNYK-DEBIAN10-SQLITE3-466337 + Introduced through: gnupg2/gnupg@2.2.12-1+deb10u1, subversion@1.10.4-1+deb10u1, mercurial@4.8.2-1+deb10u1 + From: gnupg2/gnupg@2.2.12-1+deb10u1 > gnupg2/gpg@2.2.12-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3 + From: subversion@1.10.4-1+deb10u1 > subversion/libsvn1@1.10.4-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3 + From: mercurial@4.8.2-1+deb10u1 > python-defaults/python@2.7.16-1 > python2.7@2.7.16-2+deb10u1 > python2.7/libpython2.7-stdlib@2.7.16-2+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3 + +✗ Medium severity vulnerability found in sqlite3/libsqlite3-0 + Description: Uncontrolled Recursion +... +✗ High severity vulnerability found in binutils/binutils-common + Description: Missing Release of Resource after Effective Lifetime + Info: https://snyk.io/vuln/SNYK-DEBIAN10-BINUTILS-403318 + Introduced through: gcc-defaults/g++@4:8.3.0-1 + From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-common@2.31.1-16 + From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/libbinutils@2.31.1-16 > binutils/binutils-common@2.31.1-16 + From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-x86-64-linux-gnu@2.31.1-16 > binutils/binutils-common@2.31.1-16 + and 4 more... + +Organization: docker-desktop-test +Package manager: deb +Project name: docker-image|docker-scan +Docker image: docker-scan:e2e +Platform: linux/amd64 +Licenses: enabled + +Tested 200 dependencies for known issues, found 37 issues. +``` + +## Provider authentication + +If you have an existing Snyk account, you can directly use your Snyk [API token](https://app.snyk.io/account){: target="_blank" rel="noopener" class="_"}: + +```console +$ docker scan --login --token SNYK_AUTH_TOKEN + +Your account has been authenticated. Snyk is now ready to be used. +``` + +If you use the `--login` flag without any token, you will be redirected to the Snyk website to login. + +## Prerequisites + +To run vulnerability scanning on your Docker images, you must meet the following requirements: + +1. Download and install the latest version of Docker Desktop. + + - [Download for Mac with Intel chip](https://desktop.docker.com/mac/main/amd64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-amd64) + - [Download for Mac with Apple chip](https://desktop.docker.com/mac/main/arm64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-arm64) + - [Download for Windows](https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe) + +2. Sign into [Docker Hub](https://hub.docker.com){: target="_blank" rel="noopener" class="_"}. + +3. From the Docker Desktop menu, select **Sign in/ Create Docker ID**. Alternatively, open a terminal and run the command `docker login`. + +4. 
(Optional) You can create a [Snyk account](https://dockr.ly/3ePqVcp){: target="_blank" rel="noopener" class="_"} for scans, or use the additional monthly free scans provided by Snyk with your Docker Hub account. + +Check your installation by running `docker scan --version`, it should print the current version of docker scan and the Snyk engine version. For example: + +```console +$ docker scan --version +Version: v0.5.0 +Git commit: 5a09266 +Provider: Snyk (1.432.0) +``` + +> **Note:** +> +> Docker Scan uses the Snyk binary installed in your environment by default. If +this is not available, it uses the Snyk binary embedded in Docker Desktop. +> The minimum version required for Snyk is `1.385.0`. + +## Supported options + +The high-level `docker scan` command scans local images using the image name or the image ID. It supports the following options: + +| Option | Description | +|:------------------------------------------------------------------ :------------------------------------------------| +| `--accept-license` | Accept the license agreement of the third-party scanning provider | +| `--dependency-tree` | Display the dependency tree of the image along with scan results | +| `--exclude-base` | Exclude the base image during scanning. This option requires the --file option to be set | +| `-f`, `--file string` | Specify the location of the Dockerfile associated with the image. This option displays a detailed scan result | +| `--json` | Display the result of the scan in JSON format| +| `--login` | Log into Snyk using an optional token (using the flag --token), or by using a web-based token | +| `--reject-license` | Reject the license agreement of the third-party scanning provider | +| `--severity string` | Only report vulnerabilities of provided level or higher (low, medium, high) | +| `--token string` | Use the authentication token to log into the third-party scanning provider | +| `--version` | Display the Docker Scan plugin version | + +## Known issues + +**WSL 2** + +- The Vulnerability scanning feature doesn’t work with Alpine distributions. +- If you are using Debian and OpenSUSE distributions, the login process only works with the `--token` flag, you won’t be redirected to the Snyk website for authentication. + +## Feedback + +Your feedback is very important to us. Let us know your feedback by creating an issue in the [scan-cli-plugin](https://github.com/docker/scan-cli-plugin/issues/new){: target="_blank" rel="noopener" class="_"} GitHub repository. 
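+
+As an optional illustration, the documented flags can be combined in a single command to produce a machine-readable report for a CI job. This is a sketch under the assumption that an image named `myapp:latest` exists locally and that writing the report to `scan-report.json` suits your pipeline:
+
+```console
+$ docker scan --accept-license --severity=high --json --group-issues myapp:latest > scan-report.json
+```
+
+Because the output is JSON, the report can be archived as a build artifact or post-processed by other tooling. Exit-code behaviour when vulnerabilities are found may differ between versions, so verify it in your own environment before gating builds on it.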
diff --git a/images/logo-docker-main2.svg b/images/logo-docker-main2.svg
index 34ddcf45e4..d91d9f42cd 100644
--- a/images/logo-docker-main2.svg
+++ b/images/logo-docker-main2.svg
@@ -1,74 +1,74 @@
[SVG markup not reproduced: the hunk only normalizes CRLF line endings]
diff --git a/images/resources_file_icon.svg b/images/resources_file_icon.svg
index 36d2ae0e02..f918846d4f 100644
--- a/images/resources_file_icon.svg
+++ b/images/resources_file_icon.svg
@@ -1,13 +1,13 @@
[SVG markup not reproduced: the hunk only normalizes CRLF line endings]
diff --git a/images/resources_lap_icon.svg b/images/resources_lap_icon.svg
index f879b6fa3b..fdc2712296 100644
--- a/images/resources_lap_icon.svg
+++ b/images/resources_lap_icon.svg
@@ -1,16 +1,16 @@
[SVG markup not reproduced: the hunk only normalizes CRLF line endings]