---
title: Deploying Docker containers on Azure
description: Deploying Docker containers on Azure
keywords: Docker, Azure, Integration, ACI, context, Compose, cli, deploy, containers, cloud
redirect_from:
- /engine/context/aci-integration/
toc_min: 1
toc_max: 2
---

## Overview

The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure, allowing developers to quickly run applications using the Docker CLI or the VS Code extension, and to switch seamlessly from local development to cloud deployment.

In addition, the integration between Docker and Microsoft developer technologies allows developers to use the Docker CLI to:

- Easily log into Azure
- Set up an ACI context in one Docker command, allowing you to switch from a local context to a cloud context and run applications quickly and easily
- Simplify single-container and multi-container application development using the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service

Also see the [full list of container features supported by ACI](aci-container-features.md) and the [full list of Compose features supported by ACI](aci-compose-features.md).

## Prerequisites

To deploy Docker containers on Azure, you must meet the following requirements:

1. Download and install the latest version of Docker Desktop.

   - [Download for Mac](../desktop/mac/install.md)
   - [Download for Windows](../desktop/windows/install.md)

   Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux).

2. Ensure you have an Azure subscription. You can get started with an [Azure free account](https://aka.ms/AA8r2pj){: target="_blank" rel="noopener" class="_"}.

## Run Docker containers on ACI

Docker not only runs containers locally, but also enables developers to seamlessly deploy Docker containers on ACI using `docker run`, or deploy multi-container applications defined in a Compose file using the `docker compose up` command.

The following sections contain instructions on how to deploy your Docker containers on ACI.
Also see the [full list of container features supported by ACI](aci-container-features.md).

### Log into Azure

Run the following command to log into Azure:

```console
$ docker login azure
```

This opens your web browser and prompts you to enter your Azure login credentials.
If the Docker CLI cannot open a browser, it falls back to the [Azure device code flow](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code){:target="_blank" rel="noopener" class="_"} and lets you connect manually.
Note that the [Azure command line](https://docs.microsoft.com/en-us/cli/azure/){:target="_blank" rel="noopener" class="_"} login is separate from the Docker CLI Azure login.

Alternatively, you can log in without interaction (typically in
scripts or continuous integration scenarios), using an Azure Service
Principal, with `docker login azure --client-id xx --client-secret yy --tenant-id zz`.

> **Note**
>
> Logging in through an Azure Service Principal obtains an access token valid
> for a short period (typically 1 hour), but it does not allow you to automatically
> and transparently refresh this token. You must manually log in again
> when the access token has expired.

You can also use the `--tenant-id` option alone to specify a tenant, if
you have several tenants available in Azure.

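In a CI script, the non-interactive login might look like the following sketch, where the environment variable names are placeholders for however your CI system stores the service principal credentials:

```console
$ docker login azure \
    --client-id "$AZURE_CLIENT_ID" \
    --client-secret "$AZURE_CLIENT_SECRET" \
    --tenant-id "$AZURE_TENANT_ID"
```
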
### Create an ACI context

After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI.
Creating an ACI context requires an Azure subscription, a [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal), and a region.
For example, let us create a new context called `myacicontext`:

```console
$ docker context create aci myacicontext
```

This command automatically uses your Azure login credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: `--subscription-id`,
`--resource-group`, and `--location`.

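For example, to create the context non-interactively using those flags (the subscription ID, resource group, and location below are placeholders for your own values):

```console
$ docker context create aci myacicontext \
    --subscription-id <SUBSCRIPTION_ID> \
    --resource-group myResourceGroup \
    --location eastus
```
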
If you don't have any existing resource groups in your Azure account, the `docker context create aci myacicontext` command creates one for you. You don't have to specify any additional options to do this.

After you have created an ACI context, you can list your Docker contexts by running the `docker context ls` command:

```console
$ docker context ls
NAME                TYPE    DESCRIPTION                               DOCKER ENDPOINT                KUBERNETES ENDPOINT   ORCHESTRATOR
myacicontext        aci     myResourceGroupGTA@eastus
default *           moby    Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                          swarm
```

### Run a container

Now that you've logged in and created an ACI context, you can start using Docker commands to deploy containers on ACI.

There are two ways to use your new ACI context. You can use the `--context` flag with the Docker command to specify that you would like to run the command using your newly created ACI context.

```console
$ docker --context myacicontext run -p 80:80 nginx
```

Or, you can change context using `docker context use` to select the ACI context to be your focus for running Docker commands. For example, we can use the `docker context use` command to deploy an Nginx container:

```console
$ docker context use myacicontext
$ docker run -p 80:80 nginx
```

After you've switched to the `myacicontext` context, you can use `docker ps` to list your containers running on ACI.

In the case of the demonstration Nginx container started above, the "PORTS" column of the `docker ps` output displays the IP address and port on which the container is running. For example, it may show `52.154.202.35:80->80/tcp`, and you can view the Nginx welcome page by browsing to `http://52.154.202.35`.

To view logs from your container, run:

```console
$ docker logs <CONTAINER_ID>
```

To execute a command in a running container, run:

```console
$ docker exec -t <CONTAINER_ID> COMMAND
```

To stop and remove a container from ACI, run:

```console
$ docker stop <CONTAINER_ID>
$ docker rm <CONTAINER_ID>
```

You can remove containers using `docker rm`. To remove a running container, you must use the `--force` flag, or stop the container using `docker stop` before removing it.

> **Note**
>
> The semantics of restarting a container on ACI are different from those when using a local Docker context for local development. On ACI, the container will be reset to its initial state and started on a new node. This includes the container's filesystem, so all state that is not stored in a volume will be lost on restart.

## Running Compose applications

You can also deploy and manage multi-container applications defined in Compose files to ACI using the `docker compose` command.
All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file.
Name resolution between containers is achieved by writing service names in the `/etc/hosts` file that is shared automatically by all containers in the container group.

Also see the [full list of Compose features supported by ACI](aci-compose-features.md).

1. Ensure you are using your ACI context. You can do this either by specifying the `--context myacicontext` flag or by setting the default context using the command `docker context use myacicontext`.

2. Run `docker compose up` and `docker compose down` to start and then stop a full Compose application.

   By default, `docker compose up` uses the `docker-compose.yaml` file in the current folder. You can specify the working directory using the `--workdir` flag, or specify the Compose file directly using `docker compose --file mycomposefile.yaml up`.

   You can also specify a name for the Compose application using the `--project-name` flag during deployment. If no name is specified, a name will be derived from the working directory.

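For example, to deploy a specific Compose file under an explicit project name (both `mycomposefile.yaml` and `myapp` are placeholder names):

```console
$ docker compose --file mycomposefile.yaml --project-name myapp up
```
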
Containers started as part of Compose applications will be displayed along with single containers when using `docker ps`. Their container ID will be of the format: `<COMPOSE-PROJECT>_<SERVICE>`.
These containers cannot be stopped, started, or removed independently, since they are all part of the same ACI container group.
You can view each container's logs with `docker logs`. You can list deployed Compose applications with `docker compose ls`. This lists only Compose applications, not single containers started with `docker run`. You can remove a Compose application with `docker compose down`.

> **Note**
>
> The current Docker Azure integration does not allow fetching a combined log stream from all the containers that make up the Compose application.

## Updating applications

From a deployed Compose application, you can update the application by re-deploying it with the same project name: `docker compose --project-name PROJECT up`.

Updating an application means the ACI node will be reused, and the application will keep the same IP address that was previously allocated to expose ports, if any. ACI has some limitations on what can be updated in an existing application (for example, you cannot change CPU or memory reservations). In these cases, you need to deploy a new application from scratch.

Updating is the default behavior if you invoke `docker compose up` on an already deployed Compose file, as the Compose project name is derived from the directory where the Compose file is located by default. You need to explicitly execute `docker compose down` before running `docker compose up` again in order to totally reset a Compose application.

## Releasing resources

Single containers and Compose applications can be removed from ACI with
the `docker prune` command. The `docker prune` command removes deployments
that are not currently running. To remove running deployments, you can specify
`--force`. The `--dry-run` option lists deployments that are planned for
removal, but it doesn't actually remove them.

```console
$ ./bin/docker --context acicontext prune --dry-run --force
Resources that would be deleted:
my-application
Total CPUs reclaimed: 2.01, total memory reclaimed: 2.30 GB
```

## Exposing ports

Single containers and Compose applications can optionally expose ports.
For single containers, this is done using the `--publish` (`-p`) flag of the `docker run` command: `docker run -p 80:80 nginx`.

For Compose applications, you must specify exposed ports in the Compose file service definition:

```yaml
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
```

> **Note**
>
> ACI does not allow port mapping (that is, changing the port number while exposing a port). Therefore, the source and target ports must be the same when deploying to ACI.
>
> All containers in the same Compose application are deployed in the same ACI container group. Different containers in the same Compose application cannot expose the same port when deployed to ACI.

By default, when exposing ports for your application, a random public IP address is associated with the container group supporting the deployed application (single container or Compose application).
This IP address can be obtained when listing containers with `docker ps` or using `docker inspect`.

### DNS label name

In addition to exposing ports on a random IP address, you can specify a DNS label name to expose your application on an FQDN of the form `<NAME>.region.azurecontainer.io`.

You can set this name with the `--domainname` flag when performing a `docker run`, or by using the `domainname` field in the Compose file when performing a `docker compose up`:

```yaml
services:
  nginx:
    image: nginx
    domainname: "myapp"
    ports:
      - "80:80"
```

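For a single container, the equivalent `docker run` invocation using the `--domainname` flag mentioned above would look like this (`myapp` is a placeholder label):

```console
$ docker run -p 80:80 --domainname myapp nginx
```
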
> **Note**
>
> The domain of a Compose application can only be set once. If you specify the
> `domainname` for several services, the value must be identical.
>
> The FQDN `<DOMAINNAME>.region.azurecontainer.io` must be available.

## Using Azure file share as volumes in ACI containers

You can deploy containers or Compose applications that use persistent data
stored in volumes. Azure File Share can be used to support volumes for ACI
containers.

Using an existing Azure File Share with storage account name `mystorageaccount`
and file share name `myfileshare`, you can specify a volume in your deployment `run`
command as follows:

```console
$ docker run -v mystorageaccount/myfileshare:/target/path myimage
```

The runtime container will see the file share content in `/target/path`.

In a Compose application, the volume specification must use the following syntax
in the Compose file:

```yaml
myservice:
  image: nginx
  volumes:
    - mydata:/mount/testvolumes

volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
```

> **Note**
>
> The volume short syntax in Compose files cannot be used, as it is aimed at volume definitions for local bind mounts. Using the volume driver and driver option syntax in Compose files makes the volume definition much clearer.

In single or multi-container deployments, the Docker CLI uses your Azure login to fetch the key to the storage account, and provides this key with the container deployment information, so that the container can access the volume.
Volumes can be used from any file share in any storage account you have access to with your Azure login. You can specify `rw` (read/write) or `ro` (read only) when mounting the volume (`rw` is the default).

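For example, to mount the file share from the earlier example read-only, assuming the usual `:ro` suffix on the volume specification:

```console
$ docker run -v mystorageaccount/myfileshare:/target/path:ro myimage
```
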
### Managing Azure volumes

To create a volume that you can use in containers or Compose applications when
using your ACI Docker context, you can use the `docker volume create` command,
and specify an Azure storage account name and the file share name:

```console
$ docker --context aci volume create test-volume --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount  Created                         26.2s
 ⠿ test-volume       Created                          0.9s
mystorageaccount/test-volume
```

By default, if the storage account does not already exist, this command
creates a new storage account using Standard LRS as the default SKU, and the
resource group and location associated with your Docker ACI context.

If you specify an existing storage account, the command creates a new
file share in the existing account:

```console
$ docker --context aci volume create test-volume2 --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount  Use existing                     0.7s
 ⠿ test-volume2      Created                          0.7s
mystorageaccount/test-volume2
```

Alternatively, you can create an Azure storage account or a file share using the Azure
portal, or the `az` [command line](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-cli).

You can also list volumes that are available for use in containers or Compose applications:

```console
$ docker --context aci volume ls
ID                                 DESCRIPTION
mystorageaccount/test-volume       Fileshare test-volume in mystorageaccount storage account
mystorageaccount/test-volume2      Fileshare test-volume2 in mystorageaccount storage account
```

To delete a volume and the corresponding Azure file share, use the `volume rm` command:

```console
$ docker --context aci volume rm mystorageaccount/test-volume
mystorageaccount/test-volume
```

This permanently deletes the Azure file share and all its data.

When deleting a volume in Azure, the command checks whether the specified file share
is the only file share available in the storage account. If the storage account was
created with the `docker volume create` command, `docker volume rm` also
deletes the storage account when it does not have any remaining file shares.
If you are using a storage account created without the `docker volume create` command
(through the Azure portal or with the `az` command line, for example), `docker volume rm`
does not delete the storage account, even when it has zero remaining file shares.

## Environment variables

When using `docker run`, you can pass environment variables to ACI containers using the `--env` flag.
For Compose applications, you can specify the environment variables in the Compose file with the `environment` or `env_file` service field, or with the `--environment` command line flag.

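For example, with a single container (`LOG_LEVEL` and `myimage` are placeholder names):

```console
$ docker run --env LOG_LEVEL=debug myimage
```

The equivalent Compose service field:

```yaml
services:
  web:
    image: myimage
    environment:
      - LOG_LEVEL=debug
```
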
## Health checks

You can specify container health checks using either the `--healthcheck-` prefixed flags with `docker run`, or in a Compose file with the `healthcheck` section of the service.

Health checks are converted to ACI `LivenessProbe`s. ACI runs the health check command periodically, and if it fails, the container will be terminated.

Health checks must be used in addition to restart policies to ensure the container is then restarted on termination. The default restart policy for `docker run` is `no`, which will not restart the container. The default restart policy for Compose is `any`, which will always try restarting the service containers.

Example using `docker run`:

```console
$ docker --context acicontext run -p 80:80 --restart always --health-cmd "curl http://localhost:80" --health-interval 3s nginx
```

Example using Compose files:

```yaml
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 10s
```

## Private Docker Hub images and using the Azure Container Registry

You can deploy private images to ACI that are hosted by any container registry. You need to log into the relevant registry using `docker login` before running `docker run` or `docker compose up`. The Docker CLI will fetch your registry login for the deployed images and send the credentials along with the image deployment information to ACI.
In the case of the Azure Container Registry, the command line will try to automatically log you into ACR from your Azure login. You don't need to manually log in to the ACR registry first, if your Azure login has access to the ACR.

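For a private image on a non-ACR registry, the flow described above would look like this sketch, where the registry and image names are placeholders:

```console
$ docker login registry.example.com
$ docker --context myacicontext run registry.example.com/myprivateimage
```
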
## Using ACI resource groups as namespaces

You can create several Docker contexts associated with ACI. Each context must be associated with a unique Azure resource group. This allows you to use Docker contexts as namespaces. You can switch between namespaces using `docker context use <CONTEXT>`.

When you run the `docker ps` command, it only lists containers in your current Docker context. There won't be any contention in container names or Compose application names between two Docker contexts.

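For example, you might keep separate contexts for staging and production by pointing each at its own resource group (the context and resource group names are placeholders):

```console
$ docker context create aci aci-staging --resource-group rg-staging
$ docker context create aci aci-production --resource-group rg-production
$ docker context use aci-staging
```
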
## Install the Docker Compose CLI on Linux

The Docker Compose CLI adds support for running and managing containers on Azure Container Instances (ACI).

### Install Prerequisites

- [Docker 19.03 or later](../get-docker.md)

### Install script

You can install the new CLI using the install script:

```console
$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
```

### Manual install

You can download the Docker ACI Integration CLI from the
[latest release](https://github.com/docker/compose-cli/releases/latest){: target="_blank" rel="noopener" class="_"} page.

You will then need to make it executable:

```console
$ chmod +x docker-aci
```

To enable using the local Docker Engine and to use existing Docker contexts, you
must have the existing Docker CLI as `com.docker.cli` somewhere in your
`PATH`. You can do this by creating a symbolic link from the existing Docker
CLI:

```console
$ ln -s /path/to/existing/docker /directory/in/PATH/com.docker.cli
```

> **Note**
>
> The `PATH` environment variable is a colon-separated list of
> directories with priority from left to right. You can view it using
> `echo $PATH`. You can find the path to the existing Docker CLI using
> `which docker`. You may need root permissions to make this link.

On a fresh install of Ubuntu 20.04 with Docker Engine
[already installed](../engine/install/ubuntu.md):

```console
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ which docker
/usr/bin/docker
$ sudo ln -s /usr/bin/docker /usr/local/bin/com.docker.cli
```

You can verify that this is working by checking that the new CLI works with the
default context:

```console
$ ./docker-aci --context default ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
$ echo $?
0
```

To make this CLI with ACI integration your default Docker CLI, you must move it
to a directory in your `PATH` with higher priority than the existing Docker CLI.

Again, on a fresh install of Ubuntu 20.04:

```console
$ which docker
/usr/bin/docker
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ sudo mv docker-aci /usr/local/bin/docker
$ which docker
/usr/local/bin/docker
$ docker version
...
 Azure integration  0.1.4
...
```

### Supported commands

After you have installed the Docker ACI Integration CLI, run `--help` to see the current list of commands.

### Uninstall

To remove the Docker Azure Integration CLI, you need to remove the binary you downloaded and `com.docker.cli` from your `PATH`. If you installed using the script, this can be done as follows:

```console
$ sudo rm /usr/local/bin/docker /usr/local/bin/com.docker.cli
```

## Feedback

Thank you for trying out the Docker Azure Integration. Your feedback is very important to us. Let us know your feedback by creating an issue in the [compose-cli](https://github.com/docker/compose-cli){: target="_blank" rel="noopener" class="_"} GitHub repository.
|
||||
---
|
||||
title: Deploying Docker containers on Azure
|
||||
description: Deploying Docker containers on Azure
|
||||
keywords: Docker, Azure, Integration, ACI, context, Compose, cli, deploy, containers, cloud
|
||||
redirect_from:
|
||||
- /engine/context/aci-integration/
|
||||
toc_min: 1
|
||||
toc_max: 2
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure allowing developers to quickly run applications using the Docker CLI or VS Code extension, to switch seamlessly from local development to cloud deployment.
|
||||
|
||||
In addition, the integration between Docker and Microsoft developer technologies allow developers to use the Docker CLI to:
|
||||
|
||||
- Easily log into Azure
|
||||
- Set up an ACI context in one Docker command allowing you to switch from a local context to a cloud context and run applications quickly and easily
|
||||
- Simplify single container and multi-container application development using the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service
|
||||
|
||||
Also see the [full list of container features supported by ACI](aci-container-features.md) and [full list of compose features supported by ACI](aci-compose-features.md).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
To deploy Docker containers on Azure, you must meet the following requirements:
|
||||
|
||||
1. Download and install the latest version of Docker Desktop.
|
||||
|
||||
- [Download for Mac](../desktop/mac/install.md)
|
||||
- [Download for Windows](../desktop/windows/install.md)
|
||||
|
||||
Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux).
|
||||
|
||||
2. Ensure you have an Azure subscription. You can get started with an [Azure free account](https://aka.ms/AA8r2pj){: target="_blank" rel="noopener" class="_"}.
|
||||
|
||||
## Run Docker containers on ACI
|
||||
|
||||
Docker not only runs containers locally, but also enables developers to seamlessly deploy Docker containers on ACI using `docker run` or deploy multi-container applications defined in a Compose file using the `docker compose up` command.
|
||||
|
||||
The following sections contain instructions on how to deploy your Docker containers on ACI.
|
||||
Also see the [full list of container features supported by ACI](aci-container-features.md).
|
||||
|
||||
### Log into Azure
|
||||
|
||||
Run the following commands to log into Azure:
|
||||
|
||||
```console
|
||||
$ docker login azure
|
||||
```
|
||||
|
||||
This opens your web browser and prompts you to enter your Azure login credentials.
|
||||
If the Docker CLI cannot open a browser, it will fall back to the [Azure device code flow](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code){:target="_blank" rel="noopener" class="_"} and lets you connect manually.
|
||||
Note that the [Azure command line](https://docs.microsoft.com/en-us/cli/azure/){:target="_blank" rel="noopener" class="_"} login is separated from the Docker CLI Azure login.
|
||||
|
||||
Alternatively, you can log in without interaction (typically in
|
||||
scripts or continuous integration scenarios), using an Azure Service
|
||||
Principal, with `docker login azure --client-id xx --client-secret yy --tenant-id zz`
|
||||
|
||||
>**Note**
|
||||
>
|
||||
> Logging in through the Azure Service Provider obtains an access token valid
|
||||
for a short period (typically 1h), but it does not allow you to automatically
|
||||
and transparently refresh this token. You must manually re-login
|
||||
when the access token has expired when logging in with a Service Provider.
|
||||
|
||||
You can also use the `--tenant-id` option alone to specify a tenant, if
|
||||
you have several ones available in Azure.
|
||||
|
||||
### Create an ACI context
|
||||
|
||||
After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI.
|
||||
Creating an ACI context requires an Azure subscription, a [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal), and a region.
|
||||
For example, let us create a new context called `myacicontext`:
|
||||
|
||||
```console
|
||||
$ docker context create aci myacicontext
|
||||
```
|
||||
|
||||
This command automatically uses your Azure login credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: `--subscription-id`,
|
||||
`--resource-group`, and `--location`.
|
||||
|
||||
If you don't have any existing resource groups in your Azure account, the `docker context create aci myacicontext` command creates one for you. You don’t have to specify any additional options to do this.
|
||||
|
||||
After you have created an ACI context, you can list your Docker contexts by running the `docker context ls` command:
|
||||
|
||||
```console
|
||||
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
|
||||
myacicontext aci myResourceGroupGTA@eastus
|
||||
default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
|
||||
```
|
||||
|
||||
### Run a container
|
||||
|
||||
Now that you've logged in and created an ACI context, you can start using Docker commands to deploy containers on ACI.
|
||||
|
||||
There are two ways to use your new ACI context. You can use the `--context` flag with the Docker command to specify that you would like to run the command using your newly created ACI context.
|
||||
|
||||
```console
|
||||
$ docker --context myacicontext run -p 80:80 nginx
|
||||
```

Or, you can change context using `docker context use` to select the ACI context to be your focus for running Docker commands. For example, we can use the `docker context use` command to deploy an Nginx container:

```console
$ docker context use myacicontext
$ docker run -p 80:80 nginx
```

After you've switched to the `myacicontext` context, you can use `docker ps` to list your containers running on ACI.

For the demonstration Nginx container started above, the `docker ps` output shows, in the PORTS column, the IP address and port on which the container is running. For example, it may show `52.154.202.35:80->80/tcp`, and you can view the Nginx welcome page by browsing to `http://52.154.202.35`.

To view logs from your container, run:

```console
$ docker logs <CONTAINER_ID>
```

To execute a command in a running container, run:

```console
$ docker exec -t <CONTAINER_ID> COMMAND
```

To stop and remove a container from ACI, run:

```console
$ docker stop <CONTAINER_ID>
$ docker rm <CONTAINER_ID>
```

You can remove containers using `docker rm`. To remove a running container, you must use the `--force` flag, or stop the container using `docker stop` before removing it.
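
For example, to remove a running container in a single step without stopping it first:

```console
$ docker rm --force <CONTAINER_ID>
```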

> **Note**
>
> The semantics of restarting a container on ACI are different from those when using a local Docker context for local development. On ACI, the container is reset to its initial state and started on a new node. This includes the container's filesystem, so all state that is not stored in a volume is lost on restart.

## Running Compose applications

You can also deploy and manage multi-container applications defined in Compose files to ACI using the `docker compose` command.
All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file.
Name resolution between containers is achieved by writing service names in the `/etc/hosts` file that is shared automatically by all containers in the container group.

Also see the [full list of compose features supported by ACI](aci-compose-features.md).

1. Ensure you are using your ACI context. You can do this either by specifying the `--context myacicontext` flag or by setting the default context using the command `docker context use myacicontext`.

2. Run `docker compose up` and `docker compose down` to start and then stop a full Compose application.

By default, `docker compose up` uses the `docker-compose.yaml` file in the current folder. You can specify the working directory using the `--workdir` flag or specify the Compose file directly using `docker compose --file mycomposefile.yaml up`.

You can also specify a name for the Compose application using the `--project-name` flag during deployment. If no name is specified, a name will be derived from the working directory.
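
For example, a sketch of deploying a Compose file under an explicit project name (`myfirstapp` is a placeholder):

```console
$ docker compose --project-name myfirstapp up
```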

Containers started as part of Compose applications will be displayed along with single containers when using `docker ps`. Their container ID will be of the format: `<COMPOSE-PROJECT>_<SERVICE>`.
These containers cannot be stopped, started, or removed independently since they are all part of the same ACI container group.
You can view each container's logs with `docker logs`. You can list deployed Compose applications with `docker compose ls`. This will list only Compose applications, not single containers started with `docker run`. You can remove a Compose application with `docker compose down`.

> **Note**
>
> The current Docker Azure integration does not allow fetching a combined log stream from all the containers that make up the Compose application.

## Updating applications

You can update a deployed Compose application by re-deploying it with the same project name: `docker compose --project-name PROJECT up`.

Updating an application means the ACI node will be reused, and the application will keep the same IP address that was previously allocated to expose ports, if any. ACI has some limitations on what can be updated in an existing application (for example, you cannot change the CPU/memory reservation). In these cases, you need to deploy a new application from scratch.

Updating is the default behavior if you invoke `docker compose up` on an already deployed Compose file, as the Compose project name is derived by default from the directory where the Compose file is located. You need to explicitly execute `docker compose down` before running `docker compose up` again in order to totally reset a Compose application.

## Releasing resources

Single containers and Compose applications can be removed from ACI with the `docker prune` command. The `docker prune` command removes deployments that are not currently running. To remove running deployments, you can specify `--force`. The `--dry-run` option lists deployments that are planned for removal, but it doesn't actually remove them.

```console
$ ./bin/docker --context acicontext prune --dry-run --force
Resources that would be deleted:
my-application
Total CPUs reclaimed: 2.01, total memory reclaimed: 2.30 GB
```

## Exposing ports

Single containers and Compose applications can optionally expose ports.
For single containers, this is done using the `--publish` (`-p`) flag of the `docker run` command: `docker run -p 80:80 nginx`.

For Compose applications, you must specify exposed ports in the Compose file service definition:

```yaml
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
```

> **Note**
>
> ACI does not allow port mapping (that is, changing the port number while exposing a port). Therefore, the source and target ports must be the same when deploying to ACI.
>
> All containers in the same Compose application are deployed in the same ACI container group. Different containers in the same Compose application cannot expose the same port when deployed to ACI.

By default, when exposing ports for your application, a random public IP address is associated with the container group supporting the deployed application (single container or Compose application).
This IP address can be obtained when listing containers with `docker ps` or using `docker inspect`.

### DNS label name

In addition to exposing ports on a random IP address, you can specify a DNS label name to expose your application on an FQDN of the form: `<NAME>.region.azurecontainer.io`.

You can set this name with the `--domainname` flag when performing a `docker run`, or by using the `domainname` field in the Compose file when performing a `docker compose up`:

```yaml
services:
  nginx:
    image: nginx
    domainname: "myapp"
    ports:
      - "80:80"
```
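
For a single container, a sketch of the equivalent using the `--domainname` flag mentioned above (`myapp` is a placeholder label):

```console
$ docker run -p 80:80 --domainname myapp nginx
```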

> **Note**
>
> The domain of a Compose application can only be set once. If you specify the
> `domainname` for several services, the value must be identical.
>
> The FQDN `<DOMAINNAME>.region.azurecontainer.io` must be available.

## Using Azure file share as volumes in ACI containers

You can deploy containers or Compose applications that use persistent data stored in volumes. Azure File Share can be used to support volumes for ACI containers.

Using an existing Azure File Share with storage account name `mystorageaccount` and file share name `myfileshare`, you can specify a volume in your deployment `run` command as follows:

```console
$ docker run -v mystorageaccount/myfileshare:/target/path myimage
```

The runtime container will see the file share content in `/target/path`.

In a Compose application, the volume specification must use the following syntax in the Compose file:

```yaml
myservice:
  image: nginx
  volumes:
    - mydata:/mount/testvolumes

volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
```

> **Note**
>
> The volume short syntax in Compose files cannot be used as it is aimed at volume definitions for local bind mounts. Using the volume driver and driver option syntax in Compose files makes the volume definition much clearer.

In single or multi-container deployments, the Docker CLI uses your Azure login to fetch the key to the storage account, and provides this key with the container deployment information, so that the container can access the volume.
Volumes can be used from any file share in any storage account you have access to with your Azure login. You can specify `rw` (read/write) or `ro` (read only) when mounting the volume (`rw` is the default).
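
For example, a sketch of mounting the same file share read-only; appending `:ro` to the volume mapping is the assumed way to pass the access mode here:

```console
$ docker run -v mystorageaccount/myfileshare:/target/path:ro myimage
```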

### Managing Azure volumes

To create a volume that you can use in containers or Compose applications when using your ACI Docker context, you can use the `docker volume create` command, and specify an Azure storage account name and the file share name:

```console
$ docker --context aci volume create test-volume --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount  Created                         26.2s
 ⠿ test-volume       Created                          0.9s
mystorageaccount/test-volume
```

By default, if the storage account does not already exist, this command creates a new storage account using the Standard LRS as a default SKU, and the resource group and location associated with your Docker ACI context.

If you specify an existing storage account, the command creates a new file share in the existing account:

```console
$ docker --context aci volume create test-volume2 --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount  Use existing                     0.7s
 ⠿ test-volume2      Created                          0.7s
mystorageaccount/test-volume2
```

Alternatively, you can create an Azure storage account or a file share using the Azure portal, or the `az` [command line](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-cli).

You can also list volumes that are available for use in containers or Compose applications:

```console
$ docker --context aci volume ls
ID                              DESCRIPTION
mystorageaccount/test-volume    Fileshare test-volume in mystorageaccount storage account
mystorageaccount/test-volume2   Fileshare test-volume2 in mystorageaccount storage account
```

To delete a volume and the corresponding Azure file share, use the `volume rm` command:

```console
$ docker --context aci volume rm mystorageaccount/test-volume
mystorageaccount/test-volume
```

This permanently deletes the Azure file share and all its data.

When deleting a volume in Azure, the command checks whether the specified file share is the only file share available in the storage account. If the storage account was created with the `docker volume create` command, `docker volume rm` also deletes the storage account when it does not have any remaining file shares. If you are using a storage account created without the `docker volume create` command (through the Azure portal or with the `az` command line, for example), `docker volume rm` does not delete the storage account, even when it has zero remaining file shares.

## Environment variables

When using `docker run`, you can pass environment variables to ACI containers using the `--env` flag.
For Compose applications, you can specify environment variables in the Compose file with the `environment` or `env_file` service field, or with the `--environment` command line flag.
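
For example, a sketch of passing a single environment variable with the `--env` flag (`MY_VAR` is a placeholder name):

```console
$ docker --context myacicontext run --env MY_VAR=value nginx
```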

## Health checks

You can specify container health checks using either the `--healthcheck-` prefixed flags with `docker run`, or in a Compose file with the `healthcheck` section of the service.

Health checks are converted to ACI `LivenessProbe`s. ACI runs the health check command periodically, and if it fails, the container will be terminated.

Health checks must be used in addition to restart policies to ensure the container is then restarted on termination. The default restart policy for `docker run` is `no`, which will not restart the container. The default restart policy for Compose is `any`, which will always try restarting the service containers.

Example using `docker run`:

```console
$ docker --context acicontext run -p 80:80 --restart always --health-cmd "curl http://localhost:80" --health-interval 3s nginx
```

Example using Compose files:

```yaml
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 10s
```

## Private Docker Hub images and using the Azure Container Registry

You can deploy private images to ACI that are hosted by any container registry. You need to log into the relevant registry using `docker login` before running `docker run` or `docker compose up`. The Docker CLI will fetch your registry login for the deployed images and send the credentials along with the image deployment information to ACI.
In the case of the Azure Container Registry, the command line will try to automatically log you into ACR from your Azure login. You don't need to manually log into the ACR registry first, if your Azure login has access to the ACR.

## Using ACI resource groups as namespaces

You can create several Docker contexts associated with ACI. Each context must be associated with a unique Azure resource group. This allows you to use Docker contexts as namespaces. You can switch between namespaces using `docker context use <CONTEXT>`.

When you run the `docker ps` command, it only lists containers in your current Docker context. There won't be any contention in container names or Compose application names between two Docker contexts.

## Install the Docker Compose CLI on Linux

The Docker Compose CLI adds support for running and managing containers on Azure Container Instances (ACI).

### Install Prerequisites

- [Docker 19.03 or later](../get-docker.md)

### Install script

You can install the new CLI using the install script:

```console
$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
```

### Manual install

You can download the Docker ACI Integration CLI from the
[latest release](https://github.com/docker/compose-cli/releases/latest){: target="_blank" rel="noopener" class="_"} page.

You will then need to make it executable:

```console
$ chmod +x docker-aci
```

To enable using the local Docker Engine and to use existing Docker contexts, you must have the existing Docker CLI as `com.docker.cli` somewhere in your `PATH`. You can do this by creating a symbolic link from the existing Docker CLI:

```console
$ ln -s /path/to/existing/docker /directory/in/PATH/com.docker.cli
```

> **Note**
>
> The `PATH` environment variable is a colon-separated list of
> directories with priority from left to right. You can view it using
> `echo $PATH`. You can find the path to the existing Docker CLI using
> `which docker`. You may need root permissions to make this link.

On a fresh install of Ubuntu 20.04 with Docker Engine
[already installed](../engine/install/ubuntu.md):

```console
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ which docker
/usr/bin/docker
$ sudo ln -s /usr/bin/docker /usr/local/bin/com.docker.cli
```

You can verify that this is working by checking that the new CLI works with the default context:

```console
$ ./docker-aci --context default ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
$ echo $?
0
```

To make this CLI with ACI integration your default Docker CLI, you must move it to a directory in your `PATH` with higher priority than the existing Docker CLI.

Again, on a fresh Ubuntu 20.04:

```console
$ which docker
/usr/bin/docker
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ sudo mv docker-aci /usr/local/bin/docker
$ which docker
/usr/local/bin/docker
$ docker version
...
 Azure integration  0.1.4
...
```

### Supported commands

After you have installed the Docker ACI Integration CLI, run `--help` to see the current list of commands.

### Uninstall

To remove the Docker Azure Integration CLI, you need to remove the binary you downloaded and `com.docker.cli` from your `PATH`. If you installed using the script, this can be done as follows:

```console
$ sudo rm /usr/local/bin/docker /usr/local/bin/com.docker.cli
```

## Feedback

Thank you for trying out Docker Azure Integration. Your feedback is very important to us. Let us know your feedback by creating an issue in the [compose-cli](https://github.com/docker/compose-cli){: target="_blank" rel="noopener" class="_"} GitHub repository.

---
title: Docker Context
description: Learn about Docker Context
keywords: engine, context, cli, kubernetes
---

## Introduction

This guide shows how _contexts_ make it easy for a **single Docker CLI** to manage multiple Swarm clusters, multiple Kubernetes clusters, and multiple individual Docker nodes.

A single Docker CLI can have multiple contexts. Each context contains all of the endpoint and security information required to manage a different cluster or node. The `docker context` command makes it easy to configure these contexts and switch between them.

As an example, a single Docker client on your company laptop might be configured with two contexts: **dev-k8s** and **prod-swarm**. **dev-k8s** contains the endpoint data and security credentials to configure and manage a Kubernetes cluster in a development environment. **prod-swarm** contains everything required to manage a Swarm cluster in a production environment. Once these contexts are configured, you can use the top-level `docker context use <context-name>` to easily switch between them.

For information on using Docker Context to deploy your apps to the cloud, see [Deploying Docker containers on Azure](../../cloud/aci-integration.md) and [Deploying Docker containers on ECS](../../cloud/ecs-integration.md).

## Prerequisites

To follow the examples in this guide, you'll need:

- A Docker client that supports the top-level `context` command

Run `docker context` to verify that your Docker client supports contexts.

You will also need one of the following:

- Docker Swarm cluster
- Single-engine Docker node
- Kubernetes cluster

## The anatomy of a context

A context is a combination of several properties. These include:

- Name
- Endpoint configuration
- TLS info
- Orchestrator

The easiest way to see what a context looks like is to view the **default** context.

```console
$ docker context ls
NAME        DESCRIPTION   DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *   Current...    unix:///var/run/docker.sock                         swarm
```

This shows a single context called "default". It's configured to talk to a Swarm cluster through the local `/var/run/docker.sock` Unix socket. It has no Kubernetes endpoint configured.

The asterisk in the `NAME` column indicates that this is the active context. This means all `docker` commands will be executed against the "default" context unless overridden with environment variables such as `DOCKER_HOST` and `DOCKER_CONTEXT`, or on the command-line with the `--context` and `--host` flags.

Dig a bit deeper with `docker context inspect`. In this example, we're inspecting the context called `default`.

```console
$ docker context inspect default
[
    {
        "Name": "default",
        "Metadata": {
            "StackOrchestrator": "swarm"
        },
        "Endpoints": {
            "docker": {
                "Host": "unix:///var/run/docker.sock",
                "SkipTLSVerify": false
            }
        },
        "TLSMaterial": {},
        "Storage": {
            "MetadataPath": "\u003cIN MEMORY\u003e",
            "TLSPath": "\u003cIN MEMORY\u003e"
        }
    }
]
```

This context is using "swarm" as the orchestrator (`Metadata.StackOrchestrator`). It is configured to talk to an endpoint exposed on a local Unix socket at `/var/run/docker.sock` (`Endpoints.docker.Host`), and requires TLS verification (`Endpoints.docker.SkipTLSVerify`).

### Create a new context

You can create new contexts with the `docker context create` command.

The following example creates a new context called "docker-test" and specifies the following:

- Default orchestrator = Swarm
- Issue commands to the local Unix socket `/var/run/docker.sock`

```console
$ docker context create docker-test \
  --default-stack-orchestrator=swarm \
  --docker host=unix:///var/run/docker.sock

Successfully created context "docker-test"
```

The new context is stored in a `meta.json` file below `~/.docker/contexts/`. Each new context you create gets its own `meta.json` stored in a dedicated sub-directory of `~/.docker/contexts/`.

> **Note:** The default context behaves differently than manually created contexts. It does not have a `meta.json` configuration file, and it dynamically updates based on the current configuration. For example, if you switch your current Kubernetes config using `kubectl config use-context`, the default Docker context will dynamically update itself to the new Kubernetes endpoint.

You can view the new context with `docker context ls` and `docker context inspect <context-name>`.

The following can be used to create a config with Kubernetes as the default orchestrator using the existing kubeconfig stored in `/home/ubuntu/.kube/config`. For this to work, you will need a valid kubeconfig file in `/home/ubuntu/.kube/config`. If your kubeconfig has more than one context, the current context (`kubectl config current-context`) will be used.

```console
$ docker context create k8s-test \
  --default-stack-orchestrator=kubernetes \
  --kubernetes config-file=/home/ubuntu/.kube/config \
  --docker host=unix:///var/run/docker.sock

Successfully created context "k8s-test"
```

You can view all contexts on the system with `docker context ls`.

```console
$ docker context ls
NAME          DESCRIPTION   DOCKER ENDPOINT               KUBERNETES ENDPOINT               ORCHESTRATOR
default *     Current       unix:///var/run/docker.sock   https://35.226.99.100 (default)   swarm
k8s-test                    unix:///var/run/docker.sock   https://35.226.99.100 (default)   kubernetes
docker-test                 unix:///var/run/docker.sock                                     swarm
```

The current context is indicated with an asterisk ("\*").

## Use a different context

You can use `docker context use` to quickly switch between contexts.

The following command will switch the `docker` CLI to use the "k8s-test" context.

```console
$ docker context use k8s-test

k8s-test
Current context is now "k8s-test"
```

Verify the operation by listing all contexts and ensuring the asterisk ("\*") is against the "k8s-test" context.

```console
$ docker context ls
NAME          DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT               ORCHESTRATOR
default       Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://35.226.99.100 (default)   swarm
docker-test                                             unix:///var/run/docker.sock                                     swarm
k8s-test *                                              unix:///var/run/docker.sock   https://35.226.99.100 (default)   kubernetes
```

`docker` commands will now target endpoints defined in the "k8s-test" context.

You can also set the current context using the `DOCKER_CONTEXT` environment variable. This overrides the context set with `docker context use`.

Use the appropriate command below to set the context to `docker-test` using an environment variable.

Windows PowerShell:

```console
> $Env:DOCKER_CONTEXT="docker-test"
```

Linux:

```console
$ export DOCKER_CONTEXT=docker-test
```

Run a `docker context ls` to verify that the "docker-test" context is now the active context.

You can also use the global `--context` flag to override the context specified by the `DOCKER_CONTEXT` environment variable. For example, the following will send the command to a context called "production".

```console
$ docker --context production container ls
```

## Exporting and importing Docker contexts

The `docker context` command makes it easy to export and import contexts on different machines with the Docker client installed.

You can use the `docker context export` command to export an existing context to a file. This file can later be imported on another machine that has the `docker` client installed.

By default, contexts will be exported as _native Docker contexts_. You can export and import these using the `docker context` command. If the context you are exporting includes a Kubernetes endpoint, the Kubernetes part of the context will be included in the `export` and `import` operations.

There is also an option to export just the Kubernetes part of a context. This will produce a native kubeconfig file that can be manually merged with an existing `~/.kube/config` file on another host that has `kubectl` installed. You cannot export just the Kubernetes portion of a context and then import it with `docker context import`. The only way to import the exported Kubernetes config is to manually merge it into an existing kubeconfig file.

Let's look at exporting and importing a native Docker context.

### Exporting and importing a native Docker context

The following example exports an existing context called "docker-test". It will be written to a file called `docker-test.dockercontext`.

```console
$ docker context export docker-test
Written file "docker-test.dockercontext"
```

Check the contents of the export file.

```console
$ cat docker-test.dockercontext
meta.json0000644000000000000000000000022300000000000011023 0ustar0000000000000000{"Name":"docker-test","Metadata":{"StackOrchestrator":"swarm"},"Endpoints":{"docker":{"Host":"unix:///var/run/docker.sock","SkipTLSVerify":false}}}tls0000700000000000000000000000000000000000000007716 5ustar0000000000000000
```

This file can be imported on another host using `docker context import`. The target host must have the Docker client installed.

```console
$ docker context import docker-test docker-test.dockercontext
docker-test
Successfully imported context "docker-test"
```

You can verify that the context was imported with `docker context ls`.

The format of the import command is `docker context import <context-name> <context-file>`.

Now, let's look at exporting just the Kubernetes parts of a context.

### Exporting a Kubernetes context

You can export a Kubernetes context only if the context you are exporting has a Kubernetes endpoint configured. You cannot import a Kubernetes context using `docker context import`.

These steps will use the `--kubeconfig` flag to export **only** the Kubernetes elements of the existing `k8s-test` context to a file called "k8s-test.kubeconfig". The `cat` command will then show that it's exported as a valid kubeconfig file.

```console
$ docker context export k8s-test --kubeconfig
Written file "k8s-test.kubeconfig"
```

Verify that the exported file contains a valid kubectl config.

```console
$ cat k8s-test.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
    <Snip>
    server: https://35.226.99.100
  name: cluster
contexts:
- context:
    cluster: cluster
    namespace: default
    user: authInfo
  name: context
current-context: context
kind: Config
preferences: {}
users:
- name: authInfo
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /snap/google-cloud-sdk/77/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```

You can merge this with an existing `~/.kube/config` file on another machine.

## Updating a context

You can use `docker context update` to update fields in an existing context.

The following example updates the "Description" field in the existing `k8s-test` context.

```console
$ docker context update k8s-test --description "Test Kubernetes cluster"
k8s-test
Successfully updated context "k8s-test"
```
|
||||

---
title: Vulnerability scanning for Docker local images
description: Vulnerability scanning for Docker local images
keywords: Docker, scan, Snyk, images, local, CVE, vulnerability, security
toc_min: 1
toc_max: 2
---

{% include sign-up-cta.html
  body="Did you know that you can now get 10 free scans per month? Sign in to Docker to start scanning your images for vulnerabilities."
  header-text="Scan your images for free"
  target-url="https://www.docker.com/pricing?utm_source=docker&utm_medium=webreferral&utm_campaign=docs_driven_upgrade_scan"
%}

Looking to speed up your development cycles? Quickly detect and learn how to remediate CVEs in your images by running `docker scan IMAGE_NAME`. Check out [How to scan images](#how-to-scan-images) for details.

Vulnerability scanning for Docker local images allows developers and development teams to review the security state of their container images and take actions to fix issues identified during the scan, resulting in more secure deployments. Docker Scan runs on the Snyk engine, providing users with visibility into the security posture of their local Dockerfiles and local images.

Users trigger vulnerability scans through the CLI, and use the CLI to view the scan results. The scan results contain a list of Common Vulnerabilities and Exposures (CVEs), the sources, such as OS packages and libraries, the versions in which they were introduced, and a recommended fixed version (if available) to remediate the CVEs discovered.

> **Log4j 2 CVE-2021-44228**
>
> Versions of `docker scan` earlier than `v0.11.0` are not able to detect [Log4j 2
> CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){:
> target="_blank" rel="noopener" class="_"}. You must update your Docker
> Desktop installation to 4.3.1 or higher to fix this issue. For more
> information, see [Scan images for Log4j 2 CVE](#scan-images-for-log4j-2-cve).
{: .important}

For information about the system requirements to run vulnerability scanning, see [Prerequisites](#prerequisites).

This page contains information about the `docker scan` CLI command. For information about automatically scanning Docker images through Docker Hub, see [Hub Vulnerability Scanning](/docker-hub/vulnerability-scanning/).
## Scan images for Log4j 2 CVE

Docker Scan versions earlier than `v0.11.0` do not detect [Log4j 2
CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){:
target="_blank" rel="noopener" class="_"} when you scan your
images for vulnerabilities. You must update your Docker installation to the
latest version to fix this issue.

If you are using the `docker scan` plugin shipped with Docker Desktop, update Docker Desktop to version 4.3.1 or higher. See the release notes for [Mac](../../desktop/mac/release-notes/index.md) and [Windows](../../desktop/windows/release-notes/index.md) for download information.

If you are using Linux, run the following command to manually install the latest version of `docker scan`:

On `.deb`-based distros, such as Ubuntu and Debian:

```console
$ apt-get update && apt-get install docker-scan-plugin
```

On rpm-based distros, such as CentOS or Fedora:

```console
$ yum install docker-scan-plugin
```

Alternatively, you can manually download the `docker scan` binaries from the [Docker Scan](https://github.com/docker/scan-cli-plugin/releases/tag/v0.11.0){: target="_blank" rel="noopener" class="_"} GitHub repository and [install](https://github.com/docker/scan-cli-plugin){: target="_blank" rel="noopener" class="_"} them in the plugins directory.
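As a sketch of that manual install on Linux (the release version and asset name below are assumptions — check the releases page for the current ones), the binary goes into Docker's CLI plugins directory:

```console
$ mkdir -p ~/.docker/cli-plugins
$ curl -fL -o ~/.docker/cli-plugins/docker-scan \
    "https://github.com/docker/scan-cli-plugin/releases/download/v0.11.0/docker-scan_linux_amd64"
$ chmod +x ~/.docker/cli-plugins/docker-scan
$ docker scan --version
```

If the final command prints a version, the CLI has picked up the plugin.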

### Verify the `docker scan` version

After upgrading `docker scan`, verify that you are running the latest version by running the following command:

```console
$ docker scan --accept-license --version
Version: v0.12.0
Git commit: 1074dd0
Provider: Snyk (1.790.0 (standalone))
```

If your scan output contains `ORGAPACHELOGGINGLOG4J`, it is likely that your image is affected by the Log4j 2 CVE-2021-44228 vulnerability. When you run the updated version of `docker scan`, you should also see a message in the output log similar to:

```console
Upgrade org.apache.logging.log4j:log4j-core@2.14.0 to org.apache.logging.log4j:log4j-core@2.15.0 to fix
✗ Arbitrary Code Execution (new) [Critical Severity][https://snyk.io/vuln/SNYK-JAVA-ORGAPACHELOGGINGLOG4J-2314720] in org.apache.logging.log4j:log4j-core@2.14.0
  introduced by org.apache.logging.log4j:log4j-core@2.14.0
```

For more information, read our blog post [Apache Log4j 2 CVE-2021-44228](https://www.docker.com/blog/apache-log4j-2-cve-2021-44228/){: target="_blank" rel="noopener" class="_"}.
## How to scan images

The `docker scan` command allows you to scan existing Docker images using the image name or ID. For example, run the following command to scan the `hello-world` image:

```console
$ docker scan hello-world

Testing hello-world...

Organization: docker-desktop-test
Package manager: linux
Project name: docker-image|hello-world
Docker image: hello-world
Licenses: enabled

✓ Tested 0 dependencies for known issues, no vulnerable paths found.

Note that we do not currently have vulnerability data for your image.
```

### Get a detailed scan report

You can get a detailed scan report about a Docker image by providing the Dockerfile used to create the image. The syntax is `docker scan --file PATH_TO_DOCKERFILE DOCKER_IMAGE`.

For example, if you apply the option to the `docker-scan` test image, it displays the following result:

```console
$ docker scan --file Dockerfile docker-scan:e2e
Testing docker-scan:e2e
...
✗ High severity vulnerability found in perl
  Description: Integer Overflow or Wraparound
  Info: https://snyk.io/vuln/SNYK-DEBIAN10-PERL-570802
  Introduced through: git@1:2.20.1-2+deb10u3, meta-common-packages@meta
  From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6
  From: git@1:2.20.1-2+deb10u3 > liberror-perl@0.17027-2 > perl@5.28.1-6
  From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6 > perl/perl-modules-5.28@5.28.1-6
  and 3 more...
  Introduced by your base image (golang:1.14.6)

Organization: docker-desktop-test
Package manager: deb
Target file: Dockerfile
Project name: docker-image|99138c65ebc7
Docker image: 99138c65ebc7
Base image: golang:1.14.6
Licenses: enabled

Tested 200 dependencies for known issues, found 157 issues.

According to our scan, you are currently using the most secure version of the selected base image
```
### Excluding the base image

When using `docker scan` with the `--file` flag, you can also add the `--exclude-base` flag. This excludes vulnerabilities of the base image (specified in the Dockerfile using the `FROM` directive) from your report. For example:

```console
$ docker scan --file Dockerfile --exclude-base docker-scan:e2e
Testing docker-scan:e2e
...
✗ Medium severity vulnerability found in libidn2/libidn2-0
  Description: Improper Input Validation
  Info: https://snyk.io/vuln/SNYK-DEBIAN10-LIBIDN2-474100
  Introduced through: iputils/iputils-ping@3:20180629-2+deb10u1, wget@1.20.1-1.1, curl@7.64.0-4+deb10u1, git@1:2.20.1-2+deb10u3
  From: iputils/iputils-ping@3:20180629-2+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1
  From: wget@1.20.1-1.1 > libidn2/libidn2-0@2.0.5-1+deb10u1
  From: curl@7.64.0-4+deb10u1 > curl/libcurl4@7.64.0-4+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1
  and 3 more...
  Introduced in your Dockerfile by 'RUN apk add -U --no-cache wget tar'

Organization: docker-desktop-test
Package manager: deb
Target file: Dockerfile
Project name: docker-image|99138c65ebc7
Docker image: 99138c65ebc7
Base image: golang:1.14.6
Licenses: enabled

Tested 200 dependencies for known issues, found 16 issues.
```
### Viewing the JSON output

You can also display the scan result as JSON output by adding the `--json` flag to the command. For example:

```console
$ docker scan --json hello-world
{
  "vulnerabilities": [],
  "ok": true,
  "dependencyCount": 0,
  "org": "docker-desktop-test",
  "policy": "# Snyk (https://snyk.io) policy file, patches or ignores known vulnerabilities.\nversion: v1.19.0\nignore: {}\npatch: {}\n",
  "isPrivate": true,
  "licensesPolicy": {
    "severities": {},
    "orgLicenseRules": {
      "AGPL-1.0": {
        "licenseType": "AGPL-1.0",
        "severity": "high",
        "instructions": ""
      },
      ...
      "SimPL-2.0": {
        "licenseType": "SimPL-2.0",
        "severity": "high",
        "instructions": ""
      }
    }
  },
  "packageManager": "linux",
  "ignoreSettings": null,
  "docker": {
    "baseImageRemediation": {
      "code": "SCRATCH_BASE_IMAGE",
      "advice": [
        {
          "message": "Note that we do not currently have vulnerability data for your image.",
          "bold": true,
          "color": "yellow"
        }
      ]
    },
    "binariesVulns": {
      "issuesData": {},
      "affectedPkgs": {}
    }
  },
  "summary": "No known vulnerabilities",
  "filesystemPolicy": false,
  "uniqueCount": 0,
  "projectName": "docker-image|hello-world",
  "path": "hello-world"
}
```
In addition to the `--json` flag, you can also use the `--group-issues` flag to display a vulnerability only once in the scan report:

```console
$ docker scan --json --group-issues docker-scan:e2e
{
  {
    "title": "Improper Check for Dropped Privileges",
    ...
    "packageName": "bash",
    "language": "linux",
    "packageManager": "debian:10",
    "description": "## Overview\nAn issue was discovered in disable_priv_mode in shell.c in GNU Bash through 5.0 patch 11. By default, if Bash is run with its effective UID not equal to its real UID, it will drop privileges by setting its effective UID to its real UID. However, it does so incorrectly. On Linux and other systems that support \"saved UID\" functionality, the saved UID is not dropped. An attacker with command execution in the shell can use \"enable -f\" for runtime loading of a new builtin, which can be a shared object that calls setuid() and therefore regains privileges. However, binaries running with an effective UID of 0 are unaffected.\n\n## References\n- [CONFIRM](https://security.netapp.com/advisory/ntap-20200430-0003/)\n- [Debian Security Tracker](https://security-tracker.debian.org/tracker/CVE-2019-18276)\n- [GitHub Commit](https://github.com/bminor/bash/commit/951bdaad7a18cc0dc1036bba86b18b90874d39ff)\n- [MISC](http://packetstormsecurity.com/files/155498/Bash-5.0-Patch-11-Privilege-Escalation.html)\n- [MISC](https://www.youtube.com/watch?v=-wGtxJ8opa8)\n- [Ubuntu CVE Tracker](http://people.ubuntu.com/~ubuntu-security/cve/CVE-2019-18276)\n",
    "identifiers": {
      "ALTERNATIVE": [],
      "CVE": [
        "CVE-2019-18276"
      ],
      "CWE": [
        "CWE-273"
      ]
    },
    "severity": "low",
    "severityWithCritical": "low",
    "cvssScore": 7.8,
    "CVSSv3": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:F",
    ...
    "from": [
      "docker-image|docker-scan@e2e",
      "bash@5.0-4"
    ],
    "upgradePath": [],
    "isUpgradable": false,
    "isPatchable": false,
    "name": "bash",
    "version": "5.0-4"
  },
  ...
  "summary": "880 vulnerable dependency paths",
  "filesystemPolicy": false,
  "filtered": {
    "ignore": [],
    "patch": []
  },
  "uniqueCount": 158,
  "projectName": "docker-image|docker-scan",
  "platform": "linux/amd64",
  "path": "docker-scan:e2e"
}
```
You can find all the sources of the vulnerability in the `from` section.
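For large reports, it can help to filter this JSON on the command line. As a sketch (assuming `jq` is installed; `docker-scan:e2e` is the example image used above), the following lists each unique dependency path reported under `from`:

```console
$ docker scan --json docker-scan:e2e \
    | jq -r '.vulnerabilities[].from | join(" > ")' \
    | sort -u
```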

### Checking the dependency tree

To view the dependency tree of your image, use the `--dependency-tree` flag. This displays all the dependencies before the scan result. For example:

```console
$ docker scan --dependency-tree debian:buster

$ docker-image|99138c65ebc7 @ latest
├─ ca-certificates @ 20200601~deb10u1
│  └─ openssl @ 1.1.1d-0+deb10u3
│     └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3
├─ curl @ 7.64.0-4+deb10u1
│  └─ curl/libcurl4 @ 7.64.0-4+deb10u1
│     ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3
│     ├─ krb5/libgssapi-krb5-2 @ 1.17-3
│     │  ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3
│     │  ├─ krb5/libk5crypto3 @ 1.17-3
│     │  │  └─ krb5/libkrb5support0 @ 1.17-3
│     │  ├─ krb5/libkrb5-3 @ 1.17-3
│     │  │  ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3
│     │  │  ├─ krb5/libk5crypto3 @ 1.17-3
│     │  │  ├─ krb5/libkrb5support0 @ 1.17-3
│     │  │  └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3
│     │  └─ krb5/libkrb5support0 @ 1.17-3
│     ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1
│     │  └─ libunistring/libunistring2 @ 0.9.10-1
│     ├─ krb5/libk5crypto3 @ 1.17-3
│     ├─ krb5/libkrb5-3 @ 1.17-3
│     ├─ openldap/libldap-2.4-2 @ 2.4.47+dfsg-3+deb10u2
│     │  ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4
│     │  │  ├─ nettle/libhogweed4 @ 3.4.1-1
│     │  │  │  └─ nettle/libnettle6 @ 3.4.1-1
│     │  │  ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1
│     │  │  ├─ nettle/libnettle6 @ 3.4.1-1
│     │  │  ├─ p11-kit/libp11-kit0 @ 0.23.15-2
│     │  │  │  └─ libffi/libffi6 @ 3.2.1-9
│     │  │  ├─ libtasn1-6 @ 4.13-3
│     │  │  └─ libunistring/libunistring2 @ 0.9.10-1
│     │  ├─ cyrus-sasl2/libsasl2-2 @ 2.1.27+dfsg-1+deb10u1
│     │  │  └─ cyrus-sasl2/libsasl2-modules-db @ 2.1.27+dfsg-1+deb10u1
│     │  │     └─ db5.3/libdb5.3 @ 5.3.28+dfsg1-0.5
│     │  └─ openldap/libldap-common @ 2.4.47+dfsg-3+deb10u2
│     ├─ nghttp2/libnghttp2-14 @ 1.36.0-2+deb10u1
│     ├─ libpsl/libpsl5 @ 0.20.2-2
│     │  ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1
│     │  └─ libunistring/libunistring2 @ 0.9.10-1
│     ├─ rtmpdump/librtmp1 @ 2.4+20151223.gitfa8646d.1-2
│     │  ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4
│     │  ├─ nettle/libhogweed4 @ 3.4.1-1
│     │  └─ nettle/libnettle6 @ 3.4.1-1
│     ├─ libssh2/libssh2-1 @ 1.8.0-2.1
│     │  └─ libgcrypt20 @ 1.8.4-5
│     └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3
├─ gnupg2/dirmngr @ 2.2.12-1+deb10u1
...

Organization: docker-desktop-test
Package manager: deb
Project name: docker-image|99138c65ebc7
Docker image: 99138c65ebc7
Licenses: enabled

Tested 200 dependencies for known issues, found 157 issues.

For more free scans that keep your images secure, sign up to Snyk at https://dockr.ly/3ePqVcp.
```

For more information about the vulnerability data, see [Docker Vulnerability Scanning CLI Cheat Sheet](https://goto.docker.com/rs/929-FJL-178/images/cheat-sheet-docker-desktop-vulnerability-scanning-CLI.pdf){: target="_blank" rel="noopener" class="_"}.
### Limiting the level of vulnerabilities displayed

`docker scan` allows you to choose the level of vulnerabilities displayed in your scan report using the `--severity` flag. You can set the severity flag to `low`, `medium`, or `high` depending on the level of vulnerabilities you'd like to see in your report. For example, if you set the severity level to `medium`, the scan report displays all vulnerabilities that are classified as medium and high.

```console
$ docker scan --severity=medium docker-scan:e2e

Testing docker-scan:e2e...

✗ Medium severity vulnerability found in sqlite3/libsqlite3-0
  Description: Divide By Zero
  Info: https://snyk.io/vuln/SNYK-DEBIAN10-SQLITE3-466337
  Introduced through: gnupg2/gnupg@2.2.12-1+deb10u1, subversion@1.10.4-1+deb10u1, mercurial@4.8.2-1+deb10u1
  From: gnupg2/gnupg@2.2.12-1+deb10u1 > gnupg2/gpg@2.2.12-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3
  From: subversion@1.10.4-1+deb10u1 > subversion/libsvn1@1.10.4-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3
  From: mercurial@4.8.2-1+deb10u1 > python-defaults/python@2.7.16-1 > python2.7@2.7.16-2+deb10u1 > python2.7/libpython2.7-stdlib@2.7.16-2+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3

✗ Medium severity vulnerability found in sqlite3/libsqlite3-0
  Description: Uncontrolled Recursion
...
✗ High severity vulnerability found in binutils/binutils-common
  Description: Missing Release of Resource after Effective Lifetime
  Info: https://snyk.io/vuln/SNYK-DEBIAN10-BINUTILS-403318
  Introduced through: gcc-defaults/g++@4:8.3.0-1
  From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-common@2.31.1-16
  From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/libbinutils@2.31.1-16 > binutils/binutils-common@2.31.1-16
  From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-x86-64-linux-gnu@2.31.1-16 > binutils/binutils-common@2.31.1-16
  and 4 more...

Organization: docker-desktop-test
Package manager: deb
Project name: docker-image|docker-scan
Docker image: docker-scan:e2e
Platform: linux/amd64
Licenses: enabled

Tested 200 dependencies for known issues, found 37 issues.
```
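In our experience, `docker scan` (like the underlying Snyk CLI) exits with a non-zero status when issues at or above the threshold are found — behavior worth verifying against your installed version — so you can combine `--severity` with the exit code to gate a build. A sketch, with `myimage:latest` as a stand-in for your own image:

```console
$ if ! docker scan --accept-license --severity high myimage:latest; then
>   echo "high severity issues found" >&2
>   exit 1
> fi
```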

## Provider authentication

If you have an existing Snyk account, you can directly use your Snyk [API token](https://app.snyk.io/account){: target="_blank" rel="noopener" class="_"}:

```console
$ docker scan --login --token SNYK_AUTH_TOKEN

Your account has been authenticated. Snyk is now ready to be used.
```

If you use the `--login` flag without any token, you will be redirected to the Snyk website to log in.

## Prerequisites

To run vulnerability scanning on your Docker images, you must meet the following requirements:

1. Download and install the latest version of Docker Desktop.

   - [Download for Mac with Intel chip](https://desktop.docker.com/mac/main/amd64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-amd64)
   - [Download for Mac with Apple chip](https://desktop.docker.com/mac/main/arm64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-arm64)
   - [Download for Windows](https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe)

2. Sign into [Docker Hub](https://hub.docker.com){: target="_blank" rel="noopener" class="_"}.

3. From the Docker Desktop menu, select **Sign in / Create Docker ID**. Alternatively, open a terminal and run the command `docker login`.

4. (Optional) You can create a [Snyk account](https://dockr.ly/3ePqVcp){: target="_blank" rel="noopener" class="_"} for scans, or use the additional monthly free scans provided by Snyk with your Docker Hub account.

Check your installation by running `docker scan --version`; it should print the current version of `docker scan` and the Snyk engine version. For example:

```console
$ docker scan --version
Version: v0.5.0
Git commit: 5a09266
Provider: Snyk (1.432.0)
```
> **Note:**
>
> Docker Scan uses the Snyk binary installed in your environment by default. If
> this is not available, it uses the Snyk binary embedded in Docker Desktop.
> The minimum version required for Snyk is `1.385.0`.
## Supported options
|
||||
|
||||
The high-level `docker scan` command scans local images using the image name or the image ID. It supports the following options:
|
||||
|
||||
| Option | Description |
|
||||
|:------------------------------------------------------------------ :------------------------------------------------|
|
||||
| `--accept-license` | Accept the license agreement of the third-party scanning provider |
|
||||
| `--dependency-tree` | Display the dependency tree of the image along with scan results |
|
||||
| `--exclude-base` | Exclude the base image during scanning. This option requires the --file option to be set |
|
||||
| `-f`, `--file string` | Specify the location of the Dockerfile associated with the image. This option displays a detailed scan result |
|
||||
| `--json` | Display the result of the scan in JSON format|
|
||||
| `--login` | Log into Snyk using an optional token (using the flag --token), or by using a web-based token |
|
||||
| `--reject-license` | Reject the license agreement of the third-party scanning provider |
|
||||
| `--severity string` | Only report vulnerabilities of provided level or higher (low, medium, high) |
|
||||
| `--token string` | Use the authentication token to log into the third-party scanning provider |
|
||||
| `--version` | Display the Docker Scan plugin version |
|
||||
|
||||
## Known issues
|
||||
|
||||
**WSL 2**
|
||||
|
||||
- The Vulnerability scanning feature doesn’t work with Alpine distributions.
|
||||
- If you are using Debian and OpenSUSE distributions, the login process only works with the `--token` flag, you won’t be redirected to the Snyk website for authentication.
|
||||
|
||||
## Feedback
|
||||
|
||||
Your feedback is very important to us. Let us know your feedback by creating an issue in the [scan-cli-plugin](https://github.com/docker/scan-cli-plugin/issues/new){: target="_blank" rel="noopener" class="_"} GitHub repository.
|
||||
---
title: Vulnerability scanning for Docker local images
description: Vulnerability scanning for Docker local images
keywords: Docker, scan, Snyk, images, local, CVE, vulnerability, security
toc_min: 1
toc_max: 2
---

{% include sign-up-cta.html
  body="Did you know that you can now get 10 free scans per month? Sign in to Docker to start scanning your images for vulnerabilities."
  header-text="Scan your images for free"
  target-url="https://www.docker.com/pricing?utm_source=docker&utm_medium=webreferral&utm_campaign=docs_driven_upgrade_scan"
%}
Looking to speed up your development cycles? Quickly detect and learn how to remediate CVEs in your images by running `docker scan IMAGE_NAME`. Check out [How to scan images](#how-to-scan-images) for details.

Vulnerability scanning for Docker local images allows developers and development teams to review the security state of their container images and take action to fix issues identified during the scan, resulting in more secure deployments. Docker Scan runs on the Snyk engine, providing users with visibility into the security posture of their local Dockerfiles and local images.

Users trigger vulnerability scans through the CLI, and use the CLI to view the
scan results. The scan results contain a list of Common Vulnerabilities and
Exposures (CVEs), the sources, such as OS packages and libraries, the versions
in which they were introduced, and a recommended fixed version (if available)
to remediate the CVEs discovered.

> **Log4j 2 CVE-2021-44228**
>
> Versions of `docker scan` earlier than `v0.11.0` are not able to detect [Log4j 2
> CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){:
> target="_blank" rel="noopener" class="_"}. You must update your Docker
> Desktop installation to 4.3.1 or higher to fix this issue. For more
> information, see [Scan images for Log4j 2 CVE](#scan-images-for-log4j-2-cve).
{: .important}

For information about the system requirements to run vulnerability scanning, see [Prerequisites](#prerequisites).

This page contains information about the `docker scan` CLI command. For
information about automatically scanning Docker images through Docker Hub, see
[Hub Vulnerability Scanning](/docker-hub/vulnerability-scanning/).
## Scan images for Log4j 2 CVE

Docker Scan versions earlier than `v0.11.0` do not detect [Log4j 2
CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228){:
target="_blank" rel="noopener" class="_"} when you scan your
images for vulnerabilities. You must update your Docker installation to the
latest version to fix this issue.

If you are using the `docker scan` plugin shipped
with Docker Desktop, update Docker Desktop to version 4.3.1 or
higher. See the release notes for [Mac](../../desktop/mac/release-notes/index.md) and
[Windows](../../desktop/windows/release-notes/index.md) for download information.

If you are using Linux, run the following command to manually install the latest
version of `docker scan`:

On `.deb`-based distros, such as Ubuntu and Debian:

```console
$ apt-get update && apt-get install docker-scan-plugin
```

On rpm-based distros, such as CentOS or Fedora:

```console
$ yum install docker-scan-plugin
```

Alternatively, you can manually download the `docker scan` binaries from the [Docker Scan](https://github.com/docker/scan-cli-plugin/releases/tag/v0.11.0){:
target="_blank" rel="noopener" class="_"} GitHub repository and
[install](https://github.com/docker/scan-cli-plugin){:
target="_blank" rel="noopener" class="_"} them in the plugins directory.
### Verify the `docker scan` version

After upgrading `docker scan`, verify you are running the latest version by
running the following command:

```console
$ docker scan --accept-license --version
Version:    v0.12.0
Git commit: 1074dd0
Provider:   Snyk (1.790.0 (standalone))
```

If your scan output contains `ORGAPACHELOGGINGLOG4J`, it is
likely that your code is affected by the Log4j 2 CVE-2021-44228 vulnerability. When you run the updated version of `docker scan`, you should also see a message
in the output log similar to:

```console
Upgrade org.apache.logging.log4j:log4j-core@2.14.0 to org.apache.logging.log4j:log4j-core@2.15.0 to fix
✗ Arbitrary Code Execution (new) [Critical Severity][https://snyk.io/vuln/SNYK-JAVA-ORGAPACHELOGGINGLOG4J-2314720] in org.apache.logging.log4j:log4j-core@2.14.0
introduced by org.apache.logging.log4j:log4j-core@2.14.0
```

For more information, read our blog post [Apache Log4j 2
CVE-2021-44228](https://www.docker.com/blog/apache-log4j-2-cve-2021-44228/){:
target="_blank" rel="noopener" class="_"}.
## How to scan images

The `docker scan` command allows you to scan existing Docker images using the image name or ID. For example, run the following command to scan the `hello-world` image:

```console
$ docker scan hello-world

Testing hello-world...

Organization: docker-desktop-test
Package manager: linux
Project name: docker-image|hello-world
Docker image: hello-world
Licenses: enabled

✓ Tested 0 dependencies for known issues, no vulnerable paths found.

Note that we do not currently have vulnerability data for your image.
```
### Get a detailed scan report

You can get a detailed scan report about a Docker image by providing the Dockerfile used to create the image. The syntax is `docker scan --file PATH_TO_DOCKERFILE DOCKER_IMAGE`.

For example, if you apply the option to the `docker-scan` test image, it displays the following result:

```console
$ docker scan --file Dockerfile docker-scan:e2e
Testing docker-scan:e2e
...
✗ High severity vulnerability found in perl
Description: Integer Overflow or Wraparound
Info: https://snyk.io/vuln/SNYK-DEBIAN10-PERL-570802
Introduced through: git@1:2.20.1-2+deb10u3, meta-common-packages@meta
From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6
From: git@1:2.20.1-2+deb10u3 > liberror-perl@0.17027-2 > perl@5.28.1-6
From: git@1:2.20.1-2+deb10u3 > perl@5.28.1-6 > perl/perl-modules-5.28@5.28.1-6
and 3 more...
Introduced by your base image (golang:1.14.6)

Organization: docker-desktop-test
Package manager: deb
Target file: Dockerfile
Project name: docker-image|99138c65ebc7
Docker image: 99138c65ebc7
Base image: golang:1.14.6
Licenses: enabled

Tested 200 dependencies for known issues, found 157 issues.

According to our scan, you are currently using the most secure version of the selected base image
```
### Excluding the base image

When using `docker scan` with the `--file` flag, you can also add the `--exclude-base` flag. This excludes vulnerabilities of the base image (specified in the Dockerfile using the `FROM` directive) from your report. For example:

```console
$ docker scan --file Dockerfile --exclude-base docker-scan:e2e
Testing docker-scan:e2e
...
✗ Medium severity vulnerability found in libidn2/libidn2-0
Description: Improper Input Validation
Info: https://snyk.io/vuln/SNYK-DEBIAN10-LIBIDN2-474100
Introduced through: iputils/iputils-ping@3:20180629-2+deb10u1, wget@1.20.1-1.1, curl@7.64.0-4+deb10u1, git@1:2.20.1-2+deb10u3
From: iputils/iputils-ping@3:20180629-2+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1
From: wget@1.20.1-1.1 > libidn2/libidn2-0@2.0.5-1+deb10u1
From: curl@7.64.0-4+deb10u1 > curl/libcurl4@7.64.0-4+deb10u1 > libidn2/libidn2-0@2.0.5-1+deb10u1
and 3 more...
Introduced in your Dockerfile by 'RUN apk add -U --no-cache wget tar'

Organization: docker-desktop-test
Package manager: deb
Target file: Dockerfile
Project name: docker-image|99138c65ebc7
Docker image: 99138c65ebc7
Base image: golang:1.14.6
Licenses: enabled

Tested 200 dependencies for known issues, found 16 issues.
```
### Viewing the JSON output

You can also display the scan result as JSON output by adding the `--json` flag to the command. For example:

```console
$ docker scan --json hello-world
{
  "vulnerabilities": [],
  "ok": true,
  "dependencyCount": 0,
  "org": "docker-desktop-test",
  "policy": "# Snyk (https://snyk.io) policy file, patches or ignores known vulnerabilities.\nversion: v1.19.0\nignore: {}\npatch: {}\n",
  "isPrivate": true,
  "licensesPolicy": {
    "severities": {},
    "orgLicenseRules": {
      "AGPL-1.0": {
        "licenseType": "AGPL-1.0",
        "severity": "high",
        "instructions": ""
      },
      ...
      "SimPL-2.0": {
        "licenseType": "SimPL-2.0",
        "severity": "high",
        "instructions": ""
      }
    }
  },
  "packageManager": "linux",
  "ignoreSettings": null,
  "docker": {
    "baseImageRemediation": {
      "code": "SCRATCH_BASE_IMAGE",
      "advice": [
        {
          "message": "Note that we do not currently have vulnerability data for your image.",
          "bold": true,
          "color": "yellow"
        }
      ]
    },
    "binariesVulns": {
      "issuesData": {},
      "affectedPkgs": {}
    }
  },
  "summary": "No known vulnerabilities",
  "filesystemPolicy": false,
  "uniqueCount": 0,
  "projectName": "docker-image|hello-world",
  "path": "hello-world"
}
```
In addition to the `--json` flag, you can also use the `--group-issues` flag to display each vulnerability only once in the scan report:

```console
$ docker scan --json --group-issues docker-scan:e2e
{
  {
    "title": "Improper Check for Dropped Privileges",
    ...
    "packageName": "bash",
    "language": "linux",
    "packageManager": "debian:10",
    "description": "## Overview\nAn issue was discovered in disable_priv_mode in shell.c in GNU Bash through 5.0 patch 11. By default, if Bash is run with its effective UID not equal to its real UID, it will drop privileges by setting its effective UID to its real UID. However, it does so incorrectly. On Linux and other systems that support \"saved UID\" functionality, the saved UID is not dropped. An attacker with command execution in the shell can use \"enable -f\" for runtime loading of a new builtin, which can be a shared object that calls setuid() and therefore regains privileges. However, binaries running with an effective UID of 0 are unaffected.\n\n## References\n- [CONFIRM](https://security.netapp.com/advisory/ntap-20200430-0003/)\n- [Debian Security Tracker](https://security-tracker.debian.org/tracker/CVE-2019-18276)\n- [GitHub Commit](https://github.com/bminor/bash/commit/951bdaad7a18cc0dc1036bba86b18b90874d39ff)\n- [MISC](http://packetstormsecurity.com/files/155498/Bash-5.0-Patch-11-Privilege-Escalation.html)\n- [MISC](https://www.youtube.com/watch?v=-wGtxJ8opa8)\n- [Ubuntu CVE Tracker](http://people.ubuntu.com/~ubuntu-security/cve/CVE-2019-18276)\n",
    "identifiers": {
      "ALTERNATIVE": [],
      "CVE": [
        "CVE-2019-18276"
      ],
      "CWE": [
        "CWE-273"
      ]
    },
    "severity": "low",
    "severityWithCritical": "low",
    "cvssScore": 7.8,
    "CVSSv3": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:F",
    ...
    "from": [
      "docker-image|docker-scan@e2e",
      "bash@5.0-4"
    ],
    "upgradePath": [],
    "isUpgradable": false,
    "isPatchable": false,
    "name": "bash",
    "version": "5.0-4"
  },
  ...
  "summary": "880 vulnerable dependency paths",
  "filesystemPolicy": false,
  "filtered": {
    "ignore": [],
    "patch": []
  },
  "uniqueCount": 158,
  "projectName": "docker-image|docker-scan",
  "platform": "linux/amd64",
  "path": "docker-scan:e2e"
}
```

You can find all the sources of the vulnerability in the `from` section.
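Because the `--json` output is machine-readable, you can post-process it with a few lines of scripting. The sketch below is an illustration, not part of the `docker scan` plugin; it assumes only the field names visible in the sample output above (`vulnerabilities`, `severity`) and counts issues per severity:

```python
import json
from collections import Counter

def summarize(report: dict) -> Counter:
    """Count vulnerabilities by severity in a `docker scan --json` report."""
    return Counter(v["severity"] for v in report.get("vulnerabilities", []))

# A minimal report shaped like the sample above; a real one would come
# from `docker scan --json IMAGE`.
sample = json.loads('{"vulnerabilities": [{"severity": "high"}, {"severity": "low"}, {"severity": "high"}]}')
print(summarize(sample))  # 2 high, 1 low
```

A report with an empty `vulnerabilities` list, like the `hello-world` example, yields an empty counter.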
### Checking the dependency tree

To view the dependency tree of your image, use the `--dependency-tree` flag. This displays all the dependencies before the scan result. For example:

```console
$ docker scan --dependency-tree debian:buster

$ docker-image|99138c65ebc7 @ latest
├─ ca-certificates @ 20200601~deb10u1
│ └─ openssl @ 1.1.1d-0+deb10u3
│ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3
├─ curl @ 7.64.0-4+deb10u1
│ └─ curl/libcurl4 @ 7.64.0-4+deb10u1
│ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3
│ ├─ krb5/libgssapi-krb5-2 @ 1.17-3
│ │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3
│ │ ├─ krb5/libk5crypto3 @ 1.17-3
│ │ │ └─ krb5/libkrb5support0 @ 1.17-3
│ │ ├─ krb5/libkrb5-3 @ 1.17-3
│ │ │ ├─ e2fsprogs/libcom-err2 @ 1.44.5-1+deb10u3
│ │ │ ├─ krb5/libk5crypto3 @ 1.17-3
│ │ │ ├─ krb5/libkrb5support0 @ 1.17-3
│ │ │ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3
│ │ └─ krb5/libkrb5support0 @ 1.17-3
│ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1
│ │ └─ libunistring/libunistring2 @ 0.9.10-1
│ ├─ krb5/libk5crypto3 @ 1.17-3
│ ├─ krb5/libkrb5-3 @ 1.17-3
│ ├─ openldap/libldap-2.4-2 @ 2.4.47+dfsg-3+deb10u2
│ │ ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4
│ │ │ ├─ nettle/libhogweed4 @ 3.4.1-1
│ │ │ │ └─ nettle/libnettle6 @ 3.4.1-1
│ │ │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1
│ │ │ ├─ nettle/libnettle6 @ 3.4.1-1
│ │ │ ├─ p11-kit/libp11-kit0 @ 0.23.15-2
│ │ │ │ └─ libffi/libffi6 @ 3.2.1-9
│ │ │ ├─ libtasn1-6 @ 4.13-3
│ │ │ └─ libunistring/libunistring2 @ 0.9.10-1
│ │ ├─ cyrus-sasl2/libsasl2-2 @ 2.1.27+dfsg-1+deb10u1
│ │ │ └─ cyrus-sasl2/libsasl2-modules-db @ 2.1.27+dfsg-1+deb10u1
│ │ │ └─ db5.3/libdb5.3 @ 5.3.28+dfsg1-0.5
│ │ └─ openldap/libldap-common @ 2.4.47+dfsg-3+deb10u2
│ ├─ nghttp2/libnghttp2-14 @ 1.36.0-2+deb10u1
│ ├─ libpsl/libpsl5 @ 0.20.2-2
│ │ ├─ libidn2/libidn2-0 @ 2.0.5-1+deb10u1
│ │ └─ libunistring/libunistring2 @ 0.9.10-1
│ ├─ rtmpdump/librtmp1 @ 2.4+20151223.gitfa8646d.1-2
│ │ ├─ gnutls28/libgnutls30 @ 3.6.7-4+deb10u4
│ │ ├─ nettle/libhogweed4 @ 3.4.1-1
│ │ └─ nettle/libnettle6 @ 3.4.1-1
│ ├─ libssh2/libssh2-1 @ 1.8.0-2.1
│ │ └─ libgcrypt20 @ 1.8.4-5
│ └─ openssl/libssl1.1 @ 1.1.1d-0+deb10u3
├─ gnupg2/dirmngr @ 2.2.12-1+deb10u1
...

Organization: docker-desktop-test
Package manager: deb
Project name: docker-image|99138c65ebc7
Docker image: 99138c65ebc7
Licenses: enabled

Tested 200 dependencies for known issues, found 157 issues.

For more free scans that keep your images secure, sign up to Snyk at https://dockr.ly/3ePqVcp.
```

For more information about the vulnerability data, see [Docker Vulnerability Scanning CLI Cheat Sheet](https://goto.docker.com/rs/929-FJL-178/images/cheat-sheet-docker-desktop-vulnerability-scanning-CLI.pdf){: target="_blank" rel="noopener" class="_"}.
### Limiting the level of vulnerabilities displayed

Docker scan allows you to choose the level of vulnerabilities displayed in your scan report using the `--severity` flag. You can set the severity flag to `low`, `medium`, or `high` depending on the level of vulnerabilities you'd like to see in your report. For example, if you set the severity level to `medium`, the scan report displays all vulnerabilities that are classified as medium and high.

```console
$ docker scan --severity=medium docker-scan:e2e

Testing docker-scan:e2e...

✗ Medium severity vulnerability found in sqlite3/libsqlite3-0
Description: Divide By Zero
Info: https://snyk.io/vuln/SNYK-DEBIAN10-SQLITE3-466337
Introduced through: gnupg2/gnupg@2.2.12-1+deb10u1, subversion@1.10.4-1+deb10u1, mercurial@4.8.2-1+deb10u1
From: gnupg2/gnupg@2.2.12-1+deb10u1 > gnupg2/gpg@2.2.12-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3
From: subversion@1.10.4-1+deb10u1 > subversion/libsvn1@1.10.4-1+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3
From: mercurial@4.8.2-1+deb10u1 > python-defaults/python@2.7.16-1 > python2.7@2.7.16-2+deb10u1 > python2.7/libpython2.7-stdlib@2.7.16-2+deb10u1 > sqlite3/libsqlite3-0@3.27.2-3

✗ Medium severity vulnerability found in sqlite3/libsqlite3-0
Description: Uncontrolled Recursion
...
✗ High severity vulnerability found in binutils/binutils-common
Description: Missing Release of Resource after Effective Lifetime
Info: https://snyk.io/vuln/SNYK-DEBIAN10-BINUTILS-403318
Introduced through: gcc-defaults/g++@4:8.3.0-1
From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-common@2.31.1-16
From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/libbinutils@2.31.1-16 > binutils/binutils-common@2.31.1-16
From: gcc-defaults/g++@4:8.3.0-1 > gcc-defaults/gcc@4:8.3.0-1 > gcc-8@8.3.0-6 > binutils@2.31.1-16 > binutils/binutils-x86-64-linux-gnu@2.31.1-16 > binutils/binutils-common@2.31.1-16
and 4 more...

Organization: docker-desktop-test
Package manager: deb
Project name: docker-image|docker-scan
Docker image: docker-scan:e2e
Platform: linux/amd64
Licenses: enabled

Tested 200 dependencies for known issues, found 37 issues.
```
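The same threshold semantics can be applied client-side when filtering `--json` output: `medium` keeps medium and high, `low` keeps everything. A minimal sketch, assuming only the `severity` field shown in the JSON samples earlier (the helper name is illustrative, not part of `docker scan`):

```python
# Rank order used by the --severity flag: a threshold keeps its own
# level and everything above it.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def at_or_above(vulns: list, threshold: str) -> list:
    """Keep vulnerabilities whose severity is at or above the threshold,
    mirroring `docker scan --severity=THRESHOLD`."""
    cutoff = SEVERITY_RANK[threshold]
    return [v for v in vulns if SEVERITY_RANK.get(v["severity"], 0) >= cutoff]

vulns = [{"severity": "low"}, {"severity": "medium"}, {"severity": "high"}]
print([v["severity"] for v in at_or_above(vulns, "medium")])  # ['medium', 'high']
```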
## Provider authentication

If you have an existing Snyk account, you can directly use your Snyk [API token](https://app.snyk.io/account){: target="_blank" rel="noopener" class="_"}:

```console
$ docker scan --login --token SNYK_AUTH_TOKEN

Your account has been authenticated. Snyk is now ready to be used.
```

If you use the `--login` flag without any token, you are redirected to the Snyk website to log in.
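In scripted setups such as CI, the token is typically injected through an environment variable or secret rather than typed interactively. A hypothetical helper that builds the invocation (the `login_command` name is illustrative, not part of `docker scan`):

```python
def login_command(token=None):
    """Build the `docker scan --login` invocation; omit the token to fall
    back to the web-based login flow."""
    cmd = ["docker", "scan", "--login"]
    if token:
        # Pass the Snyk API token explicitly for non-interactive login.
        cmd += ["--token", token]
    return cmd

# In CI you might run it with, e.g.:
#   subprocess.run(login_command(os.environ["SNYK_AUTH_TOKEN"]), check=True)
```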
## Prerequisites

To run vulnerability scanning on your Docker images, you must meet the following requirements:

1. Download and install the latest version of Docker Desktop.

    - [Download for Mac with Intel chip](https://desktop.docker.com/mac/main/amd64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-amd64)
    - [Download for Mac with Apple chip](https://desktop.docker.com/mac/main/arm64/Docker.dmg?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-mac-arm64)
    - [Download for Windows](https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe)

2. Sign in to [Docker Hub](https://hub.docker.com){: target="_blank" rel="noopener" class="_"}.

3. From the Docker Desktop menu, select **Sign in/ Create Docker ID**. Alternatively, open a terminal and run the command `docker login`.

4. (Optional) You can create a [Snyk account](https://dockr.ly/3ePqVcp){: target="_blank" rel="noopener" class="_"} for scans, or use the additional monthly free scans provided by Snyk with your Docker Hub account.

Check your installation by running `docker scan --version`. It prints the current version of `docker scan` and the Snyk engine version. For example:

```console
$ docker scan --version
Version: v0.5.0
Git commit: 5a09266
Provider: Snyk (1.432.0)
```

> **Note:**
>
> Docker Scan uses the Snyk binary installed in your environment by default. If
> this is not available, it uses the Snyk binary embedded in Docker Desktop.
> The minimum version required for Snyk is `1.385.0`.
## Supported options

The high-level `docker scan` command scans local images using the image name or the image ID. It supports the following options:

| Option | Description |
|:-------|:------------|
| `--accept-license` | Accept the license agreement of the third-party scanning provider |
| `--dependency-tree` | Display the dependency tree of the image along with scan results |
| `--exclude-base` | Exclude the base image during scanning. This option requires the `--file` option to be set |
| `-f`, `--file string` | Specify the location of the Dockerfile associated with the image. This option displays a detailed scan result |
| `--json` | Display the result of the scan in JSON format |
| `--login` | Log into Snyk using an optional token (using the flag `--token`), or by using a web-based token |
| `--reject-license` | Reject the license agreement of the third-party scanning provider |
| `--severity string` | Only report vulnerabilities of the provided level or higher (`low`, `medium`, `high`) |
| `--token string` | Use the authentication token to log into the third-party scanning provider |
| `--version` | Display the Docker Scan plugin version |
## Known issues

**WSL 2**

- The vulnerability scanning feature doesn't work with Alpine distributions.
- If you are using Debian or openSUSE distributions, the login process only works with the `--token` flag; you won't be redirected to the Snyk website for authentication.

## Feedback

Your feedback is very important to us. Let us know your feedback by creating an issue in the [scan-cli-plugin](https://github.com/docker/scan-cli-plugin/issues/new){: target="_blank" rel="noopener" class="_"} GitHub repository.
|
|
|
@ -1,74 +1,74 @@
|
|||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<!-- Generator: Adobe Illustrator 16.0.2, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
|
||||
width="400px" height="264px" viewBox="0 0 400 264" enable-background="new 0 0 400 264" xml:space="preserve">
|
||||
<g>
|
||||
<defs>
|
||||
<path id="SVGID_1_" d="M100.567,45.571c0,0-21.796,17.396-27.741,17.894c-5.942,0.495-31.7,17.395-31.7,17.395l-16.84-0.498
|
||||
L5.461,58.991l25.761,193.104h298.922L120.59,41.819L100.567,45.571z"/>
|
||||
</defs>
|
||||
<clipPath id="SVGID_2_">
|
||||
<use xlink:href="#SVGID_1_" overflow="visible"/>
|
||||
</clipPath>
|
||||
<defs>
|
||||
<filter id="Adobe_OpacityMaskFilter" filterUnits="userSpaceOnUse" x="5.461" y="41.819" width="324.683" height="210.276">
|
||||
<feColorMatrix type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0"/>
|
||||
</filter>
|
||||
</defs>
|
||||
<mask maskUnits="userSpaceOnUse" x="5.461" y="41.819" width="324.683" height="210.276" id="SVGID_3_">
|
||||
<g filter="url(#Adobe_OpacityMaskFilter)">
|
||||
|
||||
<linearGradient id="SVGID_4_" gradientUnits="userSpaceOnUse" x1="-457.4434" y1="1540.4873" x2="-454.4615" y2="1540.4873" gradientTransform="matrix(0 -77.1078 -77.1078 0 118951.3984 -35029.4609)">
|
||||
<stop offset="0" style="stop-color:#000000"/>
|
||||
<stop offset="1" style="stop-color:#333333"/>
|
||||
</linearGradient>
|
||||
<rect x="5.461" y="41.819" clip-path="url(#SVGID_2_)" fill="url(#SVGID_4_)" width="324.683" height="210.276"/>
|
||||
</g>
|
||||
</mask>
|
||||
|
||||
<linearGradient id="SVGID_5_" gradientUnits="userSpaceOnUse" x1="-457.4434" y1="1540.4873" x2="-454.4615" y2="1540.4873" gradientTransform="matrix(0 -77.1078 -77.1078 0 118951.3984 -35029.4609)">
|
||||
<stop offset="0" style="stop-color:#FFFFFF"/>
|
||||
<stop offset="1" style="stop-color:#000000"/>
|
||||
</linearGradient>
|
||||
|
||||
<rect x="5.461" y="41.819" clip-path="url(#SVGID_2_)" mask="url(#SVGID_3_)" fill="url(#SVGID_5_)" width="324.683" height="210.276"/>
|
||||
</g>
|
||||
<g>
|
||||
<defs>
|
||||
<path id="SVGID_6_" d="M54.623,44.986h11.771V33.458H54.623V44.986z M11.295,44.982h11.771V33.453H11.295V44.982z M23.065,33.453
|
||||
h0.006H23.065z M25.74,44.982h11.771V33.453H25.74V44.982z M40.184,44.982h11.77V33.453h-11.77V44.982z M69.064,44.982h11.77
|
||||
V33.453h-11.77V44.982z M21.509,64.847c0-3.366,2.792-6.101,6.224-6.101c3.434,0,6.228,2.735,6.228,6.101
|
||||
c0,3.362-2.792,6.099-6.228,6.099C24.3,70.946,21.509,68.209,21.509,64.847 M94.925,26.723c-2.493,2.828-3.235,7.522-2.896,11.127
|
||||
c0.25,2.65,1.101,5.341,2.768,7.473c-1.264,0.746-2.708,1.333-3.986,1.762c-2.61,0.865-5.449,1.342-8.205,1.342H5.172
|
||||
l-0.167,1.756c-0.548,5.831,0.263,11.665,2.729,17l1.052,2.121l0.125,0.197c7.282,12.116,20.068,17.218,34.003,17.218
|
||||
c26.977,0,49.227-11.804,59.444-36.735c6.826,0.343,13.81-1.626,17.154-8.018l0.851-1.637l-1.619-0.912
|
||||
c-3.951-2.23-9.218-2.543-13.696-1.247v0.006c-0.553-4.674-3.683-8.766-7.405-11.698l-1.479-1.159L94.925,26.723z M25.74,30.197
|
||||
h11.771V18.669H25.74V30.197z M40.184,30.197h11.77V18.669h-11.77V30.197z M54.623,30.197h11.771V18.669H54.623V30.197z
|
||||
M54.623,15.411h11.771V3.881H54.623V15.411z"/>
|
||||
</defs>
|
||||
<clipPath id="SVGID_7_">
|
||||
<use xlink:href="#SVGID_6_" overflow="visible"/>
|
||||
</clipPath>
|
||||
<rect x="-10.07" y="-11.03" clip-path="url(#SVGID_7_)" fill="#FFFFFF" width="145.347" height="112.66"/>
|
||||
</g>
|
||||
<g enable-background="new ">
|
||||
<g>
|
||||
<defs>
|
||||
<rect id="SVGID_8_" x="162.742" y="22.292" width="225.146" height="54.035"/>
|
||||
</defs>
|
||||
<clipPath id="SVGID_9_">
|
||||
<use xlink:href="#SVGID_8_" overflow="visible"/>
|
||||
</clipPath>
|
||||
<g clip-path="url(#SVGID_9_)">
|
||||
<defs>
|
||||
<rect id="SVGID_10_" x="162.742" y="22.292" width="225.146" height="54.035"/>
|
||||
</defs>
|
||||
<clipPath id="SVGID_11_">
|
||||
<use xlink:href="#SVGID_10_" overflow="visible"/>
|
||||
</clipPath>
|
||||
<g transform="matrix(1 0 0 1 -1.423527e-005 -1.237250e-006)" clip-path="url(#SVGID_11_)">
|
||||
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<!-- Generator: Adobe Illustrator 16.0.2, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
|
||||
width="400px" height="264px" viewBox="0 0 400 264" enable-background="new 0 0 400 264" xml:space="preserve">
|
||||
<g>
|
||||
<defs>
|
||||
<path id="SVGID_1_" d="M100.567,45.571c0,0-21.796,17.396-27.741,17.894c-5.942,0.495-31.7,17.395-31.7,17.395l-16.84-0.498
|
||||
L5.461,58.991l25.761,193.104h298.922L120.59,41.819L100.567,45.571z"/>
|
||||
</defs>
|
||||
<clipPath id="SVGID_2_">
|
||||
<use xlink:href="#SVGID_1_" overflow="visible"/>
|
||||
</clipPath>
|
||||
<defs>
|
||||
<filter id="Adobe_OpacityMaskFilter" filterUnits="userSpaceOnUse" x="5.461" y="41.819" width="324.683" height="210.276">
|
||||
<feColorMatrix type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0"/>
|
||||
</filter>
|
||||
</defs>
|
||||
<mask maskUnits="userSpaceOnUse" x="5.461" y="41.819" width="324.683" height="210.276" id="SVGID_3_">
|
||||
<g filter="url(#Adobe_OpacityMaskFilter)">
|
||||
|
||||
<linearGradient id="SVGID_4_" gradientUnits="userSpaceOnUse" x1="-457.4434" y1="1540.4873" x2="-454.4615" y2="1540.4873" gradientTransform="matrix(0 -77.1078 -77.1078 0 118951.3984 -35029.4609)">
|
||||
<stop offset="0" style="stop-color:#000000"/>
|
||||
<stop offset="1" style="stop-color:#333333"/>
|
||||
</linearGradient>
|
||||
<rect x="5.461" y="41.819" clip-path="url(#SVGID_2_)" fill="url(#SVGID_4_)" width="324.683" height="210.276"/>
|
||||
</g>
|
||||
</mask>
|
||||
|
||||
<linearGradient id="SVGID_5_" gradientUnits="userSpaceOnUse" x1="-457.4434" y1="1540.4873" x2="-454.4615" y2="1540.4873" gradientTransform="matrix(0 -77.1078 -77.1078 0 118951.3984 -35029.4609)">
|
||||
<stop offset="0" style="stop-color:#FFFFFF"/>
|
||||
<stop offset="1" style="stop-color:#000000"/>
|
||||
</linearGradient>
|
||||
|
||||
<rect x="5.461" y="41.819" clip-path="url(#SVGID_2_)" mask="url(#SVGID_3_)" fill="url(#SVGID_5_)" width="324.683" height="210.276"/>
|
||||
</g>
|
||||
<g>
<defs>
<path id="SVGID_6_" d="M54.623,44.986h11.771V33.458H54.623V44.986z M11.295,44.982h11.771V33.453H11.295V44.982z M23.065,33.453
h0.006H23.065z M25.74,44.982h11.771V33.453H25.74V44.982z M40.184,44.982h11.77V33.453h-11.77V44.982z M69.064,44.982h11.77
V33.453h-11.77V44.982z M21.509,64.847c0-3.366,2.792-6.101,6.224-6.101c3.434,0,6.228,2.735,6.228,6.101
c0,3.362-2.792,6.099-6.228,6.099C24.3,70.946,21.509,68.209,21.509,64.847 M94.925,26.723c-2.493,2.828-3.235,7.522-2.896,11.127
c0.25,2.65,1.101,5.341,2.768,7.473c-1.264,0.746-2.708,1.333-3.986,1.762c-2.61,0.865-5.449,1.342-8.205,1.342H5.172
l-0.167,1.756c-0.548,5.831,0.263,11.665,2.729,17l1.052,2.121l0.125,0.197c7.282,12.116,20.068,17.218,34.003,17.218
c26.977,0,49.227-11.804,59.444-36.735c6.826,0.343,13.81-1.626,17.154-8.018l0.851-1.637l-1.619-0.912
c-3.951-2.23-9.218-2.543-13.696-1.247v0.006c-0.553-4.674-3.683-8.766-7.405-11.698l-1.479-1.159L94.925,26.723z M25.74,30.197
h11.771V18.669H25.74V30.197z M40.184,30.197h11.77V18.669h-11.77V30.197z M54.623,30.197h11.771V18.669H54.623V30.197z
M54.623,15.411h11.771V3.881H54.623V15.411z"/>
</defs>
<clipPath id="SVGID_7_">
<use xlink:href="#SVGID_6_" overflow="visible"/>
</clipPath>
<rect x="-10.07" y="-11.03" clip-path="url(#SVGID_7_)" fill="#FFFFFF" width="145.347" height="112.66"/>
</g>
|
||||
<g enable-background="new ">
<g>
<defs>
<rect id="SVGID_8_" x="162.742" y="22.292" width="225.146" height="54.035"/>
</defs>
<clipPath id="SVGID_9_">
<use xlink:href="#SVGID_8_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_9_)">
<defs>
<rect id="SVGID_10_" x="162.742" y="22.292" width="225.146" height="54.035"/>
</defs>
<clipPath id="SVGID_11_">
<use xlink:href="#SVGID_10_" overflow="visible"/>
</clipPath>
<g transform="matrix(1 0 0 1 -1.423527e-005 -1.237250e-006)" clip-path="url(#SVGID_11_)">
<image overflow="visible" width="75" height="18" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAAASCAYAAAATzyPVAAAACXBIWXMAAAOwAAADsAEnxA+tAAAA
GXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAABbFJREFUeNrEmHlsVUUUh7u3AoUS
aC2iRSrRaiyKa+ISoiwaNEBRULSKW4LExCUYCaRqXKJGElxQFg1C3Uk0+o+hFdOAawCjLUhrEbCF
@@ -96,34 +96,34 @@ SX3W7ih8necFGGOjU7/ZBt8Wj4tXULqB7Glp/Sbm2OFcSw3npf9t1EMZ6LSUg/DaJrLgcxy2FdfL
iWWeHW7DWFb7TYlyw05OLnUKyKBmYyX+/7HwnjJqn1CAnHnO12wwwydbyP9jbRHeaYd0Px5xi/id
ZyW8dxHvDVMHFrLuUDy3LsJ+OjmQMY4ud1De1FMwZ8S6kpPEa+JXPj0O8dsWPl+kRpGdyjWr5VSt
sPseT8mLIpeNQSrwlBZO2K71PV5RiWE/F19Z0PaurPqFYgd/DOb51r6UCn4n6x4U33CTRvrmWmip
Eh+IC+zG+XX9V4ABABi5llylUZDhAAAAAElFTkSuQmCC" transform="matrix(3.0019 0 0 3.0019 162.7422 22.292)">
</image>
</g>
</g>
|
||||
<g clip-path="url(#SVGID_9_)">
<defs>
<rect id="SVGID_12_" x="162.742" y="22.292" width="225.148" height="54.035"/>
</defs>
<clipPath id="SVGID_13_">
<use xlink:href="#SVGID_12_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_13_)" enable-background="new ">
<g>
<defs>
<rect id="SVGID_14_" x="162.742" y="22.292" width="225.146" height="54.035"/>
</defs>
<clipPath id="SVGID_15_">
<use xlink:href="#SVGID_14_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_15_)">
<defs>
<rect id="SVGID_16_" x="162.742" y="22.292" width="225.146" height="54.035"/>
</defs>
<clipPath id="SVGID_17_">
<use xlink:href="#SVGID_16_" overflow="visible"/>
</clipPath>
<g transform="matrix(1 0 0 1 -1.423527e-005 -1.237250e-006)" clip-path="url(#SVGID_17_)">
Eh+IC+zG+XX9V4ABABi5llylUZDhAAAAAElFTkSuQmCC" transform="matrix(3.0019 0 0 3.0019 162.7422 22.292)">
</image>
</g>
</g>
|
||||
<g clip-path="url(#SVGID_9_)">
<defs>
<rect id="SVGID_12_" x="162.742" y="22.292" width="225.148" height="54.035"/>
</defs>
<clipPath id="SVGID_13_">
<use xlink:href="#SVGID_12_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_13_)" enable-background="new ">
<g>
<defs>
<rect id="SVGID_14_" x="162.742" y="22.292" width="225.146" height="54.035"/>
</defs>
<clipPath id="SVGID_15_">
<use xlink:href="#SVGID_14_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_15_)">
<defs>
<rect id="SVGID_16_" x="162.742" y="22.292" width="225.146" height="54.035"/>
</defs>
<clipPath id="SVGID_17_">
<use xlink:href="#SVGID_16_" overflow="visible"/>
</clipPath>
<g transform="matrix(1 0 0 1 -1.423527e-005 -1.237250e-006)" clip-path="url(#SVGID_17_)">
<image overflow="visible" width="75" height="18" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAAASCAYAAAATzyPVAAAACXBIWXMAAAOwAAADsAEnxA+tAAAA
GXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAABbFJREFUeNrEmHlsVUUUh7u3AoUS
aC2iRSrRaiyKa+ISoiwaNEBRULSKW4LExCUYCaRqXKJGElxQFg1C3Uk0+o+hFdOAawCjLUhrEbCF
@@ -151,13 +151,13 @@ SX3W7ih8necFGGOjU7/ZBt8Wj4tXULqB7Glp/Sbm2OFcSw3npf9t1EMZ6LSUg/DaJrLgcxy2FdfL
iWWeHW7DWFb7TYlyw05OLnUKyKBmYyX+/7HwnjJqn1CAnHnO12wwwydbyP9jbRHeaYd0Px5xi/id
ZyW8dxHvDVMHFrLuUDy3LsJ+OjmQMY4ud1De1FMwZ8S6kpPEa+JXPj0O8dsWPl+kRpGdyjWr5VSt
sPseT8mLIpeNQSrwlBZO2K71PV5RiWE/F19Z0PaurPqFYgd/DOb51r6UCn4n6x4U33CTRvrmWmip
Eh+IC+zG+XX9V4ABABi5llylUZDhAAAAAElFTkSuQmCC" transform="matrix(3.0019 0 0 3.0019 162.7422 22.292)">
</image>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</svg>
|
||||
Eh+IC+zG+XX9V4ABABi5llylUZDhAAAAAElFTkSuQmCC" transform="matrix(3.0019 0 0 3.0019 162.7422 22.292)">
</image>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</svg>
Before: Size 9.4 KiB | After: Size 9.3 KiB
|
@@ -1,13 +1,13 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 15.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="84px" height="102px" viewBox="0 0 84 102" enable-background="new 0 0 84 102" xml:space="preserve">
<path fill="#2AB6E9" d="M30.966,1.776L2.484,30.217v69.56h80v-98H30.966z M30.484,11.446v18.33H11.952L30.484,11.446z
M75.484,92.776h-66v-56h28v-28h38V92.776z"/>
<rect x="19.484" y="61.775" fill="#2AB6E9" width="46" height="7"/>
<rect x="19.484" y="75.775" fill="#2AB6E9" width="46" height="7"/>
<rect x="47.484" y="29.775" fill="#2AB6E9" width="18" height="7"/>
<rect x="47.484" y="15.775" fill="#2AB6E9" width="18" height="7"/>
<rect x="19.484" y="47.775" fill="#2AB6E9" width="46" height="7"/>
</svg>
|
||||
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 15.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="84px" height="102px" viewBox="0 0 84 102" enable-background="new 0 0 84 102" xml:space="preserve">
<path fill="#2AB6E9" d="M30.966,1.776L2.484,30.217v69.56h80v-98H30.966z M30.484,11.446v18.33H11.952L30.484,11.446z
M75.484,92.776h-66v-56h28v-28h38V92.776z"/>
<rect x="19.484" y="61.775" fill="#2AB6E9" width="46" height="7"/>
<rect x="19.484" y="75.775" fill="#2AB6E9" width="46" height="7"/>
<rect x="47.484" y="29.775" fill="#2AB6E9" width="18" height="7"/>
<rect x="47.484" y="15.775" fill="#2AB6E9" width="18" height="7"/>
<rect x="19.484" y="47.775" fill="#2AB6E9" width="46" height="7"/>
</svg>
Before: Size 983 B | After: Size 970 B
|
@@ -1,16 +1,16 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 15.0.0, SVG Export Plug-In -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" [
<!ENTITY ns_flows "http://ns.adobe.com/Flows/1.0/">
]>
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:a="http://ns.adobe.com/AdobeSVGViewerExtensions/3.0/"
x="0px" y="0px" width="94px" height="84px" viewBox="-1.877 -1.327 94 84" enable-background="new -1.877 -1.327 94 84"
xml:space="preserve">
<defs>
</defs>
<path fill="#2AB6E9" d="M62.679,73.929H27.321c-1.775,0-3.214,1.439-3.214,3.214c0,1.775,1.439,3.216,3.214,3.216h35.357
c1.774,0,3.214-1.439,3.214-3.216C65.893,75.368,64.453,73.929,62.679,73.929"/>
<path fill="#2AB6E9" d="M83.975,0H6.027C2.698,0,0,2.699,0,6.026v55.447C0,64.802,2.698,67.5,6.027,67.5h77.948
c3.328,0,6.025-2.698,6.025-6.026V6.026C90,2.699,87.303,0,83.975,0 M83.571,54.643H6.428V6.429h77.143V54.643z"/>
</svg>
|
||||
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 15.0.0, SVG Export Plug-In -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" [
<!ENTITY ns_flows "http://ns.adobe.com/Flows/1.0/">
]>
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:a="http://ns.adobe.com/AdobeSVGViewerExtensions/3.0/"
x="0px" y="0px" width="94px" height="84px" viewBox="-1.877 -1.327 94 84" enable-background="new -1.877 -1.327 94 84"
xml:space="preserve">
<defs>
</defs>
<path fill="#2AB6E9" d="M62.679,73.929H27.321c-1.775,0-3.214,1.439-3.214,3.214c0,1.775,1.439,3.216,3.214,3.216h35.357
c1.774,0,3.214-1.439,3.214-3.216C65.893,75.368,64.453,73.929,62.679,73.929"/>
<path fill="#2AB6E9" d="M83.975,0H6.027C2.698,0,0,2.699,0,6.026v55.447C0,64.802,2.698,67.5,6.027,67.5h77.948
c3.328,0,6.025-2.698,6.025-6.026V6.026C90,2.699,87.303,0,83.975,0 M83.571,54.643H6.428V6.429h77.143V54.643z"/>
</svg>
Before: Size 1019 B | After: Size 1003 B