Merge pull request #11865 from gtardif/compose-cli-docs

Compose cli reference docs
This commit is contained in:
Sebastiaan van Stijn 2021-01-05 11:25:06 +01:00 committed by GitHub
commit e4ecf664d5
5 changed files with 58 additions and 23 deletions

View File

@ -20,6 +20,9 @@ ARG ENGINE_BRANCH="master"
# Distribution
ARG DISTRIBUTION_BRANCH="release/2.7"
# Compose CLI
ARG COMPOSE_CLI_BRANCH="main"
###
# Set up base stages for building and deploying
###
@ -35,6 +38,9 @@ ENV ENGINE_BRANCH=${ENGINE_BRANCH}
ARG DISTRIBUTION_BRANCH
ENV DISTRIBUTION_BRANCH=${DISTRIBUTION_BRANCH}
ARG COMPOSE_CLI_BRANCH
ENV COMPOSE_CLI_BRANCH=${COMPOSE_CLI_BRANCH}
# Fetch upstream resources (reference documentation)
# Only add the files that are needed to build these reference docs, so that these
# docs are only rebuilt if changes were made to ENGINE_BRANCH or DISTRIBUTION_BRANCH.
@ -45,6 +51,7 @@ WORKDIR /usr/src/app/md_source/
COPY ./_scripts/fetch-upstream-resources.sh ./_scripts/
ARG ENGINE_BRANCH
ARG DISTRIBUTION_BRANCH
ARG COMPOSE_CLI_BRANCH
RUN ./_scripts/fetch-upstream-resources.sh .
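For reference, the new `COMPOSE_CLI_BRANCH` argument can be overridden at build time like any other build arg. A minimal sketch (the `docs:local` tag and repository-root build context are illustrative assumptions, not part of this change):

```console
# Build the docs image, pinning the compose-cli docs to a specific branch
$ docker build --build-arg COMPOSE_CLI_BRANCH=main --tag docs:local .
```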

View File

@ -81,10 +81,20 @@ guides:
title: Configure GitHub Actions
- sectiontitle: Deploy your app to the cloud
section:
- path: /cloud/aci-integration/
title: Docker and ACI
- path: /cloud/aci-container-features/
title: ACI container features
- path: /cloud/aci-compose-features/
title: ACI Compose features
- path: /cloud/ecs-integration/
title: Docker and ECS
- path: /cloud/ecs-architecture/
title: Docker ECS integration architecture
- path: /cloud/ecs-compose-features/
title: ECS Compose features
- path: /cloud/ecs-compose-examples/
title: ECS Compose examples
- sectiontitle: Run your app in production
section:
- sectiontitle: Orchestration

View File

@ -6,6 +6,7 @@
# which are usually set by the Dockerfile.
: "${ENGINE_BRANCH?No release branch set for docker/docker and docker/cli}"
: "${DISTRIBUTION_BRANCH?No release branch set for docker/distribution}"
: "${COMPOSE_CLI_BRANCH?No release branch set for docker/compose-cli}"
# Translate branches for use by svn
engine_svn_branch="branches/${ENGINE_BRANCH}"
@ -16,10 +17,15 @@ distribution_svn_branch="branches/${DISTRIBUTION_BRANCH}"
if [ "${distribution_svn_branch}" = "branches/master" ]; then
distribution_svn_branch=trunk
fi
compose_cli_svn_branch="branches/${COMPOSE_CLI_BRANCH}"
if [ "${compose_cli_svn_branch}" = "branches/main" ]; then
compose_cli_svn_branch=trunk
fi
# Directories to get via SVN. We use this because you can't use git to clone just a portion of a repository
svn co "https://github.com/docker/cli/${engine_svn_branch}/docs/extend" ./engine/extend || (echo "Failed engine/extend download" && exit 1)
svn co "https://github.com/docker/docker/${engine_svn_branch}/docs/api" ./engine/api || (echo "Failed engine/api download" && exit 1)
svn co "https://github.com/docker/compose-cli/${compose_cli_svn_branch}/docs" ./cloud || (echo "Failed compose-cli/docs download" && exit 1)
svn co "https://github.com/docker/distribution/${distribution_svn_branch}/docs/spec" ./registry/spec || (echo "Failed registry/spec download" && exit 1)
svn co "https://github.com/mirantis/compliance/trunk/docs/compliance" ./compliance || (echo "Failed docker/compliance download" && exit 1)
@ -37,3 +43,6 @@ wget --quiet --directory-prefix=./registry/ "https://raw.git
# Remove things we don't want in the build
rm -f ./engine/extend/cli_plugins.md # the cli plugins api is not a stable API, and not included in the TOC for that reason.
rm -f ./registry/spec/api.md.tmpl
rm -f ./cloud/README.md # readme to make things nice in the compose-cli repo, but meaningless here
rm -f ./cloud/architecure.md # Compose-CLI architecture, unrelated to cloud integration
rm -rf ./cloud/images
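Taken together, with the default `COMPOSE_CLI_BRANCH=main` the branch translation above maps to `trunk`, so the new checkout resolves to roughly the following (shown here only to illustrate the branch-to-svn-path mapping):

```console
# Hypothetical resolved form of the new svn checkout (main -> trunk)
$ svn co "https://github.com/docker/compose-cli/trunk/docs" ./cloud
```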

View File

@ -2,6 +2,8 @@
title: Deploying Docker containers on Azure
description: Deploying Docker containers on Azure
keywords: Docker, Azure, Integration, ACI, context, Compose, cli, deploy, containers, cloud
redirect_from:
- /engine/context/aci-integration/
toc_min: 1
toc_max: 2
---
@ -16,6 +18,8 @@ In addition, the integration between Docker and Microsoft developer technologies
- Set up an ACI context in one Docker command allowing you to switch from a local context to a cloud context and run applications quickly and easily
- Simplify single container and multi-container application development using the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service
Also see the [full list of container features supported by ACI](aci-container-features.md) and [full list of compose features supported by ACI](aci-compose-features.md).
## Prerequisites
To deploy Docker containers on Azure, you must meet the following requirements:
@ -34,6 +38,7 @@ To deploy Docker containers on Azure, you must meet the following requirements:
Docker not only runs containers locally, but also enables developers to seamlessly deploy Docker containers on ACI using `docker run` or deploy multi-container applications defined in a Compose file using the `docker compose up` command.
The following sections contain instructions on how to deploy your Docker containers on ACI.
Also see the [full list of container features supported by ACI](aci-container-features.md).
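As a very small sketch of the single-container flow (the ACI context `myacicontext` is created later on this page; the `nginx` image and port mapping are illustrative assumptions):

```console
# Run a single container in ACI using an ACI context
$ docker --context myacicontext run -p 80:80 nginx
```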
### Log into Azure
@ -64,7 +69,7 @@ you have several ones available in Azure.
### Create an ACI context
After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI.
Creating an ACI context requires an Azure subscription, a [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal), and a region.
For example, let us create a new context called `myacicontext`:
```console
@ -136,6 +141,8 @@ You can also deploy and manage multi-container applications defined in Compose f
All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file.
Name resolution between containers is achieved by writing service names in the `/etc/hosts` file that is shared automatically by all containers in the container group.
Also see the [full list of compose features supported by ACI](aci-compose-features.md).
1. Ensure you are using your ACI context. You can do this either by specifying the `--context myacicontext` flag or by setting the default context using the command `docker context use myacicontext`.
2. Run `docker compose up` and `docker compose down` to start and then stop a full Compose application.
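Putting those two steps together, a minimal sketch of the Compose workflow on ACI (assumes a `docker-compose.yml` in the current directory; the context name matches the example above):

```console
$ docker context use myacicontext
$ docker compose up
$ docker compose down
```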

View File

@ -2,6 +2,8 @@
title: Deploying Docker containers on ECS
description: Deploying Docker containers on ECS
keywords: Docker, AWS, ECS, Integration, context, Compose, cli, deploy, containers, cloud
redirect_from:
- /engine/context/ecs-integration/
toc_min: 1
toc_max: 2
---
@ -15,6 +17,8 @@ The integration between Docker and Amazon ECS allows developers to use the Docke
* Set up an AWS context in one Docker command, allowing you to switch from a local context to a cloud context and run applications quickly and easily
* Simplify multi-container application development on Amazon ECS using Compose files
Also see the [ECS integration architecture](ecs-architecture.md), [full list of compose features](ecs-compose-features.md) and [Compose examples for ECS integration](ecs-compose-examples.md).
## Prerequisites
To deploy Docker containers on ECS, you must meet the following requirements:
@ -37,11 +41,9 @@ contain instructions on how to deploy your Compose application on Amazon ECS.
### Requirements
AWS uses a fine-grained permission model, with a specific role for each resource type and operation.
To ensure that Docker ECS integration is allowed to manage resources for your Compose application, you have to ensure your AWS credentials [grant access to the following AWS IAM permissions](https://aws.amazon.com/iam/features/manage-permissions/):
* cloudformation:*
* ecs:ListAccountSettings
@ -69,7 +71,7 @@ have to ensure your AWS credentials grant access to following AWS IAM permission
* route53:GetHostedZone
* route53:ListHostedZonesByName
GPU support, which relies on EC2 instances to run containers with attached GPU devices,
requires a few additional permissions:
* ec2:DescribeVpcs
@ -79,14 +81,13 @@ require a few additional permissions:
* iam:RemoveRoleFromInstanceProfile
* iam:DeleteInstanceProfile
### Create AWS context
Run the `docker context create ecs myecscontext` command to create an Amazon ECS Docker
context named `myecscontext`. If you have already installed and configured the AWS CLI,
the setup command lets you select an existing AWS profile to connect to Amazon.
Otherwise, you can create a new profile by passing an
[AWS access key ID and a secret access key](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys){: target="_blank" rel="noopener" class="_"}.
Finally, you can configure your ECS context to retrieve AWS credentials from `AWS_*` environment variables, which is a common way to integrate with
third-party tools and single sign-on providers.
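A minimal sketch of the environment-variable flow described above (credential values are placeholders, and the interactive prompts of `docker context create ecs` may ask you to choose how credentials are resolved):

```console
# Export standard AWS credential variables, then create the ECS context
$ export AWS_ACCESS_KEY_ID="AKIAxxxxxxxxxxxxxxxx"
$ export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxx"
$ docker context create ecs myecscontext
```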
@ -101,7 +102,7 @@ After you have created an AWS context, you can list your Docker contexts by runn
```console
NAME                TYPE    DESCRIPTION                                 DOCKER ENDPOINT                KUBERNETES ENDPOINT    ORCHESTRATOR
myecscontext        ecs     credentials read from environment
default *           moby    Current DOCKER_HOST based configuration     unix:///var/run/docker.sock                           swarm
```
@ -124,8 +125,8 @@ stop a full Compose application.
You can also specify a name for the Compose application using the `--project-name` flag during deployment. If no name is specified, a name will be derived from the working directory.
Docker ECS integration converts the Compose application model into a set of AWS resources, described as a [CloudFormation](https://aws.amazon.com/cloudformation/){: target="_blank" rel="noopener" class="_"} template. The actual mapping is described in [technical documentation](https://github.com/docker/compose-cli/blob/main/docs/ecs-architecture.md){: target="_blank" rel="noopener" class="_"}.
You can review the generated template using the `docker compose convert` command, and follow CloudFormation applying this model within the
[AWS web console](https://console.aws.amazon.com/cloudformation/home){: target="_blank" rel="noopener" class="_"} when you run `docker compose up`, in addition to CloudFormation events being displayed
in your terminal.
- You can view services created for the Compose application on Amazon ECS and
@ -134,6 +135,7 @@ their state using the `docker compose ps` command.
- You can view logs from containers that are part of the Compose application
using the `docker compose logs` command.
Also see the [full list of compose features](ecs-compose-features.md).
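Putting those commands together, a minimal sketch of a deploy-and-inspect cycle against an ECS context (the context and project names are illustrative):

```console
$ docker context use myecscontext
$ docker compose convert                    # review the generated CloudFormation template
$ docker compose --project-name myapp up
$ docker compose --project-name myapp ps
$ docker compose --project-name myapp logs
$ docker compose --project-name myapp down
```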
## Rolling update
@ -426,11 +428,11 @@ services:
If your AWS account does not have [permissions](https://github.com/docker/ecs-plugin/blob/master/docs/requirements.md#permissions){: target="_blank" rel="noopener" class="_"} to create such resources, or if you want to manage these yourself, you can use the following custom Compose extensions:
- Use `x-aws-cluster` as a top-level element in your Compose file to set the ID
of an ECS cluster when deploying a Compose application. Otherwise, a
cluster will be created for the Compose project.
- Use `x-aws-vpc` as a top-level element in your Compose file to set the ARN
of a VPC when deploying a Compose application.
- Use `x-aws-loadbalancer` as a top-level element in your Compose file to set
@ -442,14 +444,14 @@ use an existing domain name for your application:
1. Use the AWS web console or CLI to get your VPC and subnet IDs. You can retrieve the default VPC ID and attached subnets using these AWS CLI commands:
```console
$ aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId'
"vpc-123456"
$ aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-123456 --query 'Subnets[*].SubnetId'
[
"subnet-1234abcd",
"subnet-6789ef00",
"subnet-1234abcd",
"subnet-6789ef00",
]
```
1. Use the AWS CLI to create your load balancer. The AWS Web Console can also be used but will require adding at least one listener, which we don't need here.
@ -460,11 +462,11 @@ $ aws elbv2 create-load-balancer --name myloadbalancer --type application --subn
{
"LoadBalancers": [
{
"IpAddressType": "ipv4",
"VpcId": "vpc-123456",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456",
"DNSName": "myloadbalancer-123456.us-east-1.elb.amazonaws.com",
...
"IpAddressType": "ipv4",
"VpcId": "vpc-123456",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456",
"DNSName": "myloadbalancer-123456.us-east-1.elb.amazonaws.com",
...
```
1. To assign your application an existing domain name, you can configure your DNS with a
CNAME entry pointing to the load balancer's `DNSName`, as reported when you created the load balancer (one way to do this with Route 53 is sketched below).
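For example, if your zone is hosted in Route 53, a hedged sketch of creating such a CNAME with the AWS CLI (the hosted zone ID, record name, and TTL are placeholder assumptions; any DNS provider works equally well):

```console
$ aws route53 change-resource-record-sets \
    --hosted-zone-id Z123EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "myloadbalancer-123456.us-east-1.elb.amazonaws.com"}]
        }
      }]
    }'
```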