mirror of https://github.com/docker/docs.git
Update aws.md (#2593)
This commit is contained in: parent ccfb68a8fa, commit a40e346431
@@ -21,14 +21,14 @@ workers, and set the desired capacity that was selected in the CloudFormation

setup form. The managers will start up first and create a Swarm manager quorum
using Raft. The workers will then start up and join the swarm one by one, until
all of the workers are up and running. At this point you will have a number of
managers and workers in your swarm that are ready to handle your application
deployments. It then bootstraps UCP controllers on manager nodes and UCP agents
on worker nodes. Next, it installs DTR on the manager nodes and configures it
to use an S3 bucket as an image storage backend. Three ELBs, one for UCP, one
for DTR, and a third for your applications, are launched and automatically
configured to provide resilient load balancing across multiple AZs.

The application ELB gets automatically updated when services are launched or
removed, while the UCP and DTR ELBs are configured for HTTPS only.

Both manager and worker nodes are part of separate Auto Scaling groups to allow you to
scale your cluster when needed. If you increase the number of instances running
|
@@ -37,7 +37,7 @@ in your worker Auto Scaling group (via the AWS console, or updating the

automatically join the swarm. This architecture ensures that both manager
and worker nodes are spread across multiple AZs for resiliency and
high availability. The template is adjustable and upgradeable, meaning you can
adjust your configuration (e.g., instance types or Docker engine version).
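
To confirm that all nodes have joined, you can SSH into a manager and list the
swarm members. A minimal sketch, assuming the default `docker` SSH user that
Docker for AWS sets up and a placeholder manager address:

```bash
# SSH to any manager node; the key is the one selected in the CloudFormation
# setup form, and <manager-public-ip> comes from the EC2 console.
ssh -i my-key.pem docker@<manager-public-ip>

# On the manager: list every node in the swarm and check that the STATUS
# column reads "Ready" and managers show a MANAGER STATUS of Leader/Reachable.
docker node ls
```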

## Prerequisites
|
|
@@ -52,9 +52,9 @@ in your worker Auto Scaling group (via the AWS console, or updating the

- S3 Bucket

- SSH key in AWS in the region where you want to deploy (required to access the completed Docker install)
- AWS account that supports EC2-VPC

For more information about adding an SSH key pair to your account, please refer to the [Amazon EC2 Key Pairs docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
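
If you don't yet have a key pair in the target region, one way to create one is
with the AWS CLI. A sketch, where `my-key` and the region are placeholders:

```bash
# Create a key pair in the deployment region and save the private key locally.
aws ec2 create-key-pair \
    --region us-west-2 \
    --key-name my-key \
    --query 'KeyMaterial' \
    --output text > my-key.pem

# ssh refuses keys that are world-readable, so lock the file down.
chmod 400 my-key.pem
```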
|
|
## CloudFormation Parameters

@@ -95,6 +95,7 @@ Enable if you want Docker for AWS to automatically cleanup unused space on your

When enabled, `docker system prune` will run staggered every day, starting at 1:42AM UTC on both workers and managers. The prune times are staggered slightly so that not all nodes will be pruned at the same time. This limits resource spikes on the swarm.

Pruning removes the following:

- All stopped containers
- All volumes not used by at least one container
- All dangling images
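
Conceptually, the scheduled cleanup behaves like a per-node cron entry such as
the one below. This is only an illustrative sketch, not the template's actual
implementation; the file path and the fixed minute stand in for the real
per-node stagger:

```bash
# /etc/cron.d/docker-prune (hypothetical file) -- prune unused Docker data
# non-interactively once a day; each node would use a slightly different
# minute so the whole swarm is never pruned at once.
42 1 * * * root docker system prune --force
```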
|
@@ -156,9 +157,9 @@ landing pages:

{: .with-border}

> **Note**: During the installation process, a self-signed certificate is generated
for both UCP and DTR. You can replace these certificates with your own
CA-signed certificates after the installation is complete. When you access UCP
and DTR URLs for the first time, you need to proceed insecurely (multiple times)
by accepting the provided certificate in the browser.
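
If you'd rather inspect the self-signed certificate before accepting it in the
browser, one option is `openssl` (a sketch; the ELB hostname is a placeholder
for the value from your stack's **Output** tab):

```bash
# Fetch the certificate presented by the UCP ELB and print its subject,
# issuer, and validity window.
openssl s_client -connect ucp-elb-123456789.us-west-2.elb.amazonaws.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```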
|
@@ -170,10 +171,10 @@ names. To do that, please follow the instructions below:

1. Create an A or CNAME DNS record for UCP and DTR pointing to the UCP and DTR
   ELB DNS/IP. You can find the ELB names in the **Output** tab (see the CLI
   sketch after these steps for one way to create the record).
2. Log in to DTR using the DTR ELB URL and go to the **Settings** page.
3. Update the **Domain** section with your DNS names and their respective
   certificates. Make sure you click **Save** at the end.
4. Log in to UCP using the UCP ELB URL and go to the **Admin Settings** tab.
5. Under **Cluster Configuration**, update **EXTERNAL SERVICE LOAD BALANCER**
   with your custom UCP DNS name. Then click **Update Settings**.
6. Under the **Certificates** section, upload or paste your own certificates for
|
@@ -204,7 +205,7 @@ Once you download the bundle and load it, run the following command:

Once you run this Docker container, you'll be requested to choose a replica
to reconfigure. Press **Enter** to proceed with the chosen one.

8. Now you may access UCP and DTR with your own custom DNS names.
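
For step 1 above, if your DNS zone is hosted in Route 53, the CNAME record can
be created from the CLI as follows. A sketch only: the hosted zone ID, domain,
and ELB hostname are placeholders for your own values; repeat with the UCP
name and ELB for the UCP record.

```bash
# UPSERT a CNAME that points dtr.example.com at the DTR ELB.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1ABCDEFGHIJKL \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "dtr.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "dtr-elb-123456789.us-west-2.elb.amazonaws.com"}]
        }
      }]
    }'
```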
|
|
## Deploy and Access Your Applications on Docker Enterprise Edition

|
@@ -305,7 +306,7 @@ Changing manager count live is **_not_** currently supported.

### AWS Console

Log in to the AWS console, and go to the EC2 dashboard. On the lower left-hand
side select the "Auto Scaling Groups" link.
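
As an aside, the same scaling change can be made without the console via the
AWS CLI (a sketch; the group name is a placeholder for the name you find in
the next step):

```bash
# Raise the worker Auto Scaling group to 10 instances; the new workers will
# launch and join the swarm automatically.
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name Docker-NodeAsg-ABC123 \
    --desired-capacity 10
```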
|
|
Look for the Auto Scaling group with the name that looks like
|
@@ -328,7 +329,7 @@ pool until it reaches the new size.

### CloudFormation Update

Go to the CloudFormation management page, and click the checkbox next to the
stack you want to update. Then click on the action button at the top, and
select "Update Stack".
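
The same update can also be driven from the AWS CLI. A sketch under the
assumption that the worker-count parameter is named `ClusterSize` (check your
stack's parameter list for the actual key):

```bash
# Update only the worker count, reusing the existing template; Docker for AWS
# stacks create IAM resources, hence the capability flag. Other parameters may
# need ParameterKey=...,UsePreviousValue=true entries to keep current values.
aws cloudformation update-stack \
    --stack-name docker-for-aws \
    --use-previous-template \
    --parameters ParameterKey=ClusterSize,ParameterValue=10 \
    --capabilities CAPABILITY_IAM
```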
|
|
{: .with-border}
|
|