Docker for AWS, uninstall, faq's, pix, copyedit (#4404)

* added faq and topic on uninstall Docker for AWS

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* added screen snaps, corrected links

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* formatting updates

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* copyedits per review

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>
Victoria Bialas 2017-08-28 17:39:57 -07:00 committed by GitHub
parent 657cd941f5
commit 99f0e0e245
17 changed files with 259 additions and 87 deletions


@ -83,12 +83,15 @@ $ ssh docker@ip-172-31-31-40.us-east-2.compute.internal
#### Use SSH agent forwarding
SSH agent forwarding allows you to forward along your SSH keys when connecting from one node to another. This eliminates the need to install your private key on all nodes you might want to connect from.
You can use this feature to SSH into worker nodes from a manager node without
installing keys directly on the manager.
If you haven't added your SSH key to the `ssh-agent`, you need to do this first.
To see the keys in the agent already, run:
@ -102,13 +105,16 @@ If you don't see your key, add it like this.
$ ssh-add ~/.ssh/your_key
```
On macOS, the `ssh-agent` forgets this key once it is restarted, but you can import your SSH key into your Keychain so that it survives restarts:
```bash
$ ssh-add -K ~/.ssh/your_key
```
You can then enable SSH agent forwarding per session using the `-A` flag for the `ssh` command.
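For example (the addresses below are placeholders), you can forward your agent to a manager and then hop to a worker without copying any keys:

```bash
# Connect to a manager with agent forwarding enabled, then SSH on to a worker
# using the forwarded key. Both addresses are placeholders.
$ ssh -A docker@<manager-public-ip>
docker@manager:~$ ssh docker@<worker-private-ip>
```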
Connect to the Manager.
@ -139,11 +145,13 @@ You can now start creating containers and services.
$ docker run hello-world
You can run websites too. Ports exposed with `-p` are automatically exposed
through the platform load balancer:
$ docker service create --name nginx -p 80:80 nginx
Once up, find the `DefaultDNSTarget` output in either the AWS or Azure portals
to access the site.
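If you prefer the command line, a hedged sketch of retrieving the same output with the AWS CLI (the stack name is an example):

```bash
# List the stack outputs, including DefaultDNSTarget.
$ aws cloudformation describe-stacks \
    --stack-name docker-swarm \
    --query "Stacks[0].Outputs"
```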
### Execute docker commands in all swarm nodes
@ -153,23 +161,39 @@ Usage : `swarm-exec {Docker command}`
The following installs a test plugin on all the nodes in the cluster:
Example: `swarm-exec docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin`
This tool internally makes use of a Docker global-mode service that runs a task on each of the nodes in the cluster. This task in turn executes your Docker command. The global-mode service also guarantees that when a new node is added to the cluster, or during upgrades, a new task is executed on that node, so the Docker command is automatically run there as well.
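As a rough illustration of that mechanism (this is not the `swarm-exec` implementation, and the service name is made up), a global-mode service places exactly one task on every node, so each node runs the command once:

```bash
# Illustrative only: a global service schedules one task per node.
# --restart-condition none keeps the task from re-running after it exits.
$ docker service create \
    --mode global \
    --restart-condition none \
    --name run-everywhere-example \
    alpine uname -a
```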
### Distributed Application Bundles
To deploy complex multi-container apps, you can use [distributed application bundles](/compose/bundles.md). You can either run `docker deploy` to deploy a bundle on your machine over an SSH tunnel, or copy the bundle (for example using `scp`) to a manager node, SSH into the manager and then run `docker deploy` (if you have multiple managers, you have to ensure that your session is on one that has the bundle file).
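A minimal sketch of the copy-and-deploy approach (file and host names are examples; bundle support was experimental at the time of writing):

```bash
# Generate a bundle from a Compose file, copy it to a manager, and deploy it.
$ docker-compose bundle                    # writes <project>.dab, e.g. myapp.dab
$ scp myapp.dab docker@<manager-ip>:~/
$ ssh docker@<manager-ip> docker deploy myapp
```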
A good sample app to test application bundles is the [Docker voting
app](https://github.com/docker/example-voting-app).
By default, apps deployed with bundles do not have ports publicly exposed.
Update port mappings for services, and Docker will automatically wire up the
underlying platform load balancers:
docker service update --publish-add 80:80 <example-service>
### Images in private repos
To create swarm services using images in private repos, first make sure you're
authenticated and have access to the private repo, then create the service with
the `--with-registry-auth` flag (the example below assumes you're using Docker
Hub):
docker login
...


@ -9,19 +9,20 @@ toc_max: 2
Two different download channels are available for Docker for AWS:
* The **stable channel** provides a general availability release-ready deployment for a fully baked and tested, more reliable cluster. The stable version of Docker for AWS comes with the latest released version of Docker Engine. The release schedule is synced with Docker Engine releases and hotfixes. On the stable channel, you can select whether to send usage statistics and other data.
* The **edge channel** provides a deployment with new features we are working on, but is not necessarily fully tested. It comes with the experimental version of Docker Engine. Bugs, crashes, and issues are more likely to occur with the edge cluster, but you get a chance to preview new functionality, experiment, and provide feedback as the deployment evolves. Releases are typically more frequent than for stable, often one or more per month. Usage statistics and crash reports are sent by default. You do not have the option to disable this on the edge channel.
## Can I use my own AMI?
@ -29,19 +30,37 @@ No, at this time we only support the default Docker for AWS AMI.
## How can I use Docker for AWS with an AWS account in an EC2-Classic region?
If you have an AWS account that was created before **December 4th, 2013**, you have what is known as an **EC2-Classic** account on regions where you have previously deployed resources. **EC2-Classic** accounts don't have default VPCs or the associated subnets, etc. This causes a problem when using our CloudFormation template, because we use the [Fn::GetAZs](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getavailabilityzones.html) function they provide to determine which availability zones you have access to. When used in a region where you have **EC2-Classic**, this function returns all availability zones for a region, even ones you don't have access to. When you have an **EC2-VPC** account, it returns only the availability zones you have access to.
This will cause an error like the following:
> "Value (us-east-1a) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-east-1d, us-east-1c, us-east-1b, us-east-1e."
> "Value (us-east-1a) for parameter availabilityZone is invalid.
Subnets can currently only be created in the following availability
zones: us-east-1d, us-east-1c, us-east-1b, us-east-1e."
This happens if you have an **EC2-Classic** account and don't have access to the `a` and `b` availability zones for that region.
There isn't anything we can do right now to fix this issue. We have contacted Amazon, and we hope they will provide a way to determine whether an account is **EC2-Classic** or **EC2-VPC**, so we can act accordingly.
### How to tell if you are in the EC2-Classic region

[This AWS documentation page](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html) describes how to tell whether you have EC2-Classic, EC2-VPC, or both.
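You can also check from the AWS CLI; for example:

```bash
# "EC2" in the output means the region supports EC2-Classic for your account;
# "VPC" means EC2-VPC. Accounts may report one or both values.
$ aws ec2 describe-account-attributes --attribute-names supported-platforms
```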
### Possible fixes to the EC2-Classic region issue

There are a few workarounds you can try to get Docker for AWS up and running.
@ -150,3 +169,10 @@ $ sudo ping 10.0.0.4
```
> **Note**: Access to Docker for AWS and Azure happens through a shell container that itself runs on Docker.
## How do I uninstall Docker for AWS?
You can remove the Docker for AWS setup and stacks through the [AWS
Console](https://console.aws.amazon.com/console/home){: target="_blank"
class="_"} on the CloudFormation page. See [Uninstalling or removing a
stack](/docker-for-aws/index.md#uninstalling-or-removing-a-stack).
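If you prefer the CLI, a hedged equivalent (the stack name is an example):

```bash
# Deletes the CloudFormation stack and the resources it created.
$ aws cloudformation delete-stack --stack-name docker-swarm
```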


@ -11,9 +11,14 @@ redirect_from:
{% include d4a_buttons.md %}
## Docker Enterprise Edition (EE) for AWS
This deployment is fully baked and tested, and comes with the latest version of Docker Enterprise Edition.

This release is maintained and receives **security and critical bug fixes for one year**.

[Deploy Docker Enterprise Edition (EE) for AWS](https://store.docker.com/editions/enterprise/docker-ee-aws?tab=description){: target="_blank" class="button outline-btn blank_"}
## Docker Community Edition (CE) for AWS
@ -49,7 +54,10 @@ using CloudFormation. For more about stable and edge channels, see the
#### Test Channel
This is the test channel. It is used for testing release candidates for upcoming releases. Use this channel if you want to test out the new releases before they are available.
{{aws_blue_test}}
### Deployment options
@ -90,38 +98,61 @@ If you need to install Docker for AWS with an existing VPC, you need to do a few
- SSH key in AWS in the region where you want to deploy (required to access the completed Docker install)
- AWS account that supports EC2-VPC (see the [FAQ for details about EC2-Classic](faqs.md))
For more information about adding an SSH key pair to your account, please refer
to the [Amazon EC2 Key Pairs
docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
Note that the China and US Gov Cloud AWS partitions are not currently supported.
### Configuration
Docker for AWS is installed with a CloudFormation template that configures Docker in swarm mode, running on instances backed by custom AMIs. There are two ways you can deploy Docker for AWS: you can use the AWS Management Console (browser-based), or use the AWS CLI. Both have the following configuration options.
#### Configuration options
##### KeyName
Pick the SSH key that will be used when you SSH into the manager nodes.
##### InstanceType
The EC2 instance type for your worker nodes.
##### ManagerInstanceType
The EC2 instance type for your manager nodes. The larger your swarm, the larger
the instance size you should use.
##### ClusterSize
The number of workers you want in your swarm (0-1000).
##### ManagerSize
The number of managers in your swarm. On Docker CE, you can select either 1, 3, or 5 managers. We only recommend 1 manager for testing and dev setups. There are no failover guarantees with 1 manager: if the single manager fails, the swarm goes down as well. Additionally, upgrading single-manager swarms is not currently guaranteed to succeed.

On Docker EE, you can choose to run with 3 or 5 managers.

We recommend at least 3 managers, and if you have a lot of workers, you should use 5 managers.
##### EnableSystemPrune
Enable if you want Docker for AWS to automatically clean up unused space on your swarm nodes.
When enabled, `docker system prune` runs staggered every day, starting at 1:42AM UTC on both workers and managers. The prune times are staggered slightly so that not all nodes are pruned at the same time. This limits resource spikes on the swarm.
Pruning removes the following:
- All stopped containers
@ -130,29 +161,40 @@ Pruning removes the following:
- All unused networks
##### EnableCloudWatchLogs
Enable if you want Docker to send your container logs to CloudWatch ("yes", "no"). Defaults to "yes".
##### WorkerDiskSize
Size of Workers' ephemeral storage volume in GiB (20 - 1024).
##### WorkerDiskType
Worker ephemeral storage volume type ("standard", "gp2").
##### ManagerDiskSize
Size of Manager's ephemeral storage volume in GiB (20 - 1024).
##### ManagerDiskType
Manager ephemeral storage volume type ("standard", "gp2").
#### Installing with the AWS Management Console
The simplest way to use the template is through one of the [Quickstart](#docker-community-edition-ce-for-aws) options in the CloudFormation section of the AWS Management Console.
![create a stack](img/aws-select-template.png)
#### Installing with the CLI
You can also invoke the Docker for AWS CloudFormation template from the AWS CLI:
Here is an example of how to use the CLI. Make sure you populate all of the
parameters and their values from the above list:
```bash
$ aws cloudformation create-stack --stack-name teststack --template-url <templateurl> --parameters ParameterKey=<keyname>,ParameterValue=<keyvalue> ParameterKey=InstanceType,ParameterValue=t2.micro ParameterKey=ManagerInstanceType,ParameterValue=t2.micro ParameterKey=ClusterSize,ParameterValue=1 .... --capabilities CAPABILITY_IAM
@ -162,23 +204,51 @@ To fully automate installs, you can use the [AWS Cloudformation API](http://docs
### How it works
Docker for AWS starts with a CloudFormation template that creates everything you need from scratch. There are only a few prerequisites, listed above.
The CloudFormation template first creates a new VPC along with subnets and security groups. After the networking set-up completes, two Auto Scaling groups are created, one for the managers and one for the workers, and the configured capacity setting is applied. Managers start first and create a quorum using Raft, then the workers start and join the swarm one at a time. At this point, the swarm is composed of X managers and Y workers, and you can deploy your applications. See the [deployment](deploy.md) docs for your next steps.
> To [log into your nodes using SSH](/docker-for-aws/deploy.md#connecting-via-ssh),
> use the `docker` user rather than `root` or `ec2-user`.
If you increase the number of instances running in your worker Auto Scaling group (via the AWS console, or by updating the CloudFormation configuration), the new nodes that start up automatically join the swarm.
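To confirm that the new instances have joined, you can list the nodes from a manager (the address is a placeholder):

```bash
# From a manager node, list swarm members and their status.
$ ssh docker@<manager-public-ip> docker node ls
```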
Elastic Load Balancers (ELBs) are set up to help with routing traffic to your
swarm.
### Logging
Docker for AWS automatically configures logging to CloudWatch for containers you run on Docker for AWS. A log group is created for each Docker for AWS install, and a log stream for each container.
The `docker logs` and `docker service logs` commands are not supported on Docker for AWS when using CloudWatch for logs. Instead, check container logs in CloudWatch.
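A hedged sketch of reading those logs with the AWS CLI (group and stream names are placeholders):

```bash
# Find the log group created for your install, then read a container's stream.
$ aws logs describe-log-groups
$ aws logs get-log-events \
    --log-group-name <your-docker-for-aws-log-group> \
    --log-stream-name <container-log-stream>
```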
### System containers
Each node has a few system containers running on it to help run your swarm cluster. In order for everything to run smoothly, please keep those containers running and don't make any changes. If you make any changes, Docker for AWS does not work correctly.
### Uninstalling or removing a stack
To uninstall Docker for AWS, log on to the [AWS
Console](https://aws.amazon.com/){: target="_blank" class="_"}, navigate to
**Management Tools -> CloudFormation -> Actions -> Delete Stack**, and select
the Docker stack you want to remove.
![uninstall](img/aws-delete-stack.png)


@ -14,17 +14,23 @@ When you create a service, any ports that are exposed with `-p` are automaticall
$ docker service create --name nginx -p 80:80 nginx
```
This opens up port 80 on the Elastic Load Balancer (ELB) and directs any traffic on that port to your swarm service.
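To verify, you can hit the load balancer's DNS name once the service is up (the hostname is a placeholder; it is the `DefaultDNSTarget` output of your stack):

```bash
# Expect an HTTP 200 from nginx once the service task is running.
$ curl -I http://<DefaultDNSTarget>
```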
## How can I configure my load balancer to support SSL/TLS traffic?
Docker uses [Amazon's ACM service](https://aws.amazon.com/certificate-manager/), which provides free SSL/TLS certificates and can be used with ELBs. You need to create a new certificate for your domain and get the ARN for that certificate.
You add a label to your service to tell swarm that you want to use a given ACM
cert for SSL connections to your service.
### Examples
Start a service and listen on the ELB with ports `80` and `443`. Port `443` is served using an SSL certificate from ACM, which is referenced by the ARN described in the service label `com.docker.aws.lb.arn`.
```bash
$ docker service create \
@ -95,7 +101,8 @@ $ docker service create \
### Add a CNAME for your ELB
Once you have your ELB set up with the correct listeners and certificates, you need to add a DNS CNAME at your DNS provider that points to your ELB.
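Once the record has propagated, you can check that it resolves to the ELB (the domain is an example):

```bash
# Confirm the CNAME points at the load balancer's DNS name.
$ dig +short app.example.com CNAME
```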
### ELB SSL limitations
@ -105,4 +112,8 @@ Once you have your ELB setup, with the correct listeners and certificates, you n
## Can I manually change the ELB configuration?
No. If you make any manual changes to the ELB, they are removed the next time we
update the ELB configuration based on any swarm changes. This is because the
swarm service configuration is the source of record for service ports. If you
add listeners to the ELB manually, they could conflict with what is in swarm,
and cause issues.


@ -6,6 +6,12 @@ title: Open source components and licensing
Docker for AWS and Azure Editions are built using open source software.
Docker for AWS and Azure Editions distribute some components that are licensed
under the GNU General Public License. You can download the source for these
components [here](https://download.docker.com/opensource/License.tar.gz).
The sources for qemu-img can be obtained
[here](http://wiki.qemu-project.org/download/qemu-2.4.1.tar.bz2). The sources
for the gettext and glib libraries that qemu-img requires were obtained from
[Homebrew](https://brew.sh/) and may be retrieved using `brew install
--build-from-source gettext glib`.


@ -153,8 +153,8 @@ $ docker volume create -d "cloudstor:aws" --opt backing=shared mysharedvol1
If EBS is available and enabled, you can use a templatized notation with the
`docker service create` CLI to create and mount a unique `relocatable` Cloudstor
volume backed by a specified type of EBS for each task in a swarm service. New
EBS volumes typically take a few minutes to be created. Besides `backing=relocatable`, the following volume options are available:
| Option | Description |
|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@ -220,8 +220,8 @@ $ docker service create \
{% endraw %}
```
Here, each task has mounted its own volume at `/mydata/` and the files under that mountpoint are unique to that task.
When a task with only `shared` EFS volumes mounted is rescheduled on a different
node, Docker interacts with the Cloudstor plugin to create and mount the volume


@ -7,15 +7,15 @@ title: Docker for AWS release notes
{% include d4a_buttons.md %}
## Enterprise Edition
[Docker Enterprise Edition Lifecycle](https://success.docker.com/Policies/Maintenance_Lifecycle){: target="_blank" class="_"}
[Deploy Docker Enterprise Edition (EE) for AWS](https://store.docker.com/editions/enterprise/docker-ee-aws?tab=description){: target="_blank" class="button outline-btn blank_"}
### 17.06 EE
- Docker engine 17.06 EE
- For Std/Adv external logging has been removed, as it is now handled by [UCP](https://docs.docker.com/datacenter/ucp/2.0/guides/configuration/configure-logs/){: target="_blank" class="_"}
- UCP 2.2.0
- DTR 2.3.0
### 17.03 EE
@ -35,7 +35,7 @@ Release date: 08/17/2017
**New**
- Docker Engine upgraded to [Docker 17.06.1 CE](https://github.com/docker/docker-ce/releases/tag/v17.06.1-ce){: target="_blank" class="_"}
- Improvements to CloudStor support
- Added SSL support at the LB level
@ -45,7 +45,7 @@ Release date: 06/28/2017
**New**
- Docker Engine upgraded to [Docker 17.06.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.06.0-ce){: target="_blank" class="_"}
- Fixed an issue with load balancer controller that caused the ELB health check to fail.
- Added VPCID output when a VPC is created
- Added CloudStor support (EFS (in regions that support EFS), and EBS) for [persistent storage volumes](persistent-data-volumes.md)
@ -62,7 +62,7 @@ Release date: 03/30/2017
**New**
- Docker Engine upgraded to [Docker 17.03.1 CE](https://github.com/docker/docker/blob/master/CHANGELOG.md){: target="_blank" class="_"}
- Updated AZ for Sao Paulo
### 17.03.0 CE
@ -71,7 +71,7 @@ Release date: 03/01/2017
**New**
- Docker Engine upgraded to [Docker 17.03.0 CE](https://github.com/docker/docker/blob/master/CHANGELOG.md){: target="_blank" class="_"}
- Added r4 EC2 instance types
- Added `ELBDNSZoneID` output to make it easier to interact with Route53


@ -6,28 +6,50 @@ title: Modify Docker install on AWS
## Scaling workers
You can scale the worker count using the AWS Auto Scaling group. Docker automatically joins new instances to the swarm, and removes them when they are taken out of the group.
There are currently two ways to scale your worker group. You can "update" your stack and change the number of workers in the CloudFormation template parameters, or you can manually update the Auto Scaling group in the AWS console for EC2 Auto Scaling groups.
Changing manager count live is _not_ currently supported.
### AWS console
Log in to the AWS console, and go to the EC2 dashboard. On the lower left-hand side, select the "Auto Scaling Groups" link.
Look for the Auto Scaling group with a name like `$STACK_NAME-NodeASG-*`, where `$STACK_NAME` is the name of the stack you created when filling out the CloudFormation template for Docker for AWS. Once you find it, click the checkbox next to the name, then click the "Edit" button on the lower detail pane.
<img src="img/autoscale_update.png">
![edit autoscale settings](img/autoscale_update.png)
Change the "Desired" field to the size of the worker pool that you would like, and hit "Save".
<img src="img/autoscale_save.png">
![save autoscale settings](img/autoscale_save.png)
This takes a few minutes and adds the new workers to your swarm automatically. To lower the number of workers back down, update "Desired" again with the lower number, and the worker pool shrinks until it reaches the new size.
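The same change can also be scripted; a hedged sketch with the AWS CLI (the group name is a placeholder):

```bash
# Set the worker Auto Scaling group to the new desired size.
$ aws autoscaling set-desired-capacity \
    --auto-scaling-group-name <stack-name>-NodeASG-XXXXXXXX \
    --desired-capacity 5
```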
### CloudFormation update
Go to the CloudFormation management page, and click the checkbox next to the stack you want to update. Then click the action button at the top, and select "Update Stack".

![update stack on CloudFormation page](img/cloudformation_update.png)

Pick "Use current template", and then click "Next". Fill out the same parameters you specified before, but this time change your worker count to the new count, and click "Next". Answer the rest of the form questions. CloudFormation shows you a preview of the changes it will make. Review the changes and, if they look good, click "Update". CloudFormation changes the worker pool size to the new value you specified. It takes a few minutes (longer for a larger increase or decrease of nodes), but when complete, you have your swarm with the new worker pool size.


@ -4,27 +4,40 @@ keywords: aws, amazon, iaas, tutorial
title: Docker for AWS upgrades
---
To upgrade, apply a new version of the AWS CloudFormation template that powers Docker for AWS. Depending on changes in the next version, an upgrade involves:
* Changing the AMI backing manager and worker nodes (the Docker engine ships in the AMI)
* Upgrading service containers
* Changing the resource setup in the VPC that hosts Docker for AWS
## Prerequisites
* We recommend only attempting upgrades of swarms with at least 3 managers. A 1-manager swarm may not be able to maintain quorum during the upgrade.
* You can only upgrade one version at a time. Skipping a version during an upgrade is not supported. Downgrades are not tested.
## Upgrading
New releases are announced on the [Release Notes](release-notes.md) page.
To initiate an update, use either the AWS Console or the AWS CLI to initiate a stack update. Use the S3 template URL for the new release and complete the update wizard. This initiates a rolling upgrade of the Docker swarm, and service state is maintained during and after the upgrade. Appropriately scaled services should not experience downtime during an upgrade.
![Upgrade in AWS console](img/cloudformation_update.png)
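From the AWS CLI, a hedged sketch of the same operation (the stack name and template URL are placeholders, and the remaining template parameters would typically reuse their previous values):

```bash
# Point the existing stack at the new release's S3 template to start the rolling upgrade.
$ aws cloudformation update-stack \
    --stack-name docker-swarm \
    --template-url <new-release-template-url> \
    --parameters ParameterKey=KeyName,UsePreviousValue=true \
                 ParameterKey=ClusterSize,UsePreviousValue=true \
    --capabilities CAPABILITY_IAM
```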
Note that single containers started (for example) with `docker run -d` are **not** preserved during an upgrade. This is because they're not Docker Swarm objects, but are known only to the individual Docker engines.
## Changing instance sizes and other template parameters
In addition to upgrading Docker for AWS from one version to the next, you can also use the AWS Update Stack feature to change template parameters such as worker count and instance type. Changing manager count is **not** supported.