mirror of https://github.com/docker/docs.git
Added updates and Release notes for Docker for AWS Beta 18 (#1782)
* Added updates and Release notes for Docker for AWS Beta 18 Signed-off-by: Ken Cochrane <KenCochrane@gmail.com>
This commit is contained in:
parent
5c8a98accc
commit
fec6553dca

@@ -60,6 +60,8 @@ guides:
    title: Upgrading
  - path: /docker-for-aws/deploy/
    title: Deploy your app
  - path: /docker-for-aws/persistent-data-volumes/
    title: Persistent data volumes
  - path: /docker-for-aws/faqs/
    title: FAQs
  - path: /docker-for-aws/opensource/

@@ -76,6 +78,8 @@ guides:
    title: Upgrading
  - path: /docker-for-azure/deploy/
    title: Deploy your app
  - path: /docker-for-azure/persistent-data-volumes/
    title: Persistent data volumes
  - path: /docker-for-azure/faqs/
    title: FAQs
  - path: /docker-for-azure/opensource/

@@ -46,9 +46,10 @@ This AWS documentation page will describe how you can tell if you have EC2-Class

### Possible fixes to the EC2-Classic region issue:

There are a few workarounds you can try to get Docker for AWS up and running. (A quick way to check whether a region in your account still uses EC2-Classic is sketched after this list.)

1. Create your own VPC, then [install Docker for AWS with a pre-existing VPC](index.md#install-with-an-existing-vpc).
2. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`, so try another region such as `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions are more than likely set up with **EC2-VPC**, and you will no longer have this issue.
3. Create a new AWS account. All new accounts are set up using **EC2-VPC** and do not have this problem.
4. Contact AWS support to convert your **EC2-Classic** account to an **EC2-VPC** account. For more information, check out the following answer for **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** on https://aws.amazon.com/vpc/faqs/#Default_VPCs
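
One hedged way to check from the command line (assuming the AWS CLI is installed and configured; the region shown is only an example): if the output lists `EC2` under `supported-platforms`, the account still has EC2-Classic in that region, while `VPC` alone means the account is EC2-VPC only.

```bash
# Example region only; repeat the check for any region you plan to deploy to
aws ec2 describe-account-attributes \
    --attribute-names supported-platforms \
    --region us-east-1
```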

### Helpful links:

- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html

@@ -60,7 +61,46 @@ There are a few work arounds that you can try to get Docker for AWS up and runni

## Can I use my existing VPC?

Yes, see [install Docker for AWS with a pre-existing VPC](index.md#install-with-an-existing-vpc) for more info.

## Recommended VPC and subnet setup

#### VPC

* **CIDR:** 172.31.0.0/16
* **DNS hostnames:** yes
* **DNS resolution:** yes
* **DHCP option set:** DHCP Options (below)

#### Internet gateway

* **VPC:** VPC (above)

#### DHCP option set

* **domain-name:** ec2.internal
* **domain-name-servers:** AmazonProvidedDNS

#### Subnet1

* **CIDR:** 172.31.16.0/20
* **Auto-assign public IP:** yes
* **Availability-Zone:** A

#### Subnet2

* **CIDR:** 172.31.32.0/20
* **Auto-assign public IP:** yes
* **Availability-Zone:** B

#### Subnet3

* **CIDR:** 172.31.0.0/20
* **Auto-assign public IP:** yes
* **Availability-Zone:** C

#### Route table

* **Destination CIDR block:** 0.0.0.0/0
* **Subnets:** Subnet1, Subnet2, Subnet3
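
If you prefer to script this recommended layout rather than click through the console, a rough AWS CLI sketch might look like the following. Only Subnet1 is shown and the Availability Zone name is a placeholder; repeat the subnet steps for Subnet2 and Subnet3. New VPCs normally pick up a default DHCP options set that already uses `AmazonProvidedDNS`.

```bash
# Create the VPC and enable DNS support and hostnames
VPC_ID=$(aws ec2 create-vpc --cidr-block 172.31.0.0/16 --query 'Vpc.VpcId' --output text)
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-support
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames

# Create and attach an internet gateway
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

# Subnet1 (repeat with the CIDRs and zones for Subnet2 and Subnet3)
SUBNET1_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 172.31.16.0/20 \
    --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)
aws ec2 modify-subnet-attribute --subnet-id "$SUBNET1_ID" --map-public-ip-on-launch

# Route table with a default route out through the internet gateway
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET1_ID"
```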

##### Subnet note:

If you are using the `10.0.0.0/16` CIDR in your VPC, make sure that any Docker network you create (using the `docker network create --subnet` option) uses a subnet that doesn't conflict with the `10.0.0.0` network.
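
For example, a minimal sketch (the network name `appnet` and the `192.168.32.0/24` range are arbitrary placeholders) that keeps the overlay network's subnet clear of `10.0.0.0/16`:

```bash
# Choose a subnet that does not overlap the VPC's 10.0.0.0/16 range
docker network create \
    --driver overlay \
    --subnet 192.168.32.0/24 \
    appnet
```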

## Which AWS regions will this work with?

@@ -68,7 +108,7 @@ Docker for AWS should work with all regions except for AWS China, which is a lit

## How many Availability Zones does Docker for AWS use?

Docker for AWS determines the correct number of Availability Zones to use based on the region. In regions that support it, we use 3 Availability Zones, and 2 in the rest of the regions. We recommend running production workloads only in regions that have at least 3 Availability Zones.

## What do I do if I get `KeyPair error` on AWS?

As part of the prerequisites, you need to have an SSH key uploaded to the AWS region you are trying to deploy to.
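
One way to do this (the key name and region are placeholders) is to create a key pair in the target region with the AWS CLI and keep the private key locally; you can also import an existing public key from the EC2 console under **Key Pairs**.

```bash
# Creates a key pair in the chosen region and saves the private key locally
aws ec2 create-key-pair \
    --key-name docker-swarm-key \
    --region us-east-1 \
    --query 'KeyMaterial' --output text > docker-swarm-key.pem
chmod 400 docker-swarm-key.pem
```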

@@ -6,7 +6,7 @@ title: Docker for AWS IAM permissions

The following IAM permissions are required to use Docker for AWS.

Before you deploy Docker for AWS, your account needs these permissions for the stack to deploy correctly.
If you create and use an IAM role with these permissions for creating the stack, CloudFormation will use the role's permissions instead of your own, using the AWS CloudFormation Service Role feature.

This feature is called [AWS CloudFormation Service Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html?icmpid=docs_cfn_console).
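
For illustration only, launching a stack with a service role from the AWS CLI might look like the sketch below; the stack name, template URL, and role ARN are placeholders rather than values from this documentation.

```bash
# Placeholders: substitute your own stack name, the Docker for AWS template URL, and your role ARN
aws cloudformation create-stack \
    --stack-name docker-for-aws \
    --template-url https://example.s3.amazonaws.com/Docker.tmpl \
    --role-arn arn:aws:iam::123456789012:role/DockerForAWSCloudFormationRole \
    --capabilities CAPABILITY_IAM
```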

@@ -114,6 +114,7 @@ follow the link for more information.
            "ec2:DisassociateRouteTable",
            "ec2:GetConsoleOutput",
            "ec2:GetConsoleScreenshot",
            "ec2:ModifyNetworkInterfaceAttribute",
            "ec2:ModifyVpcAttribute",
            "ec2:RebootInstances",
            "ec2:ReleaseAddress",

@@ -309,6 +310,16 @@ follow the link for more information.
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt1487169681000",
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

@@ -12,7 +12,7 @@ redirect_from:

## Quickstart

If your account [has the proper
permissions](/docker-for-aws/iam-permissions.md), you can
use the blue button from the stable or beta channel to bootstrap Docker for AWS
using CloudFormation. For more about stable and beta channels, see the
[FAQs](/docker-for-aws/faqs.md#stable-and-beta-channels).

@@ -45,6 +45,31 @@ using CloudFormation. For more about stable and beta channels, see the
</tr>
</table>

## Deployment options

There are two ways to deploy Docker for AWS:

- With a pre-existing VPC
- With a new VPC created by Docker

We recommend allowing Docker for AWS to create the VPC, since it allows Docker to optimize the environment. Installing in an existing VPC requires more work.

### Create a new VPC

This approach creates a new VPC, subnets, gateways, and everything else needed to run Docker for AWS. It is the easiest way to get started and requires the least amount of work.

All you need to do is run the CloudFormation template, answer some questions, and you are good to go.

### Install with an existing VPC

If you need to install Docker for AWS with an existing VPC, you need to do a few preliminary steps. See [recommended VPC and subnet setup](faqs.md#recommended-vpc-and-subnet-setup) for more details.

1. Pick a VPC in a region you want to use.

2. Make sure the selected VPC is set up with an Internet Gateway, subnets, and route tables (a quick way to verify this is sketched after this list).

3. You need to have three different subnets, ideally each in its own Availability Zone. If you are running in a region with only two Availability Zones, you need to add more than one subnet into one of the Availability Zones. For production deployments, we recommend only deploying to regions that have three or more Availability Zones.

4. When you launch the Docker for AWS CloudFormation stack, make sure you use the template for existing VPCs. That template prompts you for the VPC and subnets that you want to use for Docker for AWS.
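
As a rough, non-authoritative check (the VPC ID is a placeholder), you can confirm these pieces exist with the AWS CLI:

```bash
# Placeholder VPC ID; confirm the VPC has an attached internet gateway, subnets, and route tables
VPC_ID=vpc-0123456789abcdef0

aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values="$VPC_ID"
aws ec2 describe-subnets --filters Name=vpc-id,Values="$VPC_ID"
aws ec2 describe-route-tables --filters Name=vpc-id,Values="$VPC_ID"
```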

## Prerequisites

- Access to an AWS account with permissions to use CloudFormation and to create the following objects. [Full set of required permissions](iam-permissions.md).

@@ -140,7 +165,7 @@ Elastic Load Balancers (ELBs) are set up to help with routing traffic to your sw

Docker for AWS automatically configures logging to CloudWatch for containers you run on Docker for AWS. A Log Group is created for each Docker for AWS install, and a log stream for each container.

The `docker logs` and `docker service logs` commands are not supported on Docker for AWS when using CloudWatch for logs. Instead, check container logs in CloudWatch.
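
For example, you can read a container's log events with the AWS CLI; the log group and stream names below are illustrative, not the exact names Docker for AWS generates.

```bash
# List the log streams in the install's log group, then fetch events from one stream
aws logs describe-log-streams --log-group-name my-docker-for-aws-lg
aws logs get-log-events \
    --log-group-name my-docker-for-aws-lg \
    --log-stream-name my-container-stream
```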

## System containers

@@ -0,0 +1,61 @@
---
description: Persistent data volumes
keywords: aws persistent data volumes
title: Docker for AWS persistent data volumes
---

## What is Cloudstor?

Cloudstor is a volume plugin managed by Docker. It comes pre-installed and pre-configured in swarms deployed on Docker for AWS. Swarm tasks use a volume created through Cloudstor to mount a persistent data volume that stays attached to the swarm tasks no matter which swarm node they get scheduled or migrated to. Cloudstor relies on shared storage infrastructure provided by AWS to allow swarm tasks to create/mount their persistent volumes on any node in the swarm. In a future release we will introduce support for direct attached storage to satisfy very low latency/high IOPS requirements.

## Use Cloudstor

After creating a swarm on Docker for AWS and connecting to any manager using SSH, verify that Cloudstor is already installed and configured for the stack/resource group:

```bash
$ docker plugin ls
ID                  NAME                                    DESCRIPTION                       ENABLED
f416c95c0dcc        docker4x/cloudstor:aws-v1.13.1-beta18   cloud storage plugin for Docker   true
```

**Note**: Make note of the plugin tag name, because it will change between versions, and yours may be different than the one listed here.

The following examples show how to create swarm services that require data persistence using the `--mount` flag and specifying Cloudstor as the driver.

### Share the same volume between tasks:

```bash
docker service create --replicas 5 --name ping1 \
    --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source=sharedvol1,destination=/shareddata \
    alpine ping docker.com
```

Here all replicas/tasks of the service `ping1` share the same persistent volume `sharedvol1` mounted at the `/shareddata` path within the container. Docker Swarm takes care of interacting with the Cloudstor plugin to make sure the common backing store is mounted on all nodes in the swarm where tasks of the service are scheduled. Because the volume is shared, each task must take care not to write to the same file at the same time, which can cause corruption.

With the above example, you can make sure that the volume is indeed shared by logging into one of the containers in one swarm node, writing to a file under `/shareddata/`, and reading the file under `/shareddata/` from another container (on the same node or a different node).
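
A quick way to try this (the container IDs are placeholders you would look up with `docker ps` on each node):

```bash
# On one node, write a file into the shared volume from a ping1 container
docker exec <ping1-container-on-node-1> sh -c 'echo hello > /shareddata/test.txt'

# On another node, read the same file back from a different ping1 container
docker exec <ping1-container-on-node-2> cat /shareddata/test.txt
```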

### Use a unique volume per task:

```bash
docker service create --replicas 5 --name ping2 \
    --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
    alpine ping docker.com
```

Here the templatized notation is used to indicate to Docker Swarm that a unique volume should be created and mounted for each replica/task of the service `ping2`. After the initial creation of the volumes corresponding to the tasks they are attached to (on the nodes where the tasks are scheduled), if a task is rescheduled on a different node, Docker Swarm interacts with the Cloudstor plugin to create and mount the volume corresponding to the task on the node the task got scheduled on. It's highly recommended that you use the `.Task.Slot` template to make sure task N always gets access to volume N no matter which node it is executing on or scheduled to.

In the above example, each task has its own volume mounted at `/mydata/`, and the files under there are unique to the task mounting the volume.

### List or remove volumes created by Cloudstor

You can use `docker volume ls` to enumerate all volumes created on a node, including those backed by Cloudstor. Note that if a swarm service task starts off on a node with an associated Cloudstor volume and later gets rescheduled to a different node, `docker volume ls` on the initial node continues to list the Cloudstor volume that was created for the task that no longer executes on the node, even though the volume is mounted elsewhere. Do NOT prune or `rm` the volumes that get enumerated on a node without any associated tasks, since doing so results in data loss if the same volume is mounted on another node (that is, the volume shows up in the `docker volume ls` output on another node in the swarm). We may try to detect and handle this situation after the Beta.
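
For illustration only, the output looks something like the following; the volume names are examples and yours will differ.

```bash
$ docker volume ls
DRIVER                                  VOLUME NAME
docker4x/cloudstor:aws-v1.13.1-beta18   sharedvol1
docker4x/cloudstor:aws-v1.13.1-beta18   ping2-1-vol
```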

### Configure IO performance

If you want a higher level of IO performance, such as the maxIO mode for EFS, you can specify a `perfmode` parameter as a `volume-opt`:

```bash
docker service create --replicas 5 --name ping3 \
    --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol5,destination=/mydata,volume-opt=perfmode=maxio \
    alpine ping docker.com
```

@@ -27,6 +27,21 @@ Release date: 01/18/2017

## Beta Channel

### 1.13.1-beta18
Release date: 02/16/2017

**New**

- Docker Engine upgraded to [Docker 1.13.1](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- Added a second CloudFormation template that allows you to [install Docker for AWS into a pre-existing VPC](index.md#install-with-an-existing-vpc).
- Added Swarm-wide support for [persistent storage volumes](persistent-data-volumes.md)
- Added the following engine labels (a placement sketch follows this list):
  - **os** (linux)
  - **region** (us-east-1, etc)
  - **availability_zone** (us-east-1a, etc)
  - **instance_type** (t2.micro, etc)
  - **node_type** (worker, manager)
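
As a hedged example of how these labels can be used (the service name, image, and values are arbitrary), you can target placement with engine-label constraints:

```bash
# Run a service only on worker nodes in a specific availability zone
docker service create --name web \
    --constraint 'engine.labels.node_type == worker' \
    --constraint 'engine.labels.availability_zone == us-east-1a' \
    nginx
```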

### 1.13.1-rc2-beta17
Release date: 02/07/2017

@@ -0,0 +1,51 @@
---
description: Persistent data volumes
keywords: azure persistent data volumes
title: Docker for Azure persistent data volumes
---

## What is Cloudstor?

Cloudstor is a volume plugin managed by Docker. It comes pre-installed and pre-configured in swarms deployed on Docker for Azure. Swarm tasks use a volume created through Cloudstor to mount a persistent data volume that stays attached to the swarm tasks no matter which swarm node they get scheduled or migrated to. Cloudstor relies on shared storage infrastructure provided by Azure to allow swarm tasks to create/mount their persistent volumes on any node in the swarm. In a future release we will introduce support for direct attached storage to satisfy very low latency/high IOPS requirements.

## Use Cloudstor

After creating a swarm on Docker for Azure and connecting to any manager using SSH, verify that Cloudstor is already installed and configured for the stack/resource group:

```bash
$ docker plugin ls
ID                  NAME                                      DESCRIPTION                       ENABLED
f416c95c0dcc        docker4x/cloudstor:azure-v1.13.1-beta18   cloud storage plugin for Docker   true
```

**Note**: Make note of the plugin tag name, because it will change between versions, and yours may be different than the one listed here.

The following examples show how to create swarm services that require data persistence using the `--mount` flag and specifying Cloudstor as the driver.

### Share the same volume between tasks:

```bash
docker service create --replicas 5 --name ping1 \
    --mount type=volume,volume-driver=docker4x/cloudstor:azure-v1.13.1-beta18,source=sharedvol1,destination=/shareddata \
    alpine ping docker.com
```

Here all replicas/tasks of the service `ping1` share the same persistent volume `sharedvol1` mounted at the `/shareddata` path within the container. Docker Swarm takes care of interacting with the Cloudstor plugin to make sure the common backing store is mounted on all nodes in the swarm where tasks of the service are scheduled. Because the volume is shared, each task must take care not to write to the same file at the same time, which can cause corruption.

With the above example, you can make sure that the volume is indeed shared by logging into one of the containers in one swarm node, writing to a file under `/shareddata/`, and reading the file under `/shareddata/` from another container (on the same node or a different node).

### Use a unique volume per task:

```bash
docker service create --replicas 5 --name ping2 \
    --mount type=volume,volume-driver=docker4x/cloudstor:azure-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
    alpine ping docker.com
```

Here the templatized notation is used to indicate to Docker Swarm that a unique volume should be created and mounted for each replica/task of the service `ping2`. After the initial creation of the volumes corresponding to the tasks they are attached to (on the nodes where the tasks are scheduled), if a task is rescheduled on a different node, Docker Swarm interacts with the Cloudstor plugin to create and mount the volume corresponding to the task on the node the task got scheduled on. It's highly recommended that you use the `.Task.Slot` template to make sure task N always gets access to volume N no matter which node it is executing on or scheduled to.

In the above example, each task has its own volume mounted at `/mydata/`, and the files under there are unique to the task mounting the volume.

#### List or remove volumes created by Cloudstor

You can use `docker volume ls` to enumerate all volumes created on a node, including those backed by Cloudstor. Note that if a swarm service task starts off on a node with an associated Cloudstor volume and later gets rescheduled to a different node, `docker volume ls` on the initial node continues to list the Cloudstor volume that was created for the task that no longer executes on the node, even though the volume is mounted elsewhere. Do NOT prune or `rm` the volumes that get enumerated on a node without any associated tasks, since doing so results in data loss if the same volume is mounted on another node (that is, the volume shows up in the `docker volume ls` output on another node in the swarm). We may try to detect and handle this situation after the Beta.