Remove obsolete products

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
This commit is contained in:
Justin Cormack 2020-03-19 11:08:15 +00:00
parent 919714630f
commit 92a231a979
No known key found for this signature in database
GPG Key ID: 2D9CA5D475D0EE4E
36 changed files with 1 additions and 2590 deletions

View File

@ -1,25 +0,0 @@
---
description: Template Archive
keywords: aws, amazon, iaas, release, edge, stable
title: Docker for AWS template archive
---
CloudFormation templates for past releases. If you are just starting out, use
the latest version.
## CE (Stable)
### 18.03.0
http://editions-us-east-1.s3.amazonaws.com/aws/stable/18.03.0/Docker.tmpl
## EE
### 17.06.2
* Docker EE Basic: https://editions-us-east-1.s3.amazonaws.com/aws/17.06/17.06.2/Docker-EE.tmpl
* Docker EE Standard and Advanced: https://editions-us-east-1.s3.amazonaws.com/aws/17.06/17.06.2/Docker-DDC.tmpl
### 17.03.2
https://editions-us-east-1.s3.amazonaws.com/aws/17.03/17.03.2/Docker.tmpl

View File

@ -1,201 +0,0 @@
---
description: Deploying Apps on Docker for AWS
keywords: aws, amazon, iaas, deploy
title: Deploy your app on Docker for AWS
---
## Connect to your manager nodes
This section walks you through connecting to your installation and deploying
applications. Instructions are included for both AWS and Azure, so be sure to
follow the instructions for the cloud provider of your choice in each section.
First, you obtain the public IP address for a manager node. Any manager
node can be used for administrating the swarm.
### Manager Public IP on AWS
Once you've deployed Docker on AWS, go to the "Outputs" tab for the stack in
CloudFormation.
The "Managers" output is a URL you can use to see the available manager nodes of
the swarm in your AWS console. Once on this page, you can see the
"Public IP" of each manager node in the table, or in the "Description" tab if you
click on the instance.
![managers](img/managers.png)
## Connect via SSH
### Manager nodes
Obtain the public IP and/or port for the manager node as instructed above, then
use the provided SSH key to begin administering your swarm:
```bash
$ ssh -i <path-to-ssh-key> docker@<ssh-host>
Welcome to Docker!
```
Once you are logged into the container you can run Docker commands on the swarm:
```bash
$ docker info
$ docker node ls
```
You can also tunnel the Docker socket over SSH to remotely run commands on the cluster (requires [OpenSSH 6.7](https://lwn.net/Articles/609321/) or later):
```bash
$ ssh -i <path-to-ssh-key> -NL localhost:2374:/var/run/docker.sock docker@<ssh-host> &
$ docker -H localhost:2374 info
```
If you don't want to pass `-H` when using the tunnel, you can set the `DOCKER_HOST` environment variable to point to the localhost tunnel opening.
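For example, a minimal sketch of using the environment variable (assuming the SSH tunnel from the previous example is already running in the background):
```bash
# Point the local CLI at the tunnelled Docker socket
$ export DOCKER_HOST=tcp://localhost:2374
$ docker info
$ docker node ls
```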
### Worker nodes
As of Beta 13, the worker nodes also have SSH enabled when connecting from
manager nodes. SSH access is not possible to the worker nodes from the public
Internet. To access the worker nodes, you need to first connect to a
manager node (see above).
On the manager node you can then `ssh` to the worker node, over the private
network. Make sure you have SSH agent forwarding enabled (see below). If you run
the `docker node ls` command you can see the full list of nodes in your swarm.
You can then `ssh docker@<worker-host>` to get access to that node.
#### AWS
Use the `HOSTNAME` reported in `docker node ls` directly.
```
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
a3d4vdn9b277p7bszd0lz8grp * ip-172-31-31-40.us-east-2.compute.internal Ready Active Reachable
...
$ ssh docker@ip-172-31-31-40.us-east-2.compute.internal
```
#### Use SSH agent forwarding
SSH agent forwarding allows you to forward along your ssh keys when connecting
from one node to another. This eliminates the need for installing your private
key on all nodes you might want to connect from.
You can use this feature to SSH into worker nodes from a manager node without
installing keys directly on the manager.
If you haven't added your SSH key to the `ssh-agent`, you also need to do
this first.
To see the keys in the agent already, run:
```bash
$ ssh-add -L
```
If you don't see your key, add it like this.
```bash
$ ssh-add ~/.ssh/your_key
```
On macOS, the `ssh-agent` forgets this key on restart. But
you can import your SSH key into your Keychain so that your key survives
restarts.
```bash
$ ssh-add -K ~/.ssh/your_key
```
You can then enable SSH forwarding per-session using the `-A` flag for the ssh
command.
Connect to the Manager.
```bash
$ ssh -A docker@<manager ip>
```
To always have it turned on for a given host, you can edit your ssh config file
(`/etc/ssh_config`, `~/.ssh/config`, etc) to add the `ForwardAgent yes` option.
Example configuration:
```conf
Host manager0
HostName <manager ip>
ForwardAgent yes
```
To SSH in to the manager with the above settings:
```bash
$ ssh docker@manager0
```
## Run apps
You can now start creating containers and services.
```bash
$ docker run hello-world
```
You can run websites too. Ports exposed with `--publish` are automatically exposed
through the platform load balancer:
```bash
$ docker service create --name nginx --publish published=80,target=80 nginx
```
Once up, find the `DefaultDNSTarget` output in either the AWS or Azure portals
to access the site.
### Execute docker commands in all swarm nodes
There are cases (such as installing a volume plugin) where a Docker command may need to be executed on all the nodes across the cluster. You can use the `swarm-exec` tool to achieve that.
Usage: `swarm-exec {Docker command}`
The following installs a test plugin on all the nodes in the cluster.
Example: `swarm-exec docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin`
This tool internally makes use of docker global-mode service that runs a task on
each of the nodes in the cluster. This task in turn executes your docker
command. The global-mode service also guarantees that when a new node is added
to the cluster or during upgrades, a new task is executed on that node and hence
the docker command is automatically executed.
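As an illustrative sketch of that idea (not the actual `swarm-exec` implementation), a global-mode service can run a one-shot Docker command on every node; the service name is a placeholder, and this assumes the official `docker` CLI image and access to each node's Docker socket:
```bash
# One task per node; each task talks to its own node's engine via the mounted socket
$ docker service create \
  --mode global \
  --restart-condition none \
  --name run-once-per-node \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  docker docker info
```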
### Docker Stack deployment
To deploy complex multi-container apps, you can use the `docker stack deploy` command. You can either deploy a docker compose file on your machine over an SSH tunnel, or copy the `docker-compose.yml` file to a manager node via `scp` for example. You can then SSH into the manager node and run `docker stack deploy` with the `--compose-file` or `-c` option. See [docker stack deploy options](/engine/reference/commandline/stack_deploy/#options) for the list of different options. If you have multiple manager nodes, make sure you are logged in to the one with the stack file copy.
For example:
```bash
docker stack deploy --compose-file docker-compose.yml myapp
```
See [Docker voting app](https://github.com/docker/example-voting-app) for a good sample app to test stack deployments.
By default, apps deployed with stacks do not have ports publicly exposed. Update port mappings for services, and Docker automatically wires up the underlying platform load balancers:
```bash
$ docker service update --publish-add published=80,target=80 <example-service>
```
### Images in private repos
To create swarm services using images in private repos, first make sure you're
authenticated and have access to the private repo, then create the service with
the `--with-registry-auth` flag (the example below assumes you're using Docker
Hub):
```bash
$ docker login
...
$ docker service create --with-registry-auth user/private-repo
...
```
This causes the swarm to cache and use the cached registry credentials when creating containers for the service.

View File

@ -1,205 +0,0 @@
---
description: Frequently asked questions
keywords: aws faqs
title: Docker for AWS frequently asked questions (FAQ)
toc_max: 2
---
## Stable and edge channels
Two different download channels are available for Docker for AWS:
* The **stable channel** provides a general availability release-ready deployment for a fully baked and
tested, more reliable cluster. The stable version of Docker for AWS comes with
the latest released version of Docker Engine. The release schedule is synched
with Docker Engine releases and hotfixes. On the stable channel, you can select
whether to send usage statistics and other data.
* The **edge channel** provides a deployment with new features we are
working on, but is not necessarily fully tested. It comes with the
experimental version of Docker Engine. Bugs, crashes, and issues are
more likely to occur with the edge cluster, but you get a chance to preview
new functionality, experiment, and provide feedback as the deployment
evolves. Releases are typically more frequent than for stable, often one
or more per month. Usage statistics and crash reports are sent by default.
You do not have the option to disable this on the edge channel.
## Can I use my own AMI?
No, at this time we only support the default Docker for AWS AMI.
## How can I use Docker for AWS with an AWS account in an EC2-Classic region?
If you have an AWS account that was created before **December 4th, 2013**, you
have what is known as an **EC2-Classic** account in regions where you have
previously deployed resources. **EC2-Classic** accounts don't have default VPCs
or the associated subnets, etc. This causes a problem when using our
CloudFormation template because we are using the
[Fn:GetAZs](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getavailabilityzones.html)
function they provide to determine which availability zones you have access to.
When used in a region where you have **EC2-Classic**, this function returns
all availability zones for a region, even ones you don't have access to. When
you have an **EC2-VPC** account, it returns only the availability zones you
have access to.
This causes an error like the following:
> "Value (us-east-1a) for parameter availabilityZone is invalid.
Subnets can currently only be created in the following availability
zones: us-east-1d, us-east-1c, us-east-1b, us-east-1e."
This error occurs if you have an **EC2-Classic** account and don't have access
to the `a` and `b` availability zones for that region.
There isn't anything we can do right now to fix this issue. We have contacted
Amazon to provide a solution.
### How to tell if you are in the EC2-Classic region.
[This AWS documentation
page](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html)
describes how you can tell if you have EC2-Classic, EC2-VPC or both.
### Possible fixes to the EC2-Classic region issue:
There are a few workarounds that you can try to get Docker for AWS up and running for you.
1. Create your own VPC, then [install Docker for AWS with a pre-existing VPC](/docker-for-aws/index.md#install-with-an-existing-vpc).
2. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`. So try another region, `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions should be set up with **EC2-VPC** and the issue shouldn't occur.
3. Create a new AWS account; all new accounts are set up using **EC2-VPC** and do not have this problem.
4. Contact AWS support to convert your **EC2-Classic** account to an **EC2-VPC** account. For more information, check out the answer to **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** at https://aws.amazon.com/vpc/faqs/#Default_VPCs
### Helpful links:
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html
- https://aws.amazon.com/vpc/faqs/#Default_VPCs
- https://aws.amazon.com/blogs/aws/amazon-ec2-update-virtual-private-clouds-for-everyone/
## Can I use my existing VPC?
Yes, see [install Docker for AWS with a pre-existing VPC](/docker-for-aws/index.md#install-with-an-existing-vpc) for more info.
## Recommended VPC and subnet setup
#### VPC
* **CIDR:** 172.31.0.0/16
* **DNS hostnames:** yes
* **DNS resolution:** yes
* **DHCP option set:** DHCP Options (Below)
#### Internet gateway
* **VPC:** VPC (above)
#### DHCP option set
* **domain-name:** ec2.internal
* **domain-name-servers:** AmazonProvidedDNS
#### Subnet1
* **CIDR:** 172.31.16.0/20
* **Auto-assign public IP:** yes
* **Availability-Zone:** A
#### Subnet2
* **CIDR:** 172.31.32.0/20
* **Auto-assign public IP:** yes
* **Availability-Zone:** B
#### Subnet3
* **CIDR:** 172.31.0.0/20
* **Auto-assign public IP:** yes
* **Availability-Zone:** C
#### Route table
* **Destination CIDR block:** 0.0.0.0/0
* **Subnets:** Subnet1, Subnet2, Subnet3
##### Subnet note:
If you are using the `10.0.0.0/16` CIDR in your VPC, then when you create a docker network, you need to pick a subnet (using the `docker network create --subnet` option) that doesn't conflict with the `10.0.0.0` network.
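For instance, a sketch of creating an overlay network on a non-conflicting subnet; the network name and CIDR below are placeholders:
```bash
# Pick a subnet outside the VPC's 10.0.0.0/16 range
$ docker network create \
  --driver overlay \
  --subnet 192.168.100.0/24 \
  mynet
```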
## Which AWS regions does this work with?
Docker for AWS should work with all regions except for AWS US Gov Cloud (us-gov-west-1) and AWS China, which are a little different from the other regions.
## How many Availability Zones does Docker for AWS use?
Docker for AWS determines the correct number of Availability Zones to use based on the region. In regions that support it, we use 3 Availability Zones, and 2 for the rest of the regions. We recommend running production workloads only in regions that have at least 3 Availability Zones.
## What do I do if I get `KeyPair error` on AWS?
As part of the prerequisites, you need to have an SSH key uploaded to the AWS region you are trying to deploy to.
For more information about adding an SSH key pair to your account, refer to the [Amazon EC2 Key Pairs docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
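If you already have a key pair locally, one way to make it available in the target region is to import the public key. This is a sketch; the region, key name, and path are placeholders:
```bash
# Import an existing public key into the region you plan to deploy to
$ aws ec2 import-key-pair \
  --region us-east-1 \
  --key-name docker-for-aws \
  --public-key-material fileb://~/.ssh/id_rsa.pub
```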
## Where are my container logs?
All container logs are aggregated within [AWS CloudWatch](https://aws.amazon.com/cloudwatch/).
## Best practice to deploy a large cluster
When deploying a cluster of more than 20 workers, it can take a very long time for AWS to deploy all of the instances (1+ hours).
It is best to deploy a cluster of 20 workers, then scale it up in the Auto Scaling Group (ASG) once it's been deployed.
Benchmark of 3 Managers (m4.large) + 200 workers (t2.medium):
* Deploying (~3.1hrs)
* Deployment: 3 Managers + 200 workers = ~190mins
* Scaling (~35mins)
* Deployment: 3 Managers + 20 workers = ~20mins
* Scaling: 20 workers -> 200 workers via ASG = ~15mins
> **Note**: During a stack upgrade, you need to match the Auto Scaling Group worker count, otherwise AWS scales it back down (that is, enter 200 workers in the input box).
## Where do I report problems or bugs?
Search for existing issues, or create a new one, within the [Docker for AWS](https://github.com/docker/for-aws) GitHub repositories.
In AWS, if your stack is misbehaving, run the following diagnostic tool from one of the managers. This collects your Docker logs and sends them to Docker:
```bash
$ docker-diagnose
OK hostname=manager1
OK hostname=worker1
OK hostname=worker2
Done requesting diagnostics.
Your diagnostics session ID is 1234567890-xxxxxxxxxxxxxx
Please provide this session ID to the maintainer debugging your issue.
```
> **Note**: Your output may be slightly different from the above, depending on your swarm configuration.
## Metrics
Docker for AWS sends anonymized minimal metrics to Docker (heartbeat). These metrics are used to monitor adoption and are critical to improve Docker for AWS.
## How do I run administrative commands?
By default, when you SSH into a manager, you are logged in as the regular user `docker`. It is possible, however, to run commands with elevated privileges by using `sudo`.
For example, to ping one of the nodes after finding its IP via the Azure/AWS portal, such as 10.0.0.4, you could run:
```bash
$ sudo ping 10.0.0.4
```
> **Note**: Access to Docker for AWS and Azure happens through a shell container that itself runs on Docker.
## What are the Editions containers running after deployment?
In order for our editions to deploy properly and for load balancer integrations to happen, we run a few containers. They are as follows:
| Container name | Description |
|---|---|
| `init` | Sets up the swarm and makes sure that the stack came up properly. (checks manager+worker count).|
| `shell` | This is our shell/ssh container. When you SSH into an instance, you're actually in this container.|
| `meta` | Assists in creating the swarm cluster, giving privileged instances the ability to join the swarm.|
| `l4controller` | Listens for ports exposed at the docker CLI level and opens them in the load balancer. |
## How do I uninstall Docker for AWS?
You can remove the Docker for AWS setup and stacks through the [AWS
Console](https://console.aws.amazon.com/console/home){: target="_blank"
class="_"} on the CloudFormation page. See [Uninstalling or removing a
stack](/docker-for-aws/index.md#uninstalling-or-removing-a-stack).

View File

@ -1,345 +0,0 @@
---
description: IAM permissions
keywords: aws iam permissions
title: Docker for AWS IAM permissions
---
The following IAM permissions are required to use Docker for AWS.
Before you deploy Docker for AWS, your account needs these permissions for the stack to deploy correctly.
If you create and use an IAM role with these permissions for creating the stack, CloudFormation uses the role's permissions instead of your own. This is done with the [AWS CloudFormation Service Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html?icmpid=docs_cfn_console) feature; follow the link for more information.
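As a sketch, a role carrying these permissions can be passed to CloudFormation when the stack is created; the stack name, template URL, role ARN, and parameter values below are placeholders:
```bash
# Let CloudFormation assume a service role instead of using your own permissions
$ aws cloudformation create-stack \
  --stack-name docker-for-aws \
  --template-url <templateurl> \
  --role-arn arn:aws:iam::123456789012:role/DockerForAWSServiceRole \
  --parameters ParameterKey=KeyName,ParameterValue=<keyvalue> \
  --capabilities CAPABILITY_IAM
```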
{% raw %}
```none
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1481924239005",
"Effect": "Allow",
"Action": [
"cloudformation:CancelUpdateStack",
"cloudformation:ContinueUpdateRollback",
"cloudformation:CreateChangeSet",
"cloudformation:CreateStack",
"cloudformation:CreateUploadBucket",
"cloudformation:DeleteStack",
"cloudformation:DescribeAccountLimits",
"cloudformation:DescribeChangeSet",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeStackResource",
"cloudformation:DescribeStackResources",
"cloudformation:DescribeStacks",
"cloudformation:EstimateTemplateCost",
"cloudformation:ExecuteChangeSet",
"cloudformation:GetStackPolicy",
"cloudformation:GetTemplate",
"cloudformation:GetTemplateSummary",
"cloudformation:ListChangeSets",
"cloudformation:ListStackResources",
"cloudformation:ListStacks",
"cloudformation:PreviewStackUpdate",
"cloudformation:SetStackPolicy",
"cloudformation:SignalResource",
"cloudformation:UpdateStack",
"cloudformation:ValidateTemplate"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1481924344000",
"Effect": "Allow",
"Action": [
"ec2:AllocateHosts",
"ec2:AssignPrivateIpAddresses",
"ec2:AssociateRouteTable",
"ec2:AttachInternetGateway",
"ec2:AttachNetworkInterface",
"ec2:AttachVolume",
"ec2:AuthorizeSecurityGroupEgress",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateInternetGateway",
"ec2:CreateNatGateway",
"ec2:CreateNetworkAcl",
"ec2:CreateNetworkAclEntry",
"ec2:CreateNetworkInterface",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:CreateVpc",
"ec2:DeleteInternetGateway",
"ec2:DeleteNatGateway",
"ec2:DeleteNetworkAcl",
"ec2:DeleteNetworkAclEntry",
"ec2:DeleteNetworkInterface",
"ec2:DeleteRoute",
"ec2:DeleteRouteTable",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSubnet",
"ec2:DeleteTags",
"ec2:DeleteVolume",
"ec2:DeleteVpc",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeHosts",
"ec2:DescribeImageAttribute",
"ec2:DescribeImages",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInstances",
"ec2:DescribeInternetGateways",
"ec2:DescribeKeyPairs",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeRegions",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeTags",
"ec2:DescribeVolumeAttribute",
"ec2:DescribeVolumeStatus",
"ec2:DescribeVolumes",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcs",
"ec2:DetachInternetGateway",
"ec2:DetachNetworkInterface",
"ec2:DetachVolume",
"ec2:DisassociateAddress",
"ec2:DisassociateRouteTable",
"ec2:GetConsoleOutput",
"ec2:GetConsoleScreenshot",
"ec2:ImportKeyPair",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ModifyVpcAttribute",
"ec2:ModifySubnetAttribute",
"ec2:RebootInstances",
"ec2:ReleaseAddress",
"ec2:ReleaseHosts",
"ec2:RevokeSecurityGroupEgress",
"ec2:RevokeSecurityGroupIngress",
"ec2:RunInstances",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:TerminateInstances"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1481924651000",
"Effect": "Allow",
"Action": [
"autoscaling:AttachInstances",
"autoscaling:AttachLoadBalancers",
"autoscaling:CompleteLifecycleAction",
"autoscaling:CreateAutoScalingGroup",
"autoscaling:CreateLaunchConfiguration",
"autoscaling:CreateOrUpdateTags",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteLaunchConfiguration",
"autoscaling:DeleteLifecycleHook",
"autoscaling:DeleteNotificationConfiguration",
"autoscaling:DeletePolicy",
"autoscaling:DeleteScheduledAction",
"autoscaling:DeleteTags",
"autoscaling:DescribeAccountLimits",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeAutoScalingNotificationTypes",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeLifecycleHookTypes",
"autoscaling:DescribeLifecycleHooks",
"autoscaling:DescribeLoadBalancers",
"autoscaling:DescribeScalingActivities",
"autoscaling:DescribeScheduledActions",
"autoscaling:DescribeTags",
"autoscaling:DetachInstances",
"autoscaling:DetachLoadBalancers",
"autoscaling:DisableMetricsCollection",
"autoscaling:EnableMetricsCollection",
"autoscaling:EnterStandby",
"autoscaling:ExecutePolicy",
"autoscaling:ExitStandby",
"autoscaling:PutLifecycleHook",
"autoscaling:PutNotificationConfiguration",
"autoscaling:PutScalingPolicy",
"autoscaling:PutScheduledUpdateGroupAction",
"autoscaling:RecordLifecycleActionHeartbeat",
"autoscaling:ResumeProcesses",
"autoscaling:SetDesiredCapacity",
"autoscaling:SetInstanceHealth",
"autoscaling:SetInstanceProtection",
"autoscaling:SuspendProcesses",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"autoscaling:UpdateAutoScalingGroup"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1481924759004",
"Effect": "Allow",
"Action": [
"dynamodb:CreateTable",
"dynamodb:DeleteItem",
"dynamodb:DeleteTable",
"dynamodb:DescribeTable",
"dynamodb:GetItem",
"dynamodb:ListTables",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:UpdateItem",
"dynamodb:UpdateTable"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1481924854000",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DeleteLogGroup",
"logs:DeleteLogStream",
"logs:DescribeLogGroups",
"logs:GetLogEvents",
"logs:PutLogEvents",
"logs:PutRetentionPolicy"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1481924989003",
"Effect": "Allow",
"Action": [
"sqs:ChangeMessageVisibility",
"sqs:CreateQueue",
"sqs:DeleteMessage",
"sqs:DeleteQueue",
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:ListQueues",
"sqs:ReceiveMessage",
"sqs:SendMessage",
"sqs:SetQueueAttributes"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1481924989002",
"Effect": "Allow",
"Action": [
"iam:AddRoleToInstanceProfile",
"iam:CreateInstanceProfile",
"iam:CreateRole",
"iam:DeleteInstanceProfile",
"iam:DeleteRole",
"iam:DeleteRolePolicy",
"iam:GetRole",
"iam:PassRole",
"iam:PutRolePolicy",
"iam:RemoveRoleFromInstanceProfile"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1481924989001",
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddTags",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:AttachLoadBalancerToSubnets",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateLoadBalancerListeners",
"elasticloadbalancing:CreateLoadBalancerPolicy",
"elasticloadbalancing:CreateRule",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteLoadBalancerListeners",
"elasticloadbalancing:DeleteLoadBalancerPolicy",
"elasticloadbalancing:DeleteRule",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:DescribeInstanceHealth",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeLoadBalancerPolicyTypes",
"elasticloadbalancing:DescribeLoadBalancerPolicies",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeRules",
"elasticloadbalancing:DescribeSSLPolicies",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:DescribeTargetGroupAttributes",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:DetachLoadBalancerFromSubnets",
"elasticloadbalancing:DisableAvailabilityZonesForLoadBalancer",
"elasticloadbalancing:EnableAvailabilityZonesForLoadBalancer",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:ModifyRule",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:RemoveTags",
"elasticloadbalancing:SetLoadBalancerListenerSSLCertificate",
"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
"elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
"elasticloadbalancing:SetRulePriorities",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:SetSubnets"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1487169681000",
"Effect": "Allow",
"Action": [
"elasticfilesystem:*"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1487169681009",
"Effect": "Allow",
"Action": [
"lambda:CreateFunction",
"lambda:DeleteFunction",
"lambda:GetFunctionConfiguration",
"lambda:InvokeFunction",
"lambda:UpdateFunctionCode",
"lambda:UpdateFunctionConfiguration"
],
"Resource": [
"*"
]
}
]
}
```
{% endraw %}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 37 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 84 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 57 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 134 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 85 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 17 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 9.3 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 22 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 82 KiB

View File

@ -1,232 +0,0 @@
---
description: Setup & Prerequisites
keywords: aws, amazon, iaas, tutorial
title: Docker for AWS setup & prerequisites
redirect_from:
- /engine/installation/cloud/cloud-ex-aws/
- /engine/installation/amazon/
- /datacenter/install/aws/
---
{% include d4a_buttons.md %}
## Docker Enterprise Edition (EE) for AWS
This deployment is fully baked and tested, and comes with the latest Docker
Enterprise Edition for AWS.
This release is maintained and receives **security and critical bug fixes for
one year**.
[Deploy Docker Enterprise Edition (EE) for AWS](https://hub.docker.com/editions/enterprise/docker-ee-aws?tab=description){: target="_blank" class="button outline-btn blank_"}
## Docker Community Edition (CE) for AWS
### Quickstart
If your account [has the proper
permissions](/docker-for-aws/iam-permissions.md), you can
use the blue button to bootstrap Docker for AWS
using CloudFormation.
{{aws_blue_latest}}
{{aws_blue_vpc_latest}}
### Deployment options
There are two ways to deploy Docker for AWS:
- With a pre-existing VPC
- With a new VPC created by Docker
We recommend allowing Docker for AWS to create the VPC since it allows Docker to optimize the environment. Installing in an existing VPC requires more work.
#### Create a new VPC
This approach creates a new VPC, subnets, gateways, and everything else needed to run Docker for AWS. It is the easiest way to get started, and requires the least amount of work.
All you need to do is run the CloudFormation template, answer some questions, and you are good to go.
#### Install with an Existing VPC
If you need to install Docker for AWS with an existing VPC, you need to do a few preliminary steps. See [recommended VPC and Subnet setup](faqs.md#recommended-vpc-and-subnet-setup) for more details.
1. Pick a VPC in a region you want to use.
2. Make sure the selected VPC is set up with an Internet Gateway, Subnets, and Route Tables.
3. You need to have three different subnets, ideally each in their own availability zone. If you are running in a region with only two Availability Zones, you need to add more than one subnet into one of the availability zones. For production deployments we recommend only deploying to regions that have three or more Availability Zones.
4. When you launch the Docker for AWS CloudFormation stack, make sure you use the one for existing VPCs. This template prompts you for the VPC and subnets that you want to use for Docker for AWS.
### Prerequisites
- Access to an AWS account with permissions to use CloudFormation and to create the following objects. [Full set of required permissions](iam-permissions.md).
- EC2 instances + Auto Scaling groups
- IAM profiles
- DynamoDB Tables
- SQS Queue
- VPC + subnets and security groups
- ELB
- CloudWatch Log Group
- SSH key in AWS in the region where you want to deploy (required to access the completed Docker install)
- An AWS account that supports EC2-VPC (see the [FAQ for details about EC2-Classic](faqs.md))
For more information about adding an SSH key pair to your account, refer
to the [Amazon EC2 Key Pairs
docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
The China and US Gov Cloud AWS partitions are not currently supported.
### Configuration
Docker for AWS is installed with a CloudFormation template that configures
Docker in swarm mode, running on instances backed by custom AMIs. There are two
ways you can deploy Docker for AWS. You can use the AWS Management Console
(browser based), or use the AWS CLI. Both have the following configuration
options.
#### Configuration options
##### KeyName
Pick the SSH key to be used when you SSH into the manager nodes.
##### InstanceType
The EC2 instance type for your worker nodes.
##### ManagerInstanceType
The EC2 instance type for your manager nodes. The larger your swarm, the larger
the instance size you should use.
##### ClusterSize
The number of workers you want in your swarm (0-1000).
##### ManagerSize
The number of managers in your swarm. On Docker Engine - Community, you can select either 1,
3 or 5 managers. We only recommend 1 manager for testing and dev setups. There
are no failover guarantees with 1 manager — if the single manager fails the
swarm goes down as well. Additionally, upgrading single-manager swarms is not
currently guaranteed to succeed.
On Docker EE, you can choose to run with 3 or 5 managers.
We recommend at least 3 managers, and if you have a lot of workers, you should
use 5 managers.
##### EnableSystemPrune
Enable if you want Docker for AWS to automatically cleanup unused space on your
swarm nodes.
When enabled, `docker system prune` runs staggered every day, starting at
1:42AM UTC on both workers and managers. The prune times are staggered slightly
so that not all nodes are pruned at the same time. This limits resource
spikes on the swarm.
Pruning removes the following:
- All stopped containers
- All volumes not used by at least one container
- All dangling images
- All unused networks
##### EnableCloudWatchLogs
Enable if you want Docker to send your container logs to CloudWatch ("yes" or
"no"). Defaults to "yes".
##### WorkerDiskSize
Size of Workers' ephemeral storage volume in GiB (20 - 1024).
##### WorkerDiskType
Worker ephemeral storage volume type ("standard", "gp2").
##### ManagerDiskSize
Size of Manager's ephemeral storage volume in GiB (20 - 1024).
##### ManagerDiskType
Manager ephemeral storage volume type ("standard", "gp2").
#### Installing with the AWS Management Console
The simplest way to use the template is to use one of the
[Quickstart](#docker-community-edition-ce-for-aws) options with the
CloudFormation section of the AWS Management Console.
![create a stack](img/aws-select-template.png)
#### Installing with the CLI
You can also invoke the Docker for AWS CloudFormation template from the AWS CLI:
Here is an example of how to use the CLI. Make sure you populate all of the
parameters and their values from the above list:
```bash
$ aws cloudformation create-stack --stack-name teststack --template-url <templateurl> --parameters ParameterKey=<keyname>,ParameterValue=<keyvalue> ParameterKey=InstanceType,ParameterValue=t2.micro ParameterKey=ManagerInstanceType,ParameterValue=t2.micro ParameterKey=ClusterSize,ParameterValue=1 .... --capabilities CAPABILITY_IAM
```
To fully automate installs, you can use the [AWS Cloudformation API](http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/Welcome.html).
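For example, a minimal sketch of waiting for stack creation and reading its outputs from the CLI; the stack name is a placeholder matching the earlier example:
```bash
# Block until the stack finishes creating, then print its outputs (Managers URL, DefaultDNSTarget, etc.)
$ aws cloudformation wait stack-create-complete --stack-name teststack
$ aws cloudformation describe-stacks \
  --stack-name teststack \
  --query 'Stacks[0].Outputs'
```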
### How it works
Docker for AWS starts with a CloudFormation template that creates everything
that you need from scratch. There are only a few prerequisites that are listed
above.
The CloudFormation template first creates a new VPC along with subnets and
security groups. After the networking set-up completes, two Auto Scaling Groups
are created, one for the managers and one for the workers, and the configured
capacity setting is applied. Managers start first and create a quorum using
Raft, then the workers start and join the swarm one at a time. At this point,
the swarm consists of X managers and Y workers, and you
can deploy your applications. See the [deployment](deploy.md) docs for your next
steps.
> To [log into your nodes using SSH](/docker-for-aws/deploy.md#connecting-via-ssh),
> use the `docker` user rather than `root` or `ec2-user`.
If you increase the number of instances running in your worker Auto Scaling
Group (via the AWS console, or updating the CloudFormation configuration), the
new nodes that start up automatically join the swarm.
Elastic Load Balancers (ELBs) are set up to help with routing traffic to your
swarm.
### Logging
Docker for AWS automatically configures logging to Cloudwatch for containers you
run on Docker for AWS. A Log Group is created for each Docker for AWS install,
and a log stream for each container.
The `docker logs` and `docker service logs` commands are not supported on Docker
for AWS when using Cloudwatch for logs. Instead, check container logs in
CloudWatch.
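For example, a sketch of browsing those logs with the CloudWatch Logs CLI; the log group and stream names below are placeholders that depend on your install:
```bash
# List log streams (one per container), then fetch events from a stream
$ aws logs describe-log-streams --log-group-name <your-docker-for-aws-log-group>
$ aws logs get-log-events \
  --log-group-name <your-docker-for-aws-log-group> \
  --log-stream-name <container-log-stream>
```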
### System containers
Each node has a few system containers running on it to help run your
swarm cluster. For everything to run smoothly, keep those
containers running, and don't make any changes. If you make any changes, Docker
for AWS does not work correctly.
### Uninstalling or removing a stack
To uninstall Docker for AWS, log on to the [AWS
Console](https://aws.amazon.com/){: target="_blank" class="_"}, navigate to
**Management Tools -> CloudFormation -> Actions -> Delete Stack**, and select
the Docker stack you want to remove.
![uninstall](img/aws-delete-stack.png)
Stack removal does not remove EBS and EFS volumes created by the cloudstor
volume plugin or the S3 bucket associated with DTR. Those resources must be
removed manually. See the [cloudstor](/docker-for-aws/persistent-data-volumes/#list-or-remove-volumes-created-by-cloudstor)
docs for instructions on removing volumes.

View File

@ -1,156 +0,0 @@
---
description: Load Balancer
keywords: aws load balancer elb
title: Configure the Docker for AWS load balancer
---
{% include d4a_buttons.md %}
## How does it work?
When you create a service, any ports that are exposed with `-p` are automatically exposed through the platform load balancer:
```bash
$ docker service create --name nginx --publish published=80,target=80 nginx
```
This opens up port 80 on the Elastic Load Balancer (ELB) and directs any traffic
on that port to your swarm service.
## How can I configure my load balancer to support SSL/TLS traffic?
Docker uses [Amazon's ACM service](https://aws.amazon.com/certificate-manager/),
which provides free SSL/TLS certificates, and can be used with ELBs. You need to
create a new certificate for your domain, and get the ARN for that certificate.
You add a label to your service to tell swarm that you want to use a given ACM
cert for SSL connections to your service.
### Examples
Start a service and listen on the ELB with ports `80` and `443`. Port `443` is
served using an SSL certificate from ACM, which is referenced by the ARN
described in the service label `com.docker.aws.lb.arn`:
```bash
$ docker service create \
--name demo \
--detach=true \
--publish published=80,target=80 \
--publish published=443,target=80 \
--label com.docker.aws.lb.arn="arn:aws:acm:us-east-1:0123456789:certificate/c02117b6-2b5f-4507-8115-87726f4ab963" \
yourname/your-image:latest
```
By default, when you add an ACM ARN as a label, it listens on port `443`. If you want to change which ports to listen on, you append an `@` symbol and a list of ports you want to expose.
#### Link SSL to port 443
```none
com.docker.aws.lb.arn="arn:..."
```
#### Link SSL to port 444
```none
com.docker.aws.lb.arn="arn:...@444"
```
#### Link SSL to ports 444 and 8080
```none
com.docker.aws.lb.arn="arn:...@444,8080"
```
### More complete examples
Listen for HTTP on port 80 and HTTPS on port 444
```bash
$ docker service create \
--name demo \
--detach=true \
--publish published=80,target=80 \
--publish published=444,target=80 \
--label com.docker.aws.lb.arn="arn:aws:acm:us-east-1:0123456789:certificate/c02117b6-2b5f-4507-8115-87726f4ab963@444" \
yourname/your-image:latest
```
#### SSL listen on ports 443 and 444
```bash
$ docker service create \
--name demo \
--detach=true \
--publish published=80,target=80 \
--publish published=444,target=80 \
--label com.docker.aws.lb.arn="arn:aws:acm:us-east-1:0123456789:certificate/c02117b6-2b5f-4507-8115-87726f4ab963@443,444" \
yourname/your-image:latest
```
#### SSL listen on port 8080
```bash
$ docker service create \
--name demo \
--detach=true \
--publish published=8080,target=80 \
--label com.docker.aws.lb.arn="arn:aws:acm:us-east-1:0123456789:certificate/c02117b6-2b5f-4507-8115-87726f4ab963@8080" \
yourname/your-image:latest
```
### HTTPS vs SSL load balancer protocols
Docker for AWS version 17.07.0 and later also support the `HTTPS` listener protocol when using ACM certificates.
Use the `HTTPS` protocol if your app relies on checking the [X-Forwarded-For](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html) header for resolving the client IP address. The client IP is also available with `SSL` by using the [Proxy Protocol](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html#proxy-protocol), but many apps and app frameworks don't support this.
The only valid options are `HTTPS` and `SSL`. Specifying any other value causes `SSL` to be selected. For backwards compatibility the default protocol is `SSL`.
#### An HTTPS listener on port 443
```none
com.docker.aws.lb.arn="arn:...@HTTPS:443"
```
#### An SSL (TCP) listener on port 443
```none
com.docker.aws.lb.arn="arn:...@443"
```
```none
com.docker.aws.lb.arn="arn:...@SSL:443"
```
#### An HTTPS listener on port 443, and an SSL (TCP) listener on port 8080
```none
com.docker.aws.lb.arn="arn:...@HTTPS:443,8080"
```
#### An SSL (TCP) listener on ports 443 and 8080
Since `BAD` isn't a valid option, it falls back to an SSL (TCP) listener on port 443.
```none
com.docker.aws.lb.arn="arn:...@BAD:443,8080"
```
### Add a CNAME for your ELB
Once you have your ELB set up with the correct listeners and certificates, you
need to add a DNS CNAME record at your DNS provider that points to your ELB.
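As a sketch, the record can also be created with the Route 53 CLI if your DNS is hosted there; the hosted zone ID, record name, and ELB DNS name are placeholders:
```bash
# Create or update a CNAME pointing your domain at the ELB
$ aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"app.example.com","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"my-elb-1234567890.us-east-1.elb.amazonaws.com"}]}}]}'
```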
### ELB SSL limitations
- If you remove the service that has the `com.docker.aws.lb.arn` label, that listener and certificate is removed from the ELB.
- If you edit the ELB config directly from the dashboard, the changes are removed after the next update.
## Can I manually change the ELB configuration?
No. If you make any manual changes to the ELB, they are removed the next time we
update the ELB configuration based on any swarm changes. This is because the
swarm service configuration is the source of record for service ports. If you
add listeners to the ELB manually, they could conflict with what is in swarm,
and cause issues.

View File

@ -1,17 +0,0 @@
---
description: Docker's use of Open Source
keywords: docker, opensource
title: Open source components and licensing
---
Docker for AWS and Azure Editions are built using open source software.
Docker for AWS and Azure Editions distribute some components that are licensed
under the GNU General Public License. You can download the source for these
components [here](https://download.docker.com/opensource/License.tar.gz).
The sources for qemu-img can be obtained
[here](http://wiki.qemu-project.org/download/qemu-2.4.1.tar.bz2). The sources
for the gettext and glib libraries that qemu-img requires were obtained from
[Homebrew](https://brew.sh/) and may be retrieved using `brew install
--build-from-source gettext glib`.

View File

@ -1,255 +0,0 @@
---
description: Persistent data volumes
keywords: aws persistent data volumes
title: Docker for AWS persistent data volumes
---
{% include d4a_buttons.md %}
## What is Cloudstor?
Cloudstor is a modern volume plugin built by Docker. It comes pre-installed and
pre-configured in Docker swarms deployed through Docker for AWS. Docker swarm
mode tasks and regular Docker containers can use a volume created with
Cloudstor to mount a persistent data volume. In Docker for AWS, Cloudstor has
two `backing` options:
- `relocatable` data volumes are backed by EBS.
- `shared` data volumes are backed by EFS.
When you use the Docker CLI to create a swarm service along with the persistent
volumes used by the service tasks, you can create three different types of
persistent volumes:
- Unique `relocatable` Cloudstor volumes mounted by each task in a swarm service.
- Global `shared` Cloudstor volumes mounted by all tasks in a swarm service.
- Unique `shared` Cloudstor volumes mounted by each task in a swarm service.
Examples of each type of volume are described below.
## Relocatable Cloudstor volumes
Workloads running in a Docker service that require access to low latency/high
IOPs persistent storage, such as a database engine, can use a `relocatable`
Cloudstor volume backed by EBS. When you create the volume, you can specify the
type of EBS volume appropriate for the workload (such as `gp2`, `io1`, `st1`,
`sc1`). Each `relocatable` Cloudstor volume is backed by a single EBS volume.
If a swarm task using a `relocatable` Cloudstor volume gets rescheduled to
another node within the same availability zone as the original node where the
task was running, Cloudstor detaches the backing EBS volume from the original
node and attaches it to the new target node automatically.
If the swarm task gets rescheduled to a node in a different availability zone,
Cloudstor transfers the contents of the backing EBS volume to the destination
availability zone using a snapshot, and cleans up the EBS volume in the
original availability zone. To minimize the time necessary to create the
snapshot to transfer data across availability zones, Cloudstor periodically
takes snapshots of EBS volumes to ensure there is never a large number of
writes that need to be transferred as part of the final snapshot when
transferring the EBS volume across availability zones.
Typically the snapshot-based transfer process across availability zones takes
between 2 and 5 minutes unless the work load is write-heavy. For extremely
write-heavy workloads generating several GBs of fresh/new data every few
minutes, the transfer may take longer than 5 minutes. The time required to
snapshot and transfer increases sharply beyond 10 minutes if more than 20 GB of
writes have been generated since the last snapshot interval. A swarm task is
not started until the volume it mounts becomes available.
Sharing/mounting the same Cloudstor volume backed by EBS among multiple tasks
is not a supported scenario and leads to data loss. If you need a Cloudstor
volume to share data between tasks, choose the appropriate EFS backed `shared`
volume option. Using a `relocatable` Cloudstor volume backed by EBS is
supported on all AWS regions that support EBS. The default `backing` option is
`relocatable` if EFS support is not selected during setup/installation or if
EFS is not supported in a region.
## Shared Cloudstor volumes
When multiple swarm service tasks need to share data in a persistent storage
volume, you can use a `shared` Cloudstor volume backed by EFS. Such a volume and
its contents can be mounted by multiple swarm service tasks without the risk of
data loss, since EFS makes the data available to all swarm nodes over NFS.
When swarm tasks using a `shared` Cloudstor volume get rescheduled from one node
to another within the same or across different availability zones, the
persistent data backed by EFS volumes is always available. `shared` Cloudstor
volumes only work in those AWS regions where EFS is supported. If EFS
Support is selected during setup/installation, the default "backing" option for
Cloudstor volumes is set to `shared` so that EFS is used by default.
`shared` Cloudstor volumes backed by EFS (or even EFS MaxIO) may not be ideal
for workloads that require very low latency and high IOPSs. For performance
details of EFS backed `shared` Cloudstor volumes, see [the AWS performance
guidelines](http://docs.aws.amazon.com/efs/latest/ug/performance.html).
## Use Cloudstor
After initializing or joining a swarm on Docker for AWS, connect to any swarm
manager using SSH. Verify that the CloudStor plugin is already installed and
configured for the stack or resource group:
```bash
$ docker plugin ls
ID NAME DESCRIPTION ENABLED
f416c95c0dcc cloudstor:aws cloud storage plugin for Docker true
```
The following examples show how to create swarm services that require data
persistence using the `--mount` flag and specifying Cloudstor as the volume
driver.
### Share the same volume among tasks using EFS
In those regions where EFS is supported and EFS support is enabled during
deployment of the Cloud Formation template, you can use `shared` Cloudstor
volumes to share access to persistent data across all tasks in a swarm service
running in multiple nodes, as in the following example:
```bash
$ docker service create \
--replicas 5 \
--name ping1 \
--mount type=volume,volume-driver=cloudstor:aws,source=sharedvol1,destination=/shareddata \
alpine ping docker.com
```
All replicas/tasks of the service `ping1` share the same persistent volume
`sharedvol1` mounted at `/shareddata` path within the container. Docker takes
care of interacting with the Cloudstor plugin to ensure that EFS is mounted on
all nodes in the swarm where service tasks are scheduled. Your application needs
to be designed to ensure that tasks do not write concurrently on the same file
at the same time, to protect against data corruption.
You can verify that the same volume is shared among all the tasks by logging
into one of the task containers, writing a file under `/shareddata/`, and
logging into another task container to verify that the file is available there
as well.
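For instance, a quick check along those lines; the container IDs are placeholders, and each `docker exec` may need to be run on the node where that task is scheduled:
```bash
# Write a file from one task, then read it from another task sharing the same volume
$ docker exec -it <task1-container-id> sh -c 'echo hello > /shareddata/test.txt'
$ docker exec -it <task2-container-id> cat /shareddata/test.txt
hello
```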
The only option available for EFS is `perfmode`. You can set `perfmode` to
`maxio` for high IO throughput:
{% raw %}
```bash
$ docker service create \
--replicas 5 \
--name ping3 \
--mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol5,destination=/mydata,volume-opt=perfmode=maxio \
alpine ping docker.com
```
{% endraw %}
You can also create `shared` Cloudstor volumes using the
`docker volume create` CLI:
```bash
$ docker volume create -d "cloudstor:aws" --opt backing=shared mysharedvol1
```
### Use a unique volume per task using EBS
If EBS is available and enabled, you can use a templatized notation with the
`docker service create` CLI to create and mount a unique `relocatable` Cloudstor
volume backed by a specified type of EBS for each task in a swarm service. New
EBS volumes typically take a few minutes to be created. Besides
`backing=relocatable`, the following volume options are available:
| Option | Description |
|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `size` | Required parameter that indicates the size of the EBS volumes to create in GB. |
| `ebstype` | Optional parameter that indicates the type of the EBS volumes to create (`gp2`, `io1`, `st1`, `sc1`). The default `ebstype` is Standard/Magnetic. For further details about EBS volume types, see the [EBS volume type documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). |
| `iops` | Required if `ebstype` specified is `io1`, which enables provisioned IOPs. Needs to be in the appropriate range as required by EBS. |
Example usage:
{% raw %}
```bash
$ docker service create \
--replicas 5 \
--name ping3 \
--mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata,volume-opt=backing=relocatable,volume-opt=size=25,volume-opt=ebstype=gp2 \
alpine ping docker.com
```
{% endraw %}
The above example creates and mounts a distinct Cloudstor volume backed by 25 GB EBS
volumes of type `gp2` for each task of the `ping3` service. Each task mounts its
own volume at `/mydata/` and all files under that mountpoint are unique to the
task mounting the volume.
It is highly recommended that you use the `.Task.Slot` template to ensure that
task `N` always gets access to volume `N`, no matter which node it is executing
on/scheduled to. The total number of EBS volumes in the swarm should be kept
below `12 * (minimum number of nodes that are expected to be present at any
time)` to ensure that EC2 can properly attach EBS volumes to a node when another
node fails. Use EBS volumes only for those workloads where low latency and high
IOPs is absolutely necessary.
You can also create EBS backed volumes using the `docker volume create` CLI:
```bash
$ docker volume create \
-d "cloudstor:aws" \
--opt ebstype=io1 \
--opt size=25 \
--opt iops=1000 \
--opt backing=relocatable \
mylocalvol1
```
Sharing the same `relocatable` Cloudstor volume across multiple tasks of a
service or across multiple independent containers is not supported when
`backing=relocatable` is specified. Attempting to do so results in IO errors.
### Use a unique volume per task using EFS
If EFS is available and enabled, you can use templatized notation to create and
mount a unique EFS-backed volume into each task of a service. This is useful if
you already have too many EBS volumes or want to reduce the amount of time it
takes to transfer volume data across availability zones.
{% raw %}
```bash
$ docker service create \
--replicas 5 \
--name ping2 \
--mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
alpine ping docker.com
```
{% endraw %}
Here, each task has mounted its own volume at `/mydata/` and the files under
that mountpoint are unique to that task.
When a task with only `shared` EFS volumes mounted is rescheduled on a different
node, Docker interacts with the Cloudstor plugin to create and mount the volume
corresponding to the task on the node where the task is rescheduled. Since data
on EFS is available to all swarm nodes and can be quickly mounted and accessed,
the rescheduling process for tasks using EFS-backed volumes typically takes a
few seconds, as compared to several minutes when using EBS.
It is highly recommended that you use the `.Task.Slot` template to ensure that
task `N` always gets access to volume `N` no matter which node it is executing
on/scheduled to.
### List or remove volumes created by Cloudstor
You can use `docker volume ls` on any node to enumerate all volumes created by
Cloudstor across the swarm.
You can use `docker volume rm [volume name]` to remove a Cloudstor volume from
any node. If you remove a volume from one node, make sure it is not being used
by another active node, since those tasks/containers in another node lose
access to their data.
Before deleting a Docker4AWS stack through CloudFormation, you should remove all
`relocatable` Cloudstor volumes using `docker volume rm` from within the stack.
EBS volumes corresponding to `relocatable` Cloudstor volumes are not
automatically deleted as part of the CloudFormation stack deletion. To list any
`relocatable` Cloudstor volumes and delete them after removing the Docker4AWS
stack where the volumes were created, go to the AWS portal or CLI and set a
filter with tag key set to `StackID` and the tag value set to the md5 hash of
the CloudFormation Stack ID (typical format:
`arn:aws:cloudformation:us-west-2:ID:stack/swarmname/GUID`).
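A sketch of such a filter with the AWS CLI; the tag value is a placeholder for the md5 hash described above:
```bash
# List EBS volume IDs created by Cloudstor for a given stack
$ aws ec2 describe-volumes \
  --filters Name=tag:StackID,Values=<md5-of-stack-id> \
  --query 'Volumes[].VolumeId'
```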

View File

@ -1,145 +0,0 @@
---
description: Release notes
keywords: aws, amazon, iaas, release, edge, stable
title: Docker for AWS release notes
---
{% include d4a_buttons.md %}
> **Note**: Starting with 18.02.0-CE, the EFS encryption option has been removed to prevent the [recreation of the EFS volume](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html){: target="_blank" class="_"}.
## Stable channel
{{aws_blue_latest}}
### 18.09.2
Release date: 2/24/2019
- Docker Engine upgraded to [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2){: target="_blank" class="_"}
### 18.06.1 CE
Release date: 8/24/2018
- Docker Engine upgraded to [Docker 18.06.1 CE](https://github.com/docker/docker-ce/releases/tag/v18.06.1-ce){: target="_blank" class="_"}
### 18.03 CE
Release date: 3/21/2018
- Docker Engine upgraded to [Docker 18.03.0 CE](https://github.com/docker/docker-ce/releases/tag/v18.03.0-ce){: target="_blank" class="_"}
- [Elastic Network Interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html){: target="_blank" class="_"} enabled in the AMI kernel
### 17.12.1 CE
Release date: 3/1/2018
- Docker Engine upgraded to [Docker 17.12.1 CE](https://github.com/docker/docker-ce/releases/tag/v17.12.1-ce){: target="_blank" class="_"}
- Added baked-in rules for ECR IAM role
### 17.12 CE
Release date: 1/9/2018
- Docker Engine upgraded to [Docker 17.12.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.12.0-ce){: target="_blank" class="_"}
- Kernel patch to mitigate Meltdown attacks (CVE-2017-5754) and enable KPTI
> **Note** There was an issue in LinuxKit that prevented containers from [starting after a machine reboot](https://github.com/moby/moby/issues/36189){: target="_blank" class="_"}.
### 17.09 CE
Release date: 10/6/2017
- Docker Engine upgraded to [Docker 17.09.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.09.0-ce){: target="_blank" class="_"}
- CloudStor EBS updates
- Moby mounts for early reboot support
### 17.06.1 CE
Release date: 08/17/2017
**New**
- Docker Engine upgraded to [Docker 17.06.1 CE](https://github.com/docker/docker-ce/releases/tag/v17.06.1-ce){: target="_blank" class="_"}
- Improvements to CloudStor support
- Added SSL support at the LB level
### 17.06.0 CE
Release date: 06/28/2017
**New**
- Docker Engine upgraded to [Docker 17.06.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.06.0-ce){: target="_blank" class="_"}
- Fixed an issue with the load balancer controller that caused the ELB health check to fail.
- Added VPCID output when a VPC is created
- Added CloudStor support (EFS (in regions that support EFS), and EBS) for [persistent storage volumes](persistent-data-volumes.md)
- Added CloudFormation parameter to enable/disable CloudStor
- Changed the AutoScaleGroup Manager max size to 6, so that it correctly upgrades with 5 managers
- Added lambda support for Mumbai
- Removed the ELB Name to allow for longer stack names
- Added i3 EC2 instance types
- [Bring your own VPC] Added a VPC CIDR Parameter
### 17.03.1 CE
Release date: 03/30/2017
**New**
- Docker Engine upgraded to [Docker 17.03.1 CE](https://github.com/docker/docker/blob/master/CHANGELOG.md){: target="_blank" class="_"}
- Updated AZ for Sao Paulo
### 17.03.0 CE
Release date: 03/01/2017
**New**
- Docker Engine upgraded to [Docker 17.03.0 CE](https://github.com/docker/docker/blob/master/CHANGELOG.md){: target="_blank" class="_"}
- Added r4 EC2 instance types
- Added `ELBDNSZoneID` output to make it easier to interact with Route53
## Edge channel
### 18.01 CE
{{aws_blue_edge}}
**New**
Release date: 1/18/2018
- Docker Engine upgraded to [Docker 18.01.0 CE](https://github.com/docker/docker-ce/releases/tag/v18.01.0-ce){: target="_blank" class="_"}
### 17.10 CE
**New**
Release date: 10/18/2017
- Docker Engine upgraded to [Docker 17.10.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.10.0-ce){: target="_blank" class="_"}
- Editions containers log to stdout instead of to disk, preventing HDD fill-up
## Template archive
If you are looking for templates from older releases, check out the [template archive](/docker-for-aws/archive.md).
## Enterprise Edition
[Docker Enterprise Edition Lifecycle](https://success.docker.com/Policies/Maintenance_Lifecycle){: target="_blank" class="_"}
[Deploy Docker Enterprise Edition (EE) for AWS](https://hub.docker.com/editions/enterprise/docker-ee-aws?tab=description){: target="_blank" class="button outline-btn blank_"}
### 17.06 EE
- Docker engine 17.06 EE
- For Std/Adv, external logging has been removed, as it is now handled by [UCP](https://docs.docker.com/datacenter/ucp/2.0/guides/configuration/configure-logs/){: target="_blank" class="_"}
- UCP 2.2.3
- DTR 2.3.3
### 17.03 EE
- Docker engine 17.03 EE
- UCP 2.1.5
- DTR 2.2.7

@ -1,55 +0,0 @@
---
description: Scaling your stack
keywords: aws, amazon, iaas, tutorial
title: Modify Docker install on AWS
---
## Scaling workers
You can scale the worker count using the AWS Auto Scaling group. Docker
automatically joins new instances to the swarm and removes instances that are
scaled down.
There are currently two ways to scale your worker group. You can "update" your
stack and change the number of workers in the CloudFormation template
parameters, or you can manually update the Auto Scaling group in the EC2
section of the AWS console.
Changing manager count live is _not_ currently supported.
### AWS console
Log in to the AWS console, and go to the EC2 dashboard. On the lower left-hand
side, select the "Auto Scaling Groups" link.
Look for the Auto Scaling group with a name like `$STACK_NAME-NodeASG-*`, where
`$STACK_NAME` is the name of the stack you created when filling out the
CloudFormation template for Docker for AWS. Once you find it, select the
checkbox next to the name, then click the "Edit" button on the lower detail
pane.
![edit autoscale settings](img/autoscale_update.png)
Change the "Desired" field to the size of the worker pool that you would like, and hit "Save".
![save autoscale settings](img/autoscale_save.png)
This takes a few minutes and adds the new workers to your swarm
automatically. To lower the number of workers, update "Desired" again with the
lower number, and the worker pool shrinks until it reaches the new size.
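The same change can also be made with the AWS CLI. The sketch below assumes the
CLI is configured for the right account and region; the Auto Scaling group name
is a placeholder you would look up first:
```bash
# Find the worker Auto Scaling group created by your stack
$ aws autoscaling describe-auto-scaling-groups \
    --query "AutoScalingGroups[?contains(AutoScalingGroupName, 'NodeASG')].AutoScalingGroupName"

# Set the desired worker count; new instances join the swarm automatically
$ aws autoscaling set-desired-capacity \
    --auto-scaling-group-name mystack-NodeASG-XXXXXXXX \
    --desired-capacity 5
```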
### CloudFormation update
Go to the CloudFormation management page, and click the checkbox next to the
stack you want to update. Then click the action button at the top and select
"Update Stack".
![update stack on CloudFormation page](img/cloudformation_update.png)
Pick "Use current template", and then click "Next". Fill out the same parameters
you have specified before, but this time, change your worker count to the new
count, click "Next". Answer the rest of the form questions. CloudFormation
shows you a preview of the changes. Review the changes and if they
look good, click "Update". CloudFormation changes the worker pool size to
the new value you specified. It takes a few minutes (longer for a larger
increase / decrease of nodes), but when complete the swarm has
the new worker pool size.
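The same stack update can be scripted with the AWS CLI by reusing the current
template and overriding only the worker count. This is a sketch: the stack name
is a placeholder, and it assumes the template's worker and manager count
parameters are named `ClusterSize` and `ManagerSize` (check the parameter names
your template actually defines):
```bash
$ aws cloudformation update-stack \
    --stack-name mystack \
    --use-previous-template \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=ClusterSize,ParameterValue=5 \
                 ParameterKey=ManagerSize,UsePreviousValue=true
# Repeat ParameterKey=<name>,UsePreviousValue=true for each remaining template parameter.
```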

@ -1,45 +0,0 @@
---
description: Upgrading your stack
keywords: aws, amazon, iaas, tutorial
title: Docker for AWS upgrades
---
To upgrade, apply a new version of the AWS CloudFormation template that powers
Docker for AWS. Depending on changes in the next version, an upgrade involves:
* Changing the AMI backing manager and worker nodes (the Docker engine
ships in the AMI)
* Upgrading service containers
* Changing the resource setup in the VPC that hosts Docker for AWS
## Prerequisites
* We recommend only attempting upgrades of swarms with at least 3 managers.
A 1-manager swarm can't maintain quorum during the upgrade.
* You can only upgrade one version at a time. Skipping a version during
an upgrade is not supported. Downgrades are not tested.
## Upgrading
New releases are announced on [Release Notes](release-notes.md) page.
To initiate an update, use either the AWS Console or the AWS CLI to start a
stack update. Use the S3 template URL for the new release and complete the
update wizard. This initiates a rolling upgrade of the Docker swarm, and
service state is maintained during and after the upgrade. Appropriately
scaled services should not experience downtime during an upgrade.
![Upgrade in AWS console](img/cloudformation_update.png)
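From the AWS CLI, the update can be initiated with the same kind of
`update-stack` call, pointing `--template-url` at the S3 template published for
the new release. This is a sketch; the stack name is a placeholder and the
template URL comes from the release announcement:
```bash
$ aws cloudformation update-stack \
    --stack-name mystack \
    --template-url <S3-template-URL-for-the-new-release> \
    --parameters ParameterKey=<name>,UsePreviousValue=true \
    --capabilities CAPABILITY_IAM
# Pass UsePreviousValue=true for every existing parameter you want to keep unchanged.
```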
Single containers started directly (for example, with `docker run -d`) are
**not** preserved during an upgrade. This is because they're not Docker Swarm
objects, but are known only to the individual Docker engines.
> **Note** Current Docker versions, up to 18.02.0-ce, will [recreate the EFS volume](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html){: target="_blank" class="_"} when performing a stack upgrade.
## Changing instance sizes and other template parameters
In addition to upgrading Docker for AWS from one version to the next you can
also use the AWS Update Stack feature to change template parameters such as
worker count and instance type. Changing manager count is **not** supported.

@ -1,9 +0,0 @@
---
description: Why Docker for AWS?
keywords: aws, amazon, iaas, why
title: Why Docker for AWS?
---
{% assign cloudprovider_log_dest = 'CloudWatch' %}
{% assign cloudprovider = 'AWS' %}
{% include why_d4a.md %}

@ -1,25 +0,0 @@
---
description: Template Archive
keywords: azure, microsoft, iaas, release, edge, stable
title: Docker for Azure template archive
---
Azure Resource Manager (ARM) templates for past releases. If you are just
starting out, use the latest version.
## CE (Stable)
### 17.06.2
https://download.docker.com/azure/stable/17.06.2/Docker.tmpl
## EE
### 17.06.2
* Docker EE Basic: https://download.docker.com/azure/17.06/17.06.2/Docker-EE.tmpl
* Docker EE Standard and Advanced: https://download.docker.com/azure/17.06/17.06.2/Docker-DDC.tmpl
### 17.03.2
https://download.docker.com/azure/17.03/17.03.2/Docker.tmpl

@ -1,175 +0,0 @@
---
description: Deploying Apps on Docker for Azure
keywords: azure, microsoft, iaas, deploy
title: Deploy your app on Docker for Azure
---
## Connecting to your manager nodes using SSH
This section walks you through connecting to your installation and deploying
applications.
First, you obtain the public IP address for a manager node. Any manager
node can be used for administrating the swarm.
##### Manager Public IP and SSH ports on Azure
Once you've deployed Docker on Azure, go to the "Outputs" section of the
resource group deployment.
![SSH targets](img/sshtargets.png)
The "SSH Targets" output is a URL to a blade that describes the IP address
(common across all the manager nodes) and the SSH port (unique for each manager
node) that you can use to log in to each manager node.
![Swarm managers](img/managers.png)
Obtain the public IP and the unique SSH port for a manager node as instructed
above, and use the provided SSH key to begin administrating your swarm:
$ ssh -i <path-to-ssh-key> -p <ssh-port> docker@<ip>
Welcome to Docker!
Once you are logged into the container you can run Docker commands on the swarm:
$ docker info
$ docker node ls
You can also tunnel the Docker socket over SSH to remotely run commands on the cluster (requires [OpenSSH 6.7](https://lwn.net/Articles/609321/) or later):
$ ssh -i <path-to-ssh-key> -p <ssh-port> -fNL localhost:2374:/var/run/docker.sock docker@<ssh-host>
$ docker -H localhost:2374 info
If you don't want to pass `-H` when using the tunnel, you can set the `DOCKER_HOST` environment variable to point to the localhost tunnel opening.
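For example, with the tunnel above running, you might set it once per shell session:
```bash
$ export DOCKER_HOST=tcp://localhost:2374
$ docker info
$ docker node ls
```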
## Connecting to your Linux worker nodes using SSH
The Linux worker nodes have SSH enabled. SSH access is not possible to the worker nodes from the public
Internet directly. To access the worker nodes, you need to first connect to a
manager node (see above) and then `ssh` to the worker node, over the private
network. Make sure you have SSH agent forwarding enabled (see below). If you run
the `docker node ls` command you can see the full list of nodes in your swarm.
You can then `ssh docker@<worker-host>` to get access to that node.
##### Configuring SSH agent forwarding
SSH agent forwarding allows you to forward along your ssh keys when connecting from one node to another. This eliminates the need for installing your private key on all nodes you might want to connect from.
You can use this feature to SSH into worker nodes from a manager node without
installing keys directly on the manager.
If you haven't added your SSH key to the `ssh-agent`, you need to do this first.
To see the keys in the agent already, run:
```
$ ssh-add -L
```
If you don't see your key, add it like this.
```
$ ssh-add ~/.ssh/your_key
```
On Mac OS X, the `ssh-agent` forgets this key once it gets restarted, but you can import your SSH key into your Keychain like this so that your key survives restarts.
```
$ ssh-add -K ~/.ssh/your_key
```
You can then enable SSH agent forwarding per session using the `-A` flag of the ssh command.
To connect to the manager:
```
$ ssh -A docker@<manager ip>
```
To always have it turned on for a given host, you can edit your ssh config file
(`/etc/ssh_config`, `~/.ssh/config`, etc) to add the `ForwardAgent yes` option.
Example configuration:
```
Host manager0
HostName <manager ip>
ForwardAgent yes
```
To SSH in to the manager with the above settings:
```
$ ssh docker@manager0
```
## Connecting to your Windows worker nodes using RDP
The Windows worker nodes have RDP enabled. RDP access is not possible to the worker nodes from the public
Internet. To access the worker nodes using RDP, you need to first connect to a
manager node (see above) over `ssh`, establish an SSH tunnel, and then use RDP to connect to the worker node over the SSH tunnel.
To get started, first log in to a manager node and determine the private IP address of the Windows worker VM:
```
$ docker node inspect <windows-worker-node-id> | jq -r ".[0].Status.Addr"
```
Next, on your local machine, establish the SSH tunnel to the manager for the RDP connection to the worker. The local port can be any free port, typically a high value such as 9001.
```
$ ssh -L <local-port>:<windows-worker-ip>:3389 -p <manager-ssh-port> docker@<manager-ip>
```
Finally, you can use an RDP client on your local machine to connect to `localhost:<local-port>`; the connection is forwarded to the RDP server running on the Windows worker node over the SSH tunnel created above.
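Putting it together, suppose the worker's private IP came back as 10.0.0.5, the manager's SSH port is 50000, and you pick 9001 as the local port; the tunnel might look like this sketch (all values are placeholders):
```bash
$ ssh -i <path-to-ssh-key> -p 50000 -L 9001:10.0.0.5:3389 docker@<manager-ip>
# Then point your RDP client at localhost:9001
```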
## Running apps
You can now start creating containers and services.
$ docker run hello-world
You can run websites too. Ports exposed with `--publish` are automatically exposed through the platform load balancer:
$ docker service create --name nginx --publish published=80,target=80 nginx
Once up, find the `DefaultDNSTarget` output in either the AWS or Azure portals to access the site.
### Execute docker commands in all swarm nodes
There are cases (such as installing a volume plugin) where a docker command needs to be executed on all the nodes across the cluster. You can use the `swarm-exec` tool to achieve that.
Usage: `swarm-exec {Docker command}`
The following example installs a test plugin on all the nodes in the cluster:
`swarm-exec docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin`
This tool internally makes use of a Docker global-mode service that runs a task on each of the nodes in the cluster. This task in turn executes your docker command. The global-mode service also guarantees that when a new node is added to the cluster, or during upgrades, a new task is executed on that node, so the docker command runs there automatically.
### Docker Stack deployment
To deploy complex multi-container apps, you can use the `docker stack deploy` command. You can either deploy a Compose file from your machine over an SSH tunnel, or copy the `docker-compose.yml` file to a manager node, for example via `scp`. You can then SSH into the manager node and run `docker stack deploy` with the `--compose-file` or `-c` option. See the `docker stack deploy` documentation for the list of available options. If you have multiple manager nodes, make sure you are logged in to the one that has the copy of the stack file.
For example:
```bash
docker stack deploy --compose-file docker-compose.yml myapp
```
See [Docker voting app](https://github.com/docker/example-voting-app) for a good sample app to test stack deployments.
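If you copy the stack file to a manager first, the end-to-end flow might look like this sketch (key path, port, and host are placeholders):
```bash
# From your local machine: copy the stack file to a manager, then log in
$ scp -P <ssh-port> -i <path-to-ssh-key> docker-compose.yml docker@<manager-ip>:~/
$ ssh -i <path-to-ssh-key> -p <ssh-port> docker@<manager-ip>

# On the manager: deploy the stack
$ docker stack deploy --compose-file docker-compose.yml myapp
```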
By default, apps deployed with stacks do not have ports publicly exposed. Update port mappings for services, and Docker automatically wires up the underlying platform load balancers:
docker service update --publish-add published=80,target=80 <example-service>
### Images in private repos
To create swarm services using images in private repos, first make sure you're authenticated and have access to the private repo, then create the service with the `--with-registry-auth` flag (the example below assumes you're using Docker Hub):
docker login
...
docker service create --with-registry-auth user/private-repo
...
This causes the swarm to cache the registry credentials and use them when creating containers for the service.

@ -1,125 +0,0 @@
---
description: Frequently asked questions
keywords: azure faqs
title: Docker for Azure frequently asked questions (FAQ)
toc_max: 2
---
## Stable and edge channels
Two different download channels are available for Docker for Azure:
* The **stable channel** provides a general availability release-ready deployment
for a fully baked and tested, more reliable cluster. The stable version of Docker
for Azure comes with the latest released version of Docker Engine. The release
schedule is synched with Docker Engine releases and hotfixes. On the stable
channel, you can select whether to send usage statistics and other data.
* The **edge channel** provides a deployment with new features we are working on,
but is not necessarily fully tested. It comes with the experimental version of
Docker Engine. Bugs, crashes, and issues are more likely to occur with the edge
cluster, but you get a chance to preview new functionality, experiment, and provide
feedback as the deployment evolves. Releases are typically more frequent than for
stable, often one or more per month. Usage statistics and crash reports are sent
by default. You do not have the option to disable this on the edge channel.
## Can I use my own VHD?
No, at this time we only support the default Docker for Azure VHD.
## Can I specify the type of Storage Account I use for my VM instances?
Not at this time, but it is on our roadmap for future releases.
## Which Azure regions does Docker for Azure work with?
Docker for Azure should work with all supported Azure Marketplace regions.
## Where are my container logs?
All container logs are aggregated within the `xxxxlog` storage account.
## Where do I report problems or bugs?
Search for existing issues, or create a new one, within the [Docker for Azure](https://github.com/docker/for-azure) GitHub repositories.
In Azure, if your resource group is misbehaving, run the following diagnostic tool from one of the managers - this collects your docker logs and sends them to Docker:
```bash
$ docker-diagnose
OK hostname=manager1
OK hostname=worker1
OK hostname=worker2
Done requesting diagnostics.
Your diagnostics session ID is 1234567890-xxxxxxxxxxxxxx
Please provide this session ID to the maintainer debugging your issue.
```
> **Note**: Your output may be slightly different from the above, depending on your swarm configuration.
## Metrics
Docker for Azure sends anonymized minimal metrics to Docker (heartbeat). These metrics are used to monitor adoption and are critical to improve Docker for Azure.
## How do I run administrative commands?
By default, when you SSH into a manager, you are logged in as the regular user `docker`. It is possible, however, to run commands with elevated privileges using `sudo`.
For example, to ping one of the nodes after finding its IP address (such as 10.0.0.4) in the Azure portal, you could run:
```bash
$ sudo ping 10.0.0.4
```
> **Note**: Access to Docker for Azure and Azure happens through a shell container that itself runs on Docker.
## What are the Editions containers running after deployment?
In order for our editions to deploy properly and for load balancer integrations to happen, we run a few containers. They are as follows:
| Container name | Description |
|---|---|
| `init` | Sets up the swarm and makes sure that the stack came up properly (checks manager and worker count).|
| `agent` | This is our shell/ssh container. When you SSH into an instance, you're actually in this container.|
| `meta` | Assists in creating the swarm cluster, giving privileged instances the ability to join the swarm.|
| `l4controller` | Listens for ports exposed at the docker CLI level and opens them in the load balancer.|
| `logger` | Our log aggregator. This allows us to send all docker logs to the storage account.|
## What are the different Azure Regions?
All regions can be found here: [Microsoft Azure Regions](https://azure.microsoft.com/en-us/regions/).
An excerpt of the above regions to use when you create your service principal are:
```none
australiacentral
australiacentral2
australiaeast
australiasoutheast
brazilsouth
canadacentral
canadaeast
centralindia
centralus
eastasia
eastus
eastus2
francecentral
francesouth
japaneast
japanwest
koreacentral
koreasouth
northcentralus
northeurope
southcentralus
southeastasia
southindia
uksouth
ukwest
usgoviowa
usgovvirginia
westcentralus
westeurope
westindia
westus
westus2
```

@ -1,145 +0,0 @@
---
description: Setup & Prerequisites
keywords: azure, microsoft, iaas, tutorial
title: Docker for Azure setup & prerequisites
redirect_from:
- /engine/installation/azure/
- /datacenter/install/azure/
---
{% include d4a_buttons.md %}
## Docker Enterprise Edition (EE) for Azure
This deployment is fully baked and tested, and comes with the latest Enterprise Edition version of Docker. <br/>This release is maintained and receives <strong>security and critical bugfixes for one year</strong>.
[Deploy Docker Enterprise Edition (EE) for Azure](https://hub.docker.com/editions/enterprise/docker-ee-azure?tab=description){: target="_blank" class="button outline-btn _blank_"}
## Docker Community Edition (CE) for Azure
### Quickstart
If your account has the [proper permissions](#prerequisites), you can generate the [Service Principal](#service-principal) and
then bootstrap Docker for Azure using Azure Resource Manager.
{{azure_blue_latest}}
### Prerequisites
- Access to an Azure account with admin privileges
- SSH key that you want to use when accessing your completed Docker install on Azure
### Configuration
Docker for Azure is installed with an Azure template that configures Docker in
swarm mode, running on VMs backed by a custom virtual hard drive (VHD). There
are two ways you can deploy Docker for Azure. You can use the Azure Portal
(browser based), or use the Azure CLI. Both have the following configuration
options.
#### Configuration options
##### Manager count
The number of Managers in your swarm. You can pick either 1, 3 or 5 managers. We only recommend 1 manager for testing and dev setups. There are no failover guarantees with 1 manager — if the single manager fails the swarm goes down as well. Additionally, upgrading single-manager swarms is not currently guaranteed to succeed.
We recommend at least 3 managers, and if you have a lot of workers, you should pick 5 managers.
##### Manager VM size
The VM type for your manager nodes. The larger your swarm, the larger the VM size you should use.
##### Worker VM size
The VM type for your worker nodes.
##### Worker count
The number of workers you want in your swarm (1-100).
#### Service principal
A
[Service Principal](https://azure.microsoft.com/en-us/documentation/articles/active-directory-application-objects/)
is required to set up Docker for Azure. The Service Principal is used to invoke Azure APIs as you scale the number of nodes up
and down or deploy apps on your swarm that require configuration of the Azure Load Balancer. Docker provides a
containerized helper script called `docker4x/create-sp-azure` to help you create the Service Principal.
1. On a Linux machine, download the latest version of `docker4x/create-sp-azure` to your local environment:
```bash
docker pull docker4x/create-sp-azure:latest
```
2. Run the `sp-azure` script with the following arguments:
```bash
$ docker run -ti docker4x/create-sp-azure sp-name [rg-name rg-region]
...
Your access credentials =============================
AD App ID: <app-id>
AD App Secret: <secret>
AD Tenant ID: <tenant-id>
```
If you have multiple Azure subscriptions, make sure to create the
Service Principal with the subscription ID that you use
to deploy Docker for Azure. The arguments are provided below.
| Argument | Description | Example values |
|----------|-------------|---------|
| `sp-name` | The name of the authentication app that the script creates with Azure. The name is not important. Choose something recognizable in the Azure portal. | `sp1` |
| `rg-name` | The name of the new resource group to be created to deploy the resources (VMs, networks, storage accounts) associated with the swarm. The Service Principal is scoped to this resource group. Specify this when deploying Docker Community Edition for Azure. Do not specify this when deploying Docker Enterprise Edition for Azure. | `swarm1` |
| `rg-region` | The name of Azure's region/location where the resource group is to be created. This needs to be one of the regions supported by Azure. Specify this when deploying Docker Community Edition for Azure. Do not specify this when deploying Docker Enterprise Edition for Azure. | `westus`, `centralus`, `eastus`. See our [FAQs](/docker-for-azure/faqs.md#what-are-the-different-azure-regions) for a list of regions. |
If you do not supply the `rg-name` and `rg-region` here, you are prompted
for that information each time you create a new service. The resource group
is created automatically and services are scoped to that resource
group.
If the script fails, it may be because your Azure user account does not have
sufficient privileges. Contact your Azure administrator.
When setting up the Azure Resource Manager (ARM) template, you are prompted
for the App ID (a UUID) and the app secret. If you are deploying Docker
Community Edition for Azure and specified the resource group name and location
parameters, choose the option to deploy the template into an **existing resource
group** and pass the same name and region/location that you used when running
the `create-sp-azure` helper script.
<img src="img/service-principal.png" />
#### SSH Key
Docker for Azure uses SSH for accessing the Docker swarm once it's deployed. During setup, you are prompted for an SSH public key. If you don't have an SSH key, you can generate one with `puttygen` or `ssh-keygen`. You only need the public key component to set up Docker for Azure. Here's how to get the public key from a `.pem` file:
ssh-keygen -y -f my-key.pem
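For example, generating a new key pair with `ssh-keygen` and printing the public part to paste into the template might look like this (the file name is only a suggestion):
```bash
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/docker-for-azure
$ cat ~/.ssh/docker-for-azure.pub
```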
#### Install with the CLI
You can also invoke the Docker for Azure template from the
[Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli):
The
[Docker for Azure Template](https://download.docker.com/azure/stable/Docker.tmpl)
provides default values for the number and type of manager and worker nodes,
but you may need to provide the following values:
- AppID
- AppSecret
- Public SSH Key
Below is an example of how to use the CLI. Make sure you populate all requested parameter values.
The command below assumes there is a resource group called `docker-resource-group` present. This resource group can be created
- Via the Azure Portal web interface
- Via the Azure CLI (`az group create --name docker-resource-group --location <region>`)
- Via the `docker4x/create-sp-azure` container mentioned above.
If you use the AppID / AppSecret from the `docker4x/create-sp-azure` helper script, it's important that it was created for the same resource group.
```bash
$ az group deployment create --resource-group docker-resource-group --name docker.template --template-uri https://download.docker.com/azure/stable/Docker.tmpl
```
Parameters can be provided interactively, on the command line, or via a parameters file. For more info on how to use the Azure CLI, visit the [Deploy resources with Resource Manager templates and Azure CLI](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy-cli) page.
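As a sketch of the parameters-file approach, you can keep the App ID, App Secret, and SSH public key in a local JSON file and pass it with `--parameters`; the file name is arbitrary, and the parameter names inside it must match the ones the template actually defines:
```bash
$ az group deployment create \
    --resource-group docker-resource-group \
    --template-uri https://download.docker.com/azure/stable/Docker.tmpl \
    --parameters @docker-azure-parameters.json
```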

@ -1,11 +0,0 @@
---
description: Docker's use of Open Source
keywords: docker, opensource
title: Open source components and licensing
---
Docker for AWS and Azure Editions are built using open source software.
Docker for AWS and Azure Editions distribute some components that are licensed under the GNU General Public License. You can download the source for these components [here](https://download.docker.com/opensource/License.tar.gz).
The sources for qemu-img can be obtained [here](http://wiki.qemu-project.org/download/qemu-2.4.1.tar.bz2). The sources for the gettext and glib libraries that qemu-img requires were obtained from [Homebrew](https://brew.sh/) and may be retrieved using `brew install --build-from-source gettext glib`.

@ -1,156 +0,0 @@
---
description: Persistent data volumes
keywords: azure persistent data volumes
title: Docker for Azure persistent data volumes
---
{% include d4a_buttons.md %}
## What is Cloudstor?
Cloudstor is a modern volume plugin built by Docker. It comes pre-installed and
pre-configured in Docker Swarms deployed on Docker for Azure. Docker swarm mode
tasks and regular Docker containers can use a volume created with Cloudstor to
mount a persistent data volume. The volume stays attached to the swarm tasks no
matter which swarm node they get scheduled on or migrated to. Cloudstor relies
on shared storage infrastructure provided by Azure (specifically File Storage
shares exposed over SMB) to allow swarm tasks to create/mount their persistent
volumes on any node in the swarm.
> **Note**: Direct attached storage, which is used to satisfy very low latency /
> high IOPS requirements, is not yet supported.
You can share the same volume among tasks running the same service, or you can
use a unique volume for each task.
## Use Cloudstor
After initializing or joining a swarm on Docker for Azure, connect to any swarm
manager using SSH. Verify that the CloudStor plugin is already installed and
configured for the stack or resource group:
```bash
$ docker plugin ls
ID NAME DESCRIPTION ENABLED
f416c95c0dcc cloudstor:azure cloud storage plugin for Docker true
```
The following examples show how to create swarm services that require data
persistence using the `--mount` flag and specifying Cloudstor as the volume
driver.
### Share the same volume among tasks
If you specify a static value for the `source` option to the `--mount` flag, a
single volume is shared among the tasks participating in the service.
Cloudstor volumes can be created to share access to persistent data across all tasks of a swarm service running on multiple nodes. Example:
```bash
$ docker service create \
--replicas 5 \
--name ping1 \
--mount type=volume,volume-driver=cloudstor:azure,source=sharedvol1,destination=/shareddata \
alpine ping docker.com
```
In this example, all replicas/tasks of the service `ping1` share the same
persistent volume `sharedvol1` mounted at `/shareddata` path within the
container. Docker takes care of interacting with the Cloudstor plugin to ensure
that the common backing store is mounted on all nodes in the swarm where service
tasks are scheduled. Your application needs to be designed to ensure that tasks
do not write concurrently on the same file at the same time, to protect against
data corruption.
You can verify that the same volume is shared among all the tasks by logging
into one of the task containers, writing a file under `/shareddata/`, and
logging into another task container to verify that the file is available there
as well.
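For example, a quick check from two different nodes might look like this; the container IDs are placeholders you would read from `docker ps` on each node:
```bash
# On the node running the first task
$ docker exec -it <task1-container-id> sh -c 'echo hello > /shareddata/test.txt'

# On the node running another task of the same service
$ docker exec -it <task2-container-id> cat /shareddata/test.txt
hello
```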
### Use a unique volume per task
You can use a templatized notation with the `docker service create` CLI to
create and mount a unique Cloudstor volume for each task in a swarm service.
It is possible to use the templatized notation to indicate to Docker Swarm that a unique Cloudstor volume be created and mounted for each replica/task of a service. This may be useful if the tasks write to the same file under the same path, which could lead to corruption with shared storage. Example:
{% raw %}
```bash
$ docker service create \
--replicas 5 \
--name ping2 \
--mount type=volume,volume-driver=cloudstor:azure,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
alpine ping docker.com
```
{% endraw %}
A unique volume is created and mounted for each task participating in the
`ping2` service. Each task mounts its own volume at `/mydata/` and all files
under that mountpoint are unique to the task mounting the volume.
If a task is rescheduled on a different node after the volume is created and
mounted, Docker interacts with the Cloudstor plugin to create and mount the
volume corresponding to the task on the new node for the task.
It is highly recommended that you use the `.Task.Slot` template to ensure that
task `N` always gets access to volume `N`, no matter which node it is executing
on/scheduled to.
### Volume options
Cloudstor creates a new File Share in Azure File Storage for each volume and
uses SMB to mount these File Shares. SMB has limited compatibility with generic
Unix file ownership and permissions-related operations. Certain containers, such
as `jenkins` and `gitlab`, define specific users and groups which perform different
file operations. These types of workloads require the Cloudstor volume to be
mounted with the UID/GID of the user specified in the Dockerfile or setup scripts. Cloudstor allows for this scenario and
provides greater control over default file permissions by exposing the following
volume options that map to SMB parameters used for mounting the backing file
share.
| Option | Description | Default |
|:-----------|:--------------------------------------------------------------------------------------------------------|:------------------------|
| `uid` | User ID that owns all files on the volume. | `0` = `root` |
| `gid` | Group ID that owns all files on the volume. | `0` = `root` |
| `filemode` | Permissions for all files on the volume. | `0777` |
| `dirmode` | Permissions for all directories on the volume. | `0777` |
| `share` | Name to associate with file share so that the share can be easily located in the Azure Storage Account. | MD5 hash of volume name |
This example sets `uid` to 1000 and `share` to `sharedvol` rather than an MD5 hash:
```bash
$ docker service create \
--replicas 5 \
--name ping1 \
--mount type=volume,volume-driver=cloudstor:azure,source=sharedvol1,destination=/shareddata,volume-opt=uid=1000,volume-opt=share=sharedvol \
alpine ping docker.com
```
#### List or remove volumes created by Cloudstor
You can use `docker volume ls` on any node to enumerate all volumes created by
Cloudstor across the swarm.
You can use `docker volume rm [volume name]` to remove a Cloudstor volume from
any node. If you remove a volume from one node, make sure it is not being used
by another active node, since those tasks/containers in another node lose
access to their data.
## Use a different storage endpoint
If you need to use a different storage endpoint, such as `core.cloudapi.de`,
you can specify it when you install the plugin:
```bash
$ docker plugin install --alias cloudstor:azure \
--grant-all-permissions docker4x/cloudstor:17.06.1-ce-azure1 \
CLOUD_PLATFORM=AZURE \
AZURE_STORAGE_ACCOUNT_KEY="$SA_KEY" \
AZURE_STORAGE_ACCOUNT="$SWARM_INFO_STORAGE_ACCOUNT" \
AZURE_STORAGE_ENDPOINT="core.cloudapi.de" \
DEBUG=1
```
By default, the Public Azure Storage endpoint is used.

@ -1,166 +0,0 @@
---
description: Release notes
keywords: azure, microsoft, iaas, tutorial, edge, stable
title: Docker for Azure Release Notes
---
{% include d4a_buttons.md %}
## Enterprise Edition
[Docker Enterprise Edition Lifecycle](https://success.docker.com/Policies/Maintenance_Lifecycle){: target="_blank"}<!--_-->
### 17.06.2-ee-19 EE
- Docker engine 17.06.2-ee-19 EE
- UCP 2.2.16
- DTR 2.3.10
### 17.06 EE
- Docker engine 17.06 EE
- For Std/Adv, external logging has been removed, as it is now handled by [UCP](https://docs.docker.com/datacenter/ucp/2.0/guides/configuration/configure-logs/){: target="_blank"}
- UCP 2.2.3
- DTR 2.3.3
### 17.03 EE
- Docker engine 17.03 EE
- UCP 2.1.5
- DTR 2.2.7
## Stable channel
{{azure_blue_latest}}
### 18.09.2
Release date: 2/24/2019
- Docker Engine upgraded to [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2){: target="_blank" class="_"}
### 18.06.1 CE
Release date: 8/24/2018
- Docker Engine upgraded to [Docker 18.06.1 CE](https://github.com/docker/docker-ce/releases/tag/v18.06.1-ce){: target="_blank" class="_"}
### 18.03 CE
Release date: 3/21/2018
- Docker Engine upgraded to [Docker 18.03.0 CE](https://github.com/docker/docker-ce/releases/tag/v18.03.0-ce){: target="_blank" class="_"}
### 17.12.1 CE
Release date: 3/1/2018
- Docker Engine upgraded to [Docker 17.12.1 CE](https://github.com/docker/docker-ce/releases/tag/v17.12.1-ce){: target="_blank" class="_"}
### 17.12 CE
Release date: 1/9/2018
- Docker Engine upgraded to [Docker 17.12.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.12.0-ce){: target="_blank" class="_"}
- Kernel patch to mitigate Meltdown attacks (CVE-2017-5754) and enable KPTI
> **Note** There was an issue in LinuxKit that prevented containers from [starting after a machine reboot](https://github.com/moby/moby/issues/36189){: target="_blank" class="_"}.
### 17.09 CE
Release date: 10/6/2017
- Docker Engine upgraded to [Docker 17.09.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.09.0-ce){: target="_blank" class="_"}
- Moby mounts for early reboot support
- Docker binary bundled where needed to allow easier host interchange
- Azure VHD uses the full hard drive space
### 17.06.2 CE
Release date: 09/08/2017
**New**
- Docker Engine upgraded to [Docker 17.06.2 CE](https://github.com/docker/docker-ce/releases/tag/v17.06.2-ce){: target="_blank"}
- VMSS APIs updated to use 2017-03-30 to allow upgrades when customdata has changed
### 17.06.1 CE
Release date: 08/17/2017
**New**
- Docker Engine upgraded to [Docker 17.06.1 CE](https://github.com/docker/docker-ce/releases/tag/v17.06.1-ce){: target="_blank"}
- Improvements to CloudStor support
- Azure agent logs are limited to 50 MB with proper log rotation
### 17.06.0 CE
Release date: 06/28/2017
**New**
- Docker Engine upgraded to [Docker 17.06.0 CE](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- Introduced a new way to kick off upgrades through a container. The old upgrade.sh is no longer supported.
- Introduced new SMB mount related parameters for Cloudstor volumes [persistent storage volumes](persistent-data-volumes.md).
### 17.03.1 CE
Release date: 03/30/2017
**New**
- Docker Engine upgraded to [Docker 17.03.1 CE](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- Fixed bugs in the way container logs are uploaded to File Storage in the storage account for logs
### 17.03.0 CE
Release date: 02/08/2017
**New**
- Docker Engine upgraded to [Docker 17.03.0 CE](https://github.com/docker/docker/blob/master/CHANGELOG.md)
### 1.13.1-2
Release date: 02/08/2017
**New**
- Docker Engine upgraded to [Docker 1.13.1](https://github.com/docker/docker/blob/master/CHANGELOG.md)
### 1.13.0-1
Release date: 01/18/2017
**New**
- Docker Engine upgraded to [Docker 1.13.0](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- Writing to home directory no longer requires `sudo`
- Added support for fine-grained monitoring of the health status of swarm nodes, destroying unhealthy nodes, and creating replacement nodes
- Added support for scaling the number of nodes in the manager and worker VM scale sets through the Azure UI/CLI
- Improved logging and remote diagnostics mechanisms for system containers
## Edge channel
### 18.01 CE
{{azure_blue_edge}}
**New**
Release date: 1/18/2018
- Docker Engine upgraded to [Docker 18.01.0 CE](https://github.com/docker/docker-ce/releases/tag/v18.01.0-ce){: target="_blank" class="_"}
### 17.10 CE
**New**
Release date: 10/18/2017
- Docker Engine upgraded to [Docker 17.10.0 CE](https://github.com/docker/docker-ce/releases/tag/v17.10.0-ce){: target="_blank" class="_"}
- Editions containers log to stdout instead of to disk, preventing HDD fill-up
- Azure VHD mounts the instance HDD, allowing for smaller boot disks
## Template archive
If you are looking for templates from older releases, check out the [template archive](/docker-for-azure/archive.md).

@ -1,81 +0,0 @@
---
description: Upgrading your stack
keywords: azure, microsoft, iaas, tutorial
title: Docker for Azure upgrades
---
Docker for Azure supports upgrading from one version to the next within a specific channel. Upgrading across channels (for example, from `edge` to `stable` or `test` to `edge`) is not supported. To upgrade to a specific version, run the upgrade container corresponding to the target version for the upgrade. An upgrade of Docker for Azure involves:
* Upgrading the VHD backing the manager and worker nodes (Docker ships in the VHD)
* Upgrading service containers in the manager and worker nodes
* Changing any other resources in the Azure Resource Group that hosts Docker for Azure
## Prerequisites
* We recommend only attempting upgrades of swarms with at least 3 managers. A 1-manager swarm can't maintain quorum during an upgrade.
* You can only upgrade one version at a time. Skipping a version during an upgrade is not supported.
* Downgrades are not tested.
* Upgrading across channels (`stable`, `edge`, or `testing`) is not supported.
* If the swarm contains nodes in the `down` state, remove them from the swarm before attempting the upgrade, using `docker node rm <node-id>`.
## Upgrading
New releases are announced on the [Release Notes](release-notes.md) page.
To initiate an upgrade, connect a manager node using SSH and run the container corresponding to the version you want to upgrade to:
```bash
$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-ti \
docker4x/upgrade-azure:version-tag
```
For example, this command upgrades from 17.03 CE or 17.06.0 CE stable release to 17.06.1 CE stable:
```bash
$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-ti \
docker4x/upgrade-azure:17.06.1-ce-azure1
```
If you are already on a version more recent than 17.06.1 CE or 17.07.0 CE, you can use `upgrade.sh` to initiate the upgrade:
```bash
$ upgrade.sh YY.MM.X-ce-azureC
```
This example upgrades Docker for Azure from 17.06.1 CE Edge to 17.07.0 CE Edge:
```bash
$ upgrade.sh 17.07.0-ce-azure1
```
This initiates a rolling upgrade of the Docker swarm. Service state is maintained during and after the upgrade. Appropriately scaled services should not experience downtime during an upgrade. Single containers which are not part of services (for example, containers started with `docker run`) are **not** preserved during an upgrade. This is because they are not Docker services but are known only to the individual Docker engine where they are running.
## Monitoring
The upgrade process may take several minutes to complete. You can follow the progress of the upgrade either from the output of the upgrade command or from the Azure UI blades corresponding to the Virtual Machine Scale Sets hosting your manager and worker nodes. The URLs for these blades, specific to your subscription and resource group, are printed in the output of the upgrade command and follow this format:
```none
https://portal.azure.com/#resource/subscriptions/[subscription-id]/resourceGroups/[resource-group]/providers/Microsoft.Compute/virtualMachineScaleSets/swarm-manager-vmss/instances
```
```none
https://portal.azure.com/#resource/subscriptions/[subscription-id]/resourceGroups/[resource-group]/providers/Microsoft.Compute/virtualMachineScaleSets/swarm-worker-vmss/instances
```
`[subscription-id]` and `[resource-group]` are placeholders which are replaced by
real values.
In the last stage of the upgrade, the manager node from which the upgrade was initiated needs to be shut down, upgraded, and reimaged. During this time, you can't access the node, and if you were logged in, your SSH connection drops.
## Post upgrade
After the upgrade, the IP address and the port needed to SSH into the manager nodes do not change. However, the host identity of the manager nodes does change as the VMs get reimaged. So when you try to SSH in after a successful upgrade, your SSH client may suspect a man-in-the-middle attack. You need to delete the old entries in your SSH client's store (typically `~/.ssh/known_hosts`) for new SSH connections to the upgraded manager nodes to succeed.
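One way to clear the stale entries is `ssh-keygen -R`; the host and port below are placeholders for your manager's public IP and SSH port:
```bash
$ ssh-keygen -R <manager-ip>
# If you connect on a non-standard port, the known_hosts entry is keyed on host and port:
$ ssh-keygen -R "[<manager-ip>]:<ssh-port>"
```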
## Changing instance sizes and other template parameters
This is not supported for Azure at the moment.

@ -1,9 +0,0 @@
---
description: Why Docker for Azure?
keywords: azure, microsoft, iaas, why
title: Why Docker for Azure?
---
{% assign cloudprovider_log_dest = 'a storage account in the created resource group' %}
{% assign cloudprovider = 'Azure' %}
{% include why_d4a.md %}

@ -122,11 +122,6 @@ cluster:
version: '1.0'
```
For more information on Docker Cloudstor see:
- [Cloudstor for AWS](/docker-for-aws/persistent-data-volumes/)
- [Cloudstor for Azure](/docker-for-azure/persistent-data-volumes/)
The following optional elements can be specified:
- `version`: (Required) The version of Docker Cloudstor to install. The default
@ -601,4 +596,4 @@ by:
### Where to go next
* [UCP CLI reference](https://docs.docker.com/reference/)
* [DTR CLI reference](https://docs.docker.com/reference/)

@ -292,7 +292,6 @@ They display corresponding entries for the change in leadership.
## Additional resources
- [Installing Docker Engine on a cloud provider](/docker-for-aws/)
- [High availability in Docker Swarm](multi-manager-setup.md)
- [Discovery](discovery.md)
- [High-availability cluster using a trio of consul nodes](https://hub.docker.com/r/progrium/consul/)