Added latest docs from Editions AWS & Azure (#887)
* Added Docker for AWS and Azure and moved navigation
  Signed-off-by: French Ben <frenchben@docker.com>
* Fixed image links
  Signed-off-by: French Ben <frenchben@docker.com>
@@ -513,42 +513,70 @@ toc:
        title: Deprecated Engine Features
      - path: /engine/faq/
        title: FAQ
  - sectiontitle: Get Docker
    section:
    - sectiontitle: Docker for Mac
      section:
      - path: /docker-for-mac/
        title: Getting Started
      - path: /docker-for-mac/docker-toolbox/
        title: Docker for Mac vs. Docker Toolbox
      - path: /docker-for-mac/multi-arch/
        title: Leveraging Multi-CPU Architecture Support
      - path: /docker-for-mac/networking/
        title: Networking
      - path: /docker-for-mac/osxfs/
        title: File system sharing
      - path: /docker-for-mac/troubleshoot/
        title: Logs and Troubleshooting
      - path: /docker-for-mac/faqs/
        title: FAQs
      - path: /docker-for-mac/examples/
        title: Example Applications
      - path: /docker-for-mac/opensource/
        title: Open Source Licensing
      - path: /docker-for-mac/release-notes/
        title: Release Notes
    - sectiontitle: Docker for Windows
      section:
      - path: /docker-for-windows/
        title: Getting Started
      - path: /docker-for-windows/troubleshoot/
        title: Logs and Troubleshooting
      - path: /docker-for-windows/faqs/
        title: FAQs
      - path: /docker-for-windows/examples/
        title: Example Applications
      - path: /docker-for-windows/opensource/
        title: Open Source Licensing
      - path: /docker-for-windows/release-notes/
        title: Release Notes
    - sectiontitle: Docker for AWS
      section:
      - path: /docker-for-aws/
        title: Setup & Prerequisites
      - path: /docker-for-aws/scaling/
        title: Scaling
      - path: /docker-for-aws/upgrade/
        title: Upgrading
      - path: /docker-for-aws/faqs/
        title: FAQs
      - path: /docker-for-aws/opensource/
        title: Open Source Licensing
      - path: /docker-for-aws/release-notes/
        title: Release Notes
    - sectiontitle: Docker for Azure
      section:
      - path: /docker-for-azure/
        title: Setup & Prerequisites
      - path: /docker-for-azure/upgrade/
        title: Upgrading
      - path: /docker-for-azure/faqs/
        title: FAQs
      - path: /docker-for-azure/opensource/
        title: Open Source Licensing
      - path: /docker-for-azure/release-notes/
        title: Release Notes
  - sectiontitle: Docker Compose
    section:
    - path: /compose/overview/
@@ -0,0 +1,169 @@
---
description: Deploying Apps on Docker for AWS
keywords: aws, amazon, iaas, deploy
title: Deploy your app on Docker for AWS
---

## Connecting to your manager nodes

This section walks you through connecting to your installation and deploying
applications. Instructions are included for both AWS and Azure, so be sure to
follow the instructions for the cloud provider of your choice in each section.

First, obtain the public IP address for a manager node. Any manager node can be
used for administrating the swarm.

##### Manager Public IP on AWS

Once you've deployed Docker on AWS, go to the "Outputs" tab for the stack in
CloudFormation.

The "Managers" output is a URL you can use to see the available manager nodes of
the swarm in your AWS console. Once on this page, you can see the "Public IP" of
each manager node in the table, or on the "Description" tab if you click on the
instance.



## Connecting via SSH

#### Manager nodes

Obtain the public IP and/or port for the manager node as instructed above and
use the provided SSH key to begin administrating your swarm:

    $ ssh -i <path-to-ssh-key> docker@<ssh-host>
    Welcome to Docker!

Once you are logged into the container, you can run Docker commands on the swarm:

    $ docker info
    $ docker node ls

You can also tunnel the Docker socket over SSH to remotely run commands on the cluster (requires [OpenSSH 6.7](https://lwn.net/Articles/609321/) or later):

    $ ssh -NL localhost:2374:/var/run/docker.sock docker@<ssh-host> &
    $ docker -H localhost:2374 info

If you don't want to pass `-H` when using the tunnel, you can set the `DOCKER_HOST` environment variable to point to the localhost tunnel opening.
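
For example, a minimal session using the tunnel looks like this (standard `docker` CLI behavior; the port matches the tunnel opened above):

    $ export DOCKER_HOST=localhost:2374
    $ docker info
    $ docker node ls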

### Worker nodes

As of Beta 13, the worker nodes also have SSH enabled when connecting from
manager nodes. SSH access to the worker nodes is not possible from the public
Internet. To access the worker nodes, you will need to first connect to a
manager node (see above).

On the manager node you can then `ssh` to the worker node over the private
network. Make sure you have SSH agent forwarding enabled (see below). If you run
the `docker node ls` command, you can see the full list of nodes in your swarm.
You can then `ssh docker@<worker-host>` to get access to that node.

##### AWS

Use the `HOSTNAME` reported in `docker node ls` directly.

```
$ docker node ls
ID                           HOSTNAME                                    STATUS  AVAILABILITY  MANAGER STATUS
a3d4vdn9b277p7bszd0lz8grp *  ip-172-31-31-40.us-east-2.compute.internal  Ready   Active        Reachable
...

$ ssh docker@ip-172-31-31-40.us-east-2.compute.internal
```

##### Using SSH agent forwarding

SSH agent forwarding allows you to forward your SSH keys along when connecting from one node to another. This eliminates the need to install your private key on all the nodes you might want to connect from.

You can use this feature to SSH into worker nodes from a manager node without
installing keys directly on the manager.

If you haven't added your SSH key to the `ssh-agent`, you will need to do this first.

To see the keys already in the agent, run:

```
$ ssh-add -L
```

If you don't see your key, add it like this:

```
$ ssh-add ~/.ssh/your_key
```

On Mac OS X, the `ssh-agent` forgets this key once it gets restarted, but you can import your SSH key into your Keychain so that the key survives restarts:

```
$ ssh-add -K ~/.ssh/your_key
```

You can then enable SSH agent forwarding per session using the `-A` flag for the ssh command.

Connecting to the manager:

```
$ ssh -A docker@<manager ip>
```

To always have it turned on for a given host, you can edit your ssh config file
(`/etc/ssh_config`, `~/.ssh/config`, etc.) to add the `ForwardAgent yes` option.

Example configuration:

```
Host manager0
  HostName <manager ip>
  ForwardAgent yes
```

To SSH in to the manager with the above settings:

```
$ ssh docker@manager0
```

## Running apps

You can now start creating containers and services.

    $ docker run hello-world

You can run websites too. Ports exposed with `-p` are automatically exposed through the platform load balancer:

    $ docker service create --name nginx -p 80:80 nginx

Once up, find the `DefaultDNSTarget` output in either the AWS or Azure portals to access the site.

### Execute docker commands in all swarm nodes

There are cases (such as installing a volume plugin) where a docker command may need to be executed on all the nodes across the cluster. You can use the `swarm-exec` tool to achieve that.

Usage: `swarm-exec {Docker command}`

For example, the following installs a test plugin on all the nodes in the cluster:

`swarm-exec docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin`

This tool internally makes use of a Docker global-mode service that runs a task on each of the nodes in the cluster. This task in turn executes your docker command. The global-mode service also guarantees that when a new node is added to the cluster, or during upgrades, a new task is executed on that node, so the docker command runs there automatically as well.
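
For illustration, a hand-built global-mode service behaves similarly. This is only a sketch, not part of `swarm-exec` itself; the service name and image are arbitrary examples:

```
$ docker service create --mode global --name run-everywhere \
    --restart-condition none \
    alpine sh -c 'echo "ran on $(hostname)"'
```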

### Distributed Application Bundles

To deploy complex multi-container apps, you can use [distributed application bundles](https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md). You can either run `docker deploy` to deploy a bundle on your machine over an SSH tunnel, or copy the bundle (for example using `scp`) to a manager node, SSH into the manager, and then run `docker deploy` (if you have multiple managers, you have to ensure that your session is on one that has the bundle file).

A good sample app to test application bundles is the [Docker voting app](https://github.com/docker/example-voting-app).
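
As a sketch of the copy-to-manager workflow (the bundle and stack names here are hypothetical; `docker deploy` was experimental at this time, and `docker-compose bundle` generates a `.dab` from a Compose file):

```
$ docker-compose bundle -o voteapp.dab   # on your machine, from the app's Compose file
$ scp voteapp.dab docker@<manager-ip>:~  # copy the bundle to a manager node
$ ssh docker@<manager-ip>
$ docker deploy voteapp                  # looks for voteapp.dab in the current directory
```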

By default, apps deployed with bundles do not have ports publicly exposed. Update port mappings for services, and Docker will automatically wire up the underlying platform load balancers:

    docker service update --publish-add 80:80 <example-service>

### Images in private repos

To create swarm services using images in private repos, first make sure you're authenticated and have access to the private repo, then create the service with the `--with-registry-auth` flag (the example below assumes you're using Docker Hub):

    docker login
    ...
    docker service create --with-registry-auth user/private-repo
    ...

This will cause swarm to cache the registry credentials and use them when creating containers for the service.
@@ -0,0 +1,88 @@
---
description: Frequently asked questions
keywords: aws faqs
title: Docker for AWS Frequently asked questions (FAQ)
---

## Can I use my own AMI?

No, at this time we only support the default Docker for AWS AMI.

## How do I use Docker for AWS with an AWS account in an EC2-Classic region?

If you have an AWS account that was created before **December 4th, 2013**, you have what is known as an **EC2-Classic** account in regions where you have previously deployed resources. **EC2-Classic** accounts don't have default VPCs or the associated subnets, etc. This causes a problem when using our CloudFormation template, because we use the [Fn::GetAZs](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getavailabilityzones.html) function AWS provides to determine which availability zones you have access to. When used in a region where you have **EC2-Classic**, this function returns all availability zones for the region, even ones you don't have access to. With an **EC2-VPC** account, it returns only the availability zones you have access to.

If you have an **EC2-Classic** account and don't have access to the `a` and `b` availability zones for that region, this causes an error like the following:

> "Value (us-east-1a) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-east-1d, us-east-1c, us-east-1b, us-east-1e."

There isn't anything we can do right now to fix this issue. We have contacted Amazon, and we are hoping they will provide us with a way to determine whether an account is **EC2-Classic** or **EC2-VPC**, so we can act accordingly.

#### How to tell if you have this issue

This AWS documentation page describes how you can tell whether you have EC2-Classic, EC2-VPC, or both: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html
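
The same information is also available from the command line via the `supported-platforms` account attribute (a standard AWS CLI call; the output lists `EC2` for EC2-Classic and/or `VPC`):

```
$ aws ec2 describe-account-attributes --attribute-names supported-platforms
```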

#### How to fix

There are a few workarounds you can try to get Docker for AWS up and running:

1. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`, so try another region such as `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions are more than likely set up with **EC2-VPC**, and you will no longer have this issue.
2. Create a new AWS account; all new accounts are set up using **EC2-VPC** and will not have this problem.
3. Contact AWS support to convert your **EC2-Classic** account to an **EC2-VPC** account. For more information, see the answer to **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** at https://aws.amazon.com/vpc/faqs/#Default_VPCs

#### Helpful links

- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html
- https://aws.amazon.com/vpc/faqs/#Default_VPCs
- https://aws.amazon.com/blogs/aws/amazon-ec2-update-virtual-private-clouds-for-everyone/

## Can I use my existing VPC?

Not at this time, but it is on our roadmap for future releases.

## Which AWS regions will this work with?

Docker for AWS should work with all regions except AWS China, which is a little different than the other regions.

## How many Availability Zones does Docker for AWS use?

All of Amazon's regions have at least 2 AZs, and some have more. To make sure Docker for AWS works in all regions, only 2 AZs are used even if more are available.

## What do I do if I get "KeyPair error" on AWS?

As part of the prerequisites, you need to have an SSH key uploaded to the AWS region you are trying to deploy to. For more information about adding an SSH key pair to your account, please refer to the [Amazon EC2 Key Pairs docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).

## I have a problem/bug, where do I report it?

Send an email to <docker-for-iaas@docker.com> or post to the [Docker for AWS](https://github.com/docker/for-aws) GitHub repository.

In AWS, if your stack is misbehaving, please run the following diagnostic tool from one of the managers - this will collect your docker logs and send them to Docker:

```
$ docker-diagnose
OK hostname=manager1
OK hostname=worker1
OK hostname=worker2
Done requesting diagnostics.
Your diagnostics session ID is 1234567890-xxxxxxxxxxxxxx
Please provide this session ID to the maintainer debugging your issue.
```

_Please note that your output will be slightly different from the above, depending on your swarm configuration._

## Analytics

The beta versions of Docker for AWS and Azure send anonymized analytics to Docker. These analytics are used to monitor beta adoption and are critical to improving Docker for AWS and Azure.

## How do I run administrative commands?

By default, when you SSH into a manager you will be logged in as the regular user `docker`. It is possible, however, to run commands with elevated privileges using `sudo`. For example, to ping one of the nodes after finding its IP via the Azure/AWS portal (e.g. 10.0.0.4), you could run:

```
$ sudo ping 10.0.0.4
```

Note that access to Docker for AWS and Azure happens through a shell container that itself runs on Docker.
@@ -0,0 +1,105 @@
---
description: Setup & Prerequisites
keywords: aws, amazon, iaas, tutorial
title: Docker for AWS Setup & Prerequisites
---

## Prerequisites

- Access to an AWS account with permissions to use CloudFormation and to create the following objects:
  - EC2 instances + Auto Scaling groups
  - IAM profiles
  - DynamoDB tables
  - SQS queue
  - VPC + subnets
  - ELB
  - CloudWatch Log Group
- SSH key in AWS in the region where you want to deploy (required to access the completed Docker install)
- AWS account that supports EC2-VPC (see the [FAQ for details about EC2-Classic](../faq/aws.md))

For more information about adding an SSH key pair to your account, please refer to the [Amazon EC2 Key Pairs docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).

## Configuration

Docker for AWS is installed with a CloudFormation template that configures Docker in swarm mode, running on instances backed by custom AMIs. There are two ways you can deploy Docker for AWS: you can use the AWS Management Console (browser based), or use the AWS CLI. Both have the following configuration options.

### Configuration options

#### KeyName
Pick the SSH key that will be used when you SSH into the manager nodes.

#### InstanceType
The EC2 instance type for your worker nodes.

#### ManagerInstanceType
The EC2 instance type for your manager nodes. The larger your swarm, the larger the instance size you should use.

#### ClusterSize
The number of workers you want in your swarm (1-1000).

#### ManagerSize
The number of managers in your swarm. You can pick either 1, 3, or 5 managers. We only recommend 1 manager for testing and dev setups; there are no failover guarantees with 1 manager, so if the single manager fails, the swarm will go down as well. Additionally, upgrading single-manager swarms is not currently guaranteed to succeed.

We recommend at least 3 managers, and if you have a lot of workers, you should pick 5 managers.

#### EnableSystemPrune

Enable if you want Docker for AWS to automatically clean up unused space on your swarm nodes.

When enabled, `docker system prune` runs staggered every day, starting at 1:42AM UTC, on both workers and managers (see the example after the list below). The prune times are staggered slightly so that not all nodes are pruned at the same time. This limits resource spikes on the swarm.

Pruning removes the following:
- All stopped containers
- All volumes not used by at least one container
- All dangling images
- All unused networks
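
For reference, the scheduled cleanup corresponds to the 1.13-era prune command shown below; `docker system df` (also introduced in 1.13) shows how much space would be reclaimed:

```
$ docker system df              # show space used by images, containers, and volumes
$ docker system prune --force   # non-interactive cleanup, as run by the scheduled job
```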

#### EnableCloudWatchLogs
Enable if you want Docker to send your container logs to CloudWatch ("yes", "no"). Defaults to "yes".

#### WorkerDiskSize
Size of each worker's ephemeral storage volume in GiB (20 - 1024).

#### WorkerDiskType
Worker ephemeral storage volume type ("standard", "gp2").

#### ManagerDiskSize
Size of each manager's ephemeral storage volume in GiB (20 - 1024).

#### ManagerDiskType
Manager ephemeral storage volume type ("standard", "gp2").

### Installing with the AWS Management Console

The simplest way to use the template is with the CloudFormation section of the AWS Management Console.

Go to the [Release Notes](release-notes.md) page, and click on the "launch stack" button to start the deployment process.

### Installing with the CLI

You can also invoke the Docker for AWS CloudFormation template from the AWS CLI. Here is an example; make sure you populate all of the parameters and their values:

```
$ aws cloudformation create-stack --stack-name teststack --template-url <templateurl> --parameters ParameterKey=KeyName,ParameterValue=<keyname> ParameterKey=InstanceType,ParameterValue=t2.micro ParameterKey=ManagerInstanceType,ParameterValue=t2.micro ParameterKey=ClusterSize,ParameterValue=1 --capabilities CAPABILITY_IAM
```

To fully automate installs, you can use the [AWS CloudFormation API](http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/Welcome.html).
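
For example, a script can watch the stack come up with standard CloudFormation CLI calls (the stack name matches the example above):

```
$ aws cloudformation wait stack-create-complete --stack-name teststack
$ aws cloudformation describe-stacks --stack-name teststack --query 'Stacks[0].Outputs'
```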

## How it works

Docker for AWS starts with a CloudFormation template that creates everything that you need from scratch; the only prerequisites are the ones listed above.

It first creates a new VPC along with subnets and security groups. Once the networking is set up, it creates two Auto Scaling groups, one for the managers and one for the workers, and sets the desired capacity that was selected in the CloudFormation setup form. The managers start up first and form a swarm manager quorum using Raft. The workers then start up and join the swarm one by one, until all of the workers are up and running. At this point you will have the selected number of managers and workers in your swarm, ready to handle your application deployments. See the [deployment](../deploy.md) docs for your next steps.

If you increase the number of instances running in your worker Auto Scaling group (via the AWS console, or by updating the CloudFormation configuration), the new nodes that start up will automatically join the swarm.

Elastic Load Balancers (ELBs) are set up to help with routing traffic to your swarm.

## Logging

Docker for AWS automatically configures logging to CloudWatch for containers you run on Docker for AWS. A Log Group is created for each Docker for AWS install, and a log stream for each container.

`docker logs` and `docker service logs` are not supported on Docker for AWS. Instead, you should check the container logs in CloudWatch.
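
If you prefer the command line, the standard CloudWatch Logs CLI can list and read these streams (the log group name below is a placeholder; use the group created for your stack):

```
$ aws logs describe-log-streams --log-group-name <your-docker-for-aws-log-group>
$ aws logs get-log-events --log-group-name <your-docker-for-aws-log-group> --log-stream-name <container-log-stream>
```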

## System containers

Each node has a few system containers running on it to help run your swarm cluster. In order for everything to run smoothly, please keep those containers running and don't make any changes to them; if you do, Docker for AWS will not work correctly.
@@ -0,0 +1,11 @@
---
description: Docker's use of Open Source
keywords: docker, opensource
title: Open source components and licensing
---

Docker for AWS and Azure Editions are built using open source software.

Docker for AWS and Azure Editions distribute some components that are licensed under the GNU General Public License. You can download the source for these components [here](https://download.docker.com/opensource/License.tar.gz).

The sources for qemu-img can be obtained [here](http://wiki.qemu-project.org/download/qemu-2.4.1.tar.bz2). The sources for the gettext and glib libraries that qemu-img requires were obtained from [Homebrew](https://brew.sh/) and may be retrieved using `brew install --build-from-source gettext glib`.
@@ -0,0 +1,157 @@
---
description: Release notes
keywords: aws, amazon, iaas, release
title: Docker for AWS Release Notes
---

## 1.13.0-rc3-beta13

Release date: 12/06/2016

<a href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://docker-for-aws.s3.amazonaws.com/aws/beta/latest.json" data-rel="Beta-13" target="blank" id="aws-deploy"></a>

### New

- Docker Engine upgraded to [Docker 1.13.0-rc3](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- New option to decide if you want to send container logs to CloudWatch (previously it was always on)
- SSH access has been added to the worker nodes
- The Docker daemon no longer listens on port 2375
- Added a `swarm-exec` tool to execute a docker command across all of the swarm nodes. See [Executing Docker commands in all swarm nodes](../deploy#execute-docker-commands-in-all-swarm-nodes) for more details.

## 1.13.0-rc2-beta12

Release date: 11/23/2016

### New

- Docker Engine upgraded to [Docker 1.13.0-rc2](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- New option to clean up unused resources on your swarm using the new Docker prune command available in 1.13
- New option to pick the size of the ephemeral storage volume on workers and managers
- New option to pick the disk type for the ephemeral storage on workers and managers
- Changed the CloudWatch container log name from the container "ID" to "Container Name-ID"

## 1.13.0-rc1-beta11

Release date: 11/17/2016

### New

- Docker Engine upgraded to [Docker 1.13.0-rc1](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- Changes to port 2375 access. For security reasons, we locked down access to port 2375 in the following ways:
  - You can't connect to port 2375 on managers from workers (changed)
  - You can't connect to port 2375 on workers from other workers (changed)
  - You can't connect to port 2375 on managers and workers from the public internet (no change)
  - You can connect to port 2375 on workers from managers (no change)
  - You can connect to port 2375 on managers from other managers (no change)
- Changed the way we manage swarm tokens to make it more secure.

### Important
- Due to changes to the IP ranges in the subnets in beta 10, it is not possible to upgrade from beta 10 to beta 11; you will need to start from scratch with beta 11. We are sorry for any issues this might cause. We needed to make the change, and decided it was best to do it now, while still in private beta, to limit the impact.

## 1.12.3-beta10

Release date: 10/27/2016

### New

- Docker Engine upgraded to Docker 1.12.3
- Fixed the shell container that runs on the managers to remove an SSH host key that was accidentally added to the image. This could have led to a potential man-in-the-middle (MITM) attack. The SSH host key is now generated on host startup, so that each host has its own key.
- The SSH ELB for SSH'ing into the managers has been removed, because it is no longer possible to SSH into the managers without getting a security warning
- Each manager can be SSH'd into by following our deploy [guide](../deploy)
- Added new region us-east-2 (Ohio)
- Fixed some bugs related to upgrading the swarm
- SSH key pair is now a required field in CloudFormation
- Fixed networking dependency issues in the CloudFormation template that could result in a stack failure

## 1.12.2-beta9

Release date: 10/12/2016

### New

- Docker Engine upgraded to Docker 1.12.2
- Can better handle scaling swarm nodes down and back up again
- Container logs are now sent to CloudWatch
- Added a diagnostic command (`docker-diagnose`) to more easily send us diagnostic information in case of errors for troubleshooting
- Added sudo support to the shell container on manager nodes
- Changed the SQS default message timeout from 4 days to 12 hours
- Added support for region 'ap-south-1': Asia Pacific (Mumbai)

### Deprecated
- Port 2375 will be closed in the next release. If you rely on this being open, please plan accordingly.

## 1.12.2-RC3-beta8

Release date: 10/06/2016

* Docker Engine upgraded to 1.12.2-RC3

## 1.12.2-RC2-beta7

Release date: 10/04/2016

* Docker Engine upgraded to 1.12.2-RC2

## 1.12.2-RC1-beta6

Release date: 9/29/2016

### New

* Docker Engine upgraded to 1.12.2-RC1

## 1.12.1-beta5

Release date: 8/18/2016

### New

* Docker Engine upgraded to 1.12.1

### Errata

* Upgrading from previous Docker for AWS versions to 1.12.0-beta4 is not possible because of RC incompatibilities between Docker 1.12.0 release candidate 5 and previous release candidates.

## 1.12.0-beta4

Release date: 7/28/2016

### New

* Docker Engine upgraded to 1.12.0

### Errata

* Upgrading from previous Docker for AWS versions to 1.12.0-beta4 is not possible because of RC incompatibilities between Docker 1.12.0 release candidate 5 and previous release candidates.

## 1.12.0-rc5-beta3

(internal release)

## 1.12.0-rc4-beta2

Release date: 7/13/2016

### New

* Docker Engine upgraded to 1.12.0-rc4
* EC2 instance tags
* Beta Docker for AWS sends anonymous analytics

### Errata
* When upgrading, old Docker nodes may not be removed from the swarm and show up when running `docker node ls`. Marooned nodes can be removed with `docker node rm`.

## 1.12.0-rc3-beta1

### New

* First release of Docker for AWS!
* CloudFormation based installer
* ELB integration for running public-facing services
* Swarm access with SSH
* Worker scaling using AWS ASG

### Errata

* To assist with debugging, the Docker Engine API is available internally in the AWS VPC on TCP port 2375. These ports cannot be accessed from outside the cluster, but could be used from within the cluster to obtain privileged access on other cluster nodes. In future releases, direct remote access to the Docker API will not be available.
* Likewise, swarm mode is configured to auto-accept both manager and worker nodes inside the VPC. This policy will be changed to be more restrictive by default in the future.
@@ -0,0 +1,33 @@
---
description: Scaling your stack
keywords: aws, amazon, iaas, tutorial
title: Modifying Docker install on AWS
---

## Scaling workers

You can scale the worker count using the AWS Auto Scaling group. Docker automatically joins new instances to the swarm and removes instances that go away.

There are currently two ways to scale your worker group: you can "update" your stack and change the number of workers in the CloudFormation template parameters, or you can manually update the Auto Scaling group in the AWS console for EC2 Auto Scaling groups.

Changing the manager count live is _not_ currently supported.

### AWS Console

Log in to the AWS console, and go to the EC2 dashboard. On the lower left hand side select the "Auto Scaling Groups" link.

Look for the Auto Scaling group with a name like `$STACK_NAME-NodeASG-*`, where `$STACK_NAME` is the name of the stack you created when filling out the CloudFormation template for Docker for AWS. Once you find it, click the checkbox next to the name, then click the "Edit" button on the lower detail pane.

<img src="/img/autoscale_update.png">

Change the "Desired" field to the size of the worker pool that you would like, and hit "Save".

<img src="/img/autoscale_save.png">

This will take a few minutes and add the new workers to your swarm automatically. To lower the number of workers back down, update "Desired" again with the lower number, and the worker pool will shrink until it reaches the new size.
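
The same "Desired" change can be made from the AWS CLI; a sketch, assuming you look up your group's full generated name first (the group name below is a placeholder):

```
$ aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'
$ aws autoscaling set-desired-capacity --auto-scaling-group-name <stack-name>-NodeASG-XXXXXXXX --desired-capacity 5
```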

### CloudFormation Update

Go to the CloudFormation management page, and click the checkbox next to the stack you want to update. Then click the "Actions" button at the top, and select "Update Stack".

<img src="/img/cloudformation_update.png">

Pick "Use current template", and then click "Next". Fill out the same parameters you specified before, but this time change your worker count to the new count, and click "Next". Answer the rest of the form questions. CloudFormation will show you a preview of the changes it will make. Review the changes and, if they look good, click "Update". CloudFormation will change the worker pool size to the new value you specified. It will take a few minutes (longer for a larger increase or decrease of nodes), but when complete you will have your swarm with the new worker pool size.
@@ -0,0 +1,32 @@
---
description: Upgrading your stack
keywords: aws, amazon, iaas, tutorial
title: Docker for AWS Upgrades
---

Docker for AWS supports upgrading from one beta version to the next. Upgrades are done by applying a new version of the AWS CloudFormation template that powers Docker for AWS. Depending on changes in the next version, an upgrade involves:

* Changing the AMI backing manager and worker nodes (the Docker engine ships in the AMI)
* Upgrading service containers
* Changing the resource setup in the VPC that hosts Docker for AWS

To be notified of updates, submit your email address at [https://beta.docker.com/](https://beta.docker.com/).

## Prerequisites

* We recommend only attempting upgrades of swarms with at least 3 managers. A 1-manager swarm may not be able to maintain quorum during the upgrade.
* Upgrades are only supported from one version to the next version, for example beta-11 to beta-12. Skipping a version during an upgrade is not supported; for example, upgrading from beta-10 to beta-12 is not supported. Downgrades are not tested.

## Upgrading

If you submit your email address at [https://beta.docker.com/](https://beta.docker.com/), Docker will notify you of new releases by email. New releases are also posted on the [Release Notes](https://beta.docker.com/docs/aws/release-notes/) page.

To initiate an update, use either the AWS Console or the AWS CLI to initiate a stack update. Use the S3 template URL for the new release and complete the update wizard. This will initiate a rolling upgrade of the Docker swarm, and service state will be maintained during and after the upgrade. Appropriately scaled services should not experience downtime during an upgrade.
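
A CLI sketch of such a stack update (the template URL is a placeholder for the new release's S3 URL; `UsePreviousValue=true` keeps the parameters from your original deployment):

```
$ aws cloudformation update-stack --stack-name teststack --template-url <new-release-template-url> --parameters ParameterKey=KeyName,UsePreviousValue=true ParameterKey=InstanceType,UsePreviousValue=true ParameterKey=ManagerInstanceType,UsePreviousValue=true ParameterKey=ClusterSize,UsePreviousValue=true --capabilities CAPABILITY_IAM
```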



Note that single containers started (for example) with `docker run -d` are **not** preserved during an upgrade. This is because they're not Docker Swarm objects, but are known only to the individual Docker engines.

## Changing instance sizes and other template parameters

In addition to upgrading Docker for AWS from one version to the next, you can also use the AWS Update Stack feature to change template parameters such as worker count and instance type. Changing the manager count is **not** supported.
@@ -0,0 +1,178 @@
---
description: Deploying Apps on Docker for Azure
keywords: azure, microsoft, iaas, deploy
title: Deploy your app on Docker for Azure
---

## Connecting to your manager nodes

This section walks you through connecting to your installation and deploying
applications. Instructions are included for both AWS and Azure, so be sure to
follow the instructions for the cloud provider of your choice in each section.

First, obtain the public IP address for a manager node. Any manager node can be
used for administrating the swarm.

##### Manager Public IP and SSH ports on Azure

Once you've deployed Docker on Azure, go to the "Outputs" section of the
resource group deployment.



The "SSH Targets" output is a URL to a blade that describes the IP address
(common across all the manager nodes) and the SSH port (unique for each manager
node) that you can use to log in to each manager node.



## Connecting via SSH

#### Manager nodes

Obtain the public IP and the unique SSH port for a manager node as instructed
above, and use the provided SSH key to begin administrating your swarm:

    $ ssh -i <path-to-ssh-key> -p <ssh-port> docker@<ip>
    Welcome to Docker!

Once you are logged into the container, you can run Docker commands on the swarm:

    $ docker info
    $ docker node ls

You can also tunnel the Docker socket over SSH to remotely run commands on the cluster (requires [OpenSSH 6.7](https://lwn.net/Articles/609321/) or later):

    $ ssh -NL localhost:2374:/var/run/docker.sock docker@<ssh-host> &
    $ docker -H localhost:2374 info

If you don't want to pass `-H` when using the tunnel, you can set the `DOCKER_HOST` environment variable to point to the localhost tunnel opening.

### Worker nodes

As of Beta 13, the worker nodes also have SSH enabled when connecting from
manager nodes. SSH access to the worker nodes is not possible from the public
Internet. To access the worker nodes, you will need to first connect to a
manager node (see above).

On the manager node you can then `ssh` to the worker node over the private
network. Make sure you have SSH agent forwarding enabled (see below). If you run
the `docker node ls` command, you can see the full list of nodes in your swarm.
You can then `ssh docker@<worker-host>` to get access to that node.

##### Azure

Prepend the domain from `/etc/resolv.conf` to the `HOSTNAME` returned by
`docker node ls`.

```
$ docker node ls
ID                         HOSTNAME            STATUS  AVAILABILITY  MANAGER STATUS
e5grdng229oazh79252fpbgcc  swarm-worker000000  Ready   Active
...

$ cat /etc/resolv.conf
# Generated by dhcpcd from eth0.dhcp
# /etc/resolv.conf.head can replace this line
domain 2ct34bzag3fejkndbh0ypx4nnb.gx.internal.cloudapp.net
nameserver 168.63.129.16
# /etc/resolv.conf.tail can replace this line

$ ssh docker@swarm-worker000000.2ct34bzag3fejkndbh0ypx4nnb.gx.internal.cloudapp.net
```

##### Using SSH agent forwarding

SSH agent forwarding allows you to forward your SSH keys along when connecting from one node to another. This eliminates the need to install your private key on all the nodes you might want to connect from.

You can use this feature to SSH into worker nodes from a manager node without
installing keys directly on the manager.

If you haven't added your SSH key to the `ssh-agent`, you will need to do this first.

To see the keys already in the agent, run:

```
$ ssh-add -L
```

If you don't see your key, add it like this:

```
$ ssh-add ~/.ssh/your_key
```

On Mac OS X, the `ssh-agent` forgets this key once it gets restarted, but you can import your SSH key into your Keychain so that the key survives restarts:

```
$ ssh-add -K ~/.ssh/your_key
```

You can then enable SSH agent forwarding per session using the `-A` flag for the ssh command.

Connecting to the manager:

```
$ ssh -A docker@<manager ip>
```

To always have it turned on for a given host, you can edit your ssh config file
(`/etc/ssh_config`, `~/.ssh/config`, etc.) to add the `ForwardAgent yes` option.

Example configuration:

```
Host manager0
  HostName <manager ip>
  ForwardAgent yes
```

To SSH in to the manager with the above settings:

```
$ ssh docker@manager0
```

## Running apps

You can now start creating containers and services.

    $ docker run hello-world

You can run websites too. Ports exposed with `-p` are automatically exposed through the platform load balancer:

    $ docker service create --name nginx -p 80:80 nginx

Once up, find the `DefaultDNSTarget` output in either the AWS or Azure portals to access the site.
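
Once the service's tasks are running, you can check the site through the load balancer. The DNS name below is a placeholder for your deployment's actual `DefaultDNSTarget` value; a healthy nginx service should return an HTTP 200:

    $ curl -I http://<DefaultDNSTarget>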

### Execute docker commands in all swarm nodes

There are cases (such as installing a volume plugin) where a docker command may need to be executed on all the nodes across the cluster. You can use the `swarm-exec` tool to achieve that.

Usage: `swarm-exec {Docker command}`

For example, the following installs a test plugin on all the nodes in the cluster:

`swarm-exec docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin`

This tool internally makes use of a Docker global-mode service that runs a task on each of the nodes in the cluster. This task in turn executes your docker command. The global-mode service also guarantees that when a new node is added to the cluster, or during upgrades, a new task is executed on that node, so the docker command runs there automatically as well.

### Distributed Application Bundles

To deploy complex multi-container apps, you can use [distributed application bundles](https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md). You can either run `docker deploy` to deploy a bundle on your machine over an SSH tunnel, or copy the bundle (for example using `scp`) to a manager node, SSH into the manager, and then run `docker deploy` (if you have multiple managers, you have to ensure that your session is on one that has the bundle file).

A good sample app to test application bundles is the [Docker voting app](https://github.com/docker/example-voting-app).

By default, apps deployed with bundles do not have ports publicly exposed. Update port mappings for services, and Docker will automatically wire up the underlying platform load balancers:

    docker service update --publish-add 80:80 <example-service>

### Images in private repos

To create swarm services using images in private repos, first make sure you're authenticated and have access to the private repo, then create the service with the `--with-registry-auth` flag (the example below assumes you're using Docker Hub):

    docker login
    ...
    docker service create --with-registry-auth user/private-repo
    ...

This will cause swarm to cache the registry credentials and use them when creating containers for the service.
@@ -0,0 +1,66 @@
---
description: Frequently asked questions
keywords: azure faqs
title: Docker for Azure Frequently asked questions (FAQ)
---

## How long will it take before I get accepted into the Docker for Azure private beta?

Docker for Azure is built on top of Docker 1.13, but as with all betas, things are still changing, which means things can break between release candidates.

We are currently rolling it out slowly to make sure everything is working as it should. This is to ensure that if there are any issues, we limit the number of people affected.

## Why do you need my Azure Subscription ID?

We are using a private custom VHD, and in order to give you access to this VHD, we need your Azure Subscription ID.

## How do I find my Azure Subscription ID?

You can find this information in your Azure Portal subscription. For more info, see the directions on [this page](../index.md).

## I use more than one Azure Subscription ID. How do I get access to all of them?

Use the beta sign-up form, and put the subscription ID that you need to use most there. Then email us at <docker-for-iaas@docker.com> with your information and your other Azure Subscription IDs, and we will do our best to add those accounts as well. Due to the large number of requests, it might take a while before those subscriptions get added, so be sure to include the important one in the sign-up form; that way you will at least have that one.

## Can I use my own VHD?

No, at this time we only support the default Docker for Azure VHD.

## Can I specify the type of Storage Account I use for my VM instances?

Not at this time, but it is on our roadmap for future releases.

## Which Azure regions will Docker for Azure work with?

Docker for Azure should work with all supported Azure Marketplace regions.

## I have a problem/bug, where do I report it?

Send an email to <docker-for-iaas@docker.com> or post to the [Docker for Azure](https://github.com/docker/for-azure) GitHub repository.

In Azure, if your resource group is misbehaving, please run the following diagnostic tool from one of the managers - this will collect your docker logs and send them to Docker:

```
$ docker-diagnose
OK hostname=manager1
OK hostname=worker1
OK hostname=worker2
Done requesting diagnostics.
Your diagnostics session ID is 1234567890-xxxxxxxxxxxxxx
Please provide this session ID to the maintainer debugging your issue.
```

_Please note that your output will be slightly different from the above, depending on your swarm configuration._

## Analytics

The beta versions of Docker for AWS and Azure send anonymized analytics to Docker. These analytics are used to monitor beta adoption and are critical to improving Docker for AWS and Azure.

## How do I run administrative commands?

By default, when you SSH into a manager you will be logged in as the regular user `docker`. It is possible, however, to run commands with elevated privileges using `sudo`. For example, to ping one of the nodes after finding its IP via the Azure/AWS portal (e.g. 10.0.0.4), you could run:

```
$ sudo ping 10.0.0.4
```

Note that access to Docker for AWS and Azure happens through a shell container that itself runs on Docker.
@@ -0,0 +1,63 @@
---
description: Setup & Prerequisites
keywords: azure, microsoft, iaas, tutorial
title: Docker for Azure Setup & Prerequisites
---

## Prerequisites

- Access to an Azure account with admin privileges
- SSH key that you want to use when accessing your completed Docker install on Azure

## Configuration

Docker for Azure is installed with an Azure template that configures Docker in swarm mode, running on VMs backed by a custom VHD. There are two ways you can deploy Docker for Azure: you can use the Azure Portal (browser based), or use the Azure CLI. Both have the following configuration options.

### Configuration options

#### Manager Count
The number of managers in your swarm. You can pick either 1, 3, or 5 managers. We only recommend 1 manager for testing and dev setups; there are no failover guarantees with 1 manager, so if the single manager fails, the swarm will go down as well. Additionally, upgrading single-manager swarms is not currently guaranteed to succeed.

We recommend at least 3 managers, and if you have a lot of workers, you should pick 5 managers.

#### Manager VM size
The VM type for your manager nodes. The larger your swarm, the larger the VM size you should use.

#### Worker VM size
The VM type for your worker nodes.

#### Worker Count
The number of workers you want in your swarm (1-100).

### Service Principal

To set up Docker for Azure, a [Service Principal](https://azure.microsoft.com/en-us/documentation/articles/active-directory-application-objects/) is required. Docker for Azure uses the principal to operate Azure APIs as you scale up and down or deploy apps on your swarm. Docker provides a containerized helper script to help create the Service Principal:

    docker run -ti docker4x/create-sp-azure sp-name
    ...
    Your access credentials =============================
    AD App ID:     <app-id>
    AD App Secret: <secret>
    AD Tenant ID:  <tenant-id>

If you have multiple Azure subscriptions, make sure you're creating the Service Principal with the subscription ID that you shared with Docker when signing up for the beta.

`sp-name` is the name of the authentication app that the script creates with Azure. The name is not important; simply choose something you'll recognize in the Azure portal.

If the script fails, it's typically because your Azure user account doesn't have sufficient privileges. Contact your Azure administrator.

When setting up the ARM template, you will be prompted for the App ID (a UUID) and the app secret.

### SSH Key

Docker for Azure uses SSH for accessing the Docker swarm once it's deployed. During setup, you will be prompted for an SSH public key. If you don't have an SSH key, you can generate one with `puttygen` or `ssh-keygen`. You only need the public key component to set up Docker for Azure. Here's how to get the public key from a .pem file:

    ssh-keygen -y -f my-key.pem
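
If you're starting from scratch, a minimal sketch for generating a fresh key pair (the file name is an arbitrary example; the `.pub` file holds the public key you paste into the template):

    $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/docker4azure -N ""
    $ cat ~/.ssh/docker4azure.pub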

### Installing with the CLI

You can also invoke the Docker for Azure template from the Azure CLI. Here is an example; make sure you populate all of the parameters and their values:

```
$ azure group create --name DockerGroup --location centralus --deployment-name docker.template --template-file <templateurl>
```
@@ -0,0 +1,11 @@
---
description: Docker's use of Open Source
keywords: docker, opensource
title: Open source components and licensing
---

Docker for AWS and Azure Editions are built using open source software.

Docker for AWS and Azure Editions distribute some components that are licensed under the GNU General Public License. You can download the source for these components [here](https://download.docker.com/opensource/License.tar.gz).

The sources for qemu-img can be obtained [here](http://wiki.qemu-project.org/download/qemu-2.4.1.tar.bz2). The sources for the gettext and glib libraries that qemu-img requires were obtained from [Homebrew](https://brew.sh/) and may be retrieved using `brew install --build-from-source gettext glib`.
@@ -0,0 +1,67 @@
---
description: Release notes
keywords: azure, microsoft, iaas, tutorial
title: Docker for Azure Release Notes
---

## 1.13.0-beta12

Release date: 12/09/2016

<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdocker-for-azure.s3.amazonaws.com%2Fazure%2Fbeta%2Flatest.json" data-rel="Beta-13" target="_blank" id="azure-deploy"></a>

### New

- Docker Engine upgraded to [Docker 1.13.0-rc2](https://github.com/docker/docker/blob/master/CHANGELOG.md)
- SSH access has been added to the worker nodes
- The Docker daemon no longer listens on port 2375
- Added a `swarm-exec` tool to execute a docker command across all of the swarm nodes. See [Executing Docker commands in all swarm nodes](../deploy#execute-docker-commands-in-all-swarm-nodes) for more details.

## 1.12.3-beta10

Release date: 11/08/2016

### New

- Docker Engine upgraded to Docker 1.12.3
- Fixed the shell container that runs on the managers to remove an SSH host key that was accidentally added to the image. This could have led to a potential man-in-the-middle (MITM) attack. The SSH host key is now generated on host startup, so that each host has its own key.
- The SSH ELB for SSH'ing into the managers has been removed, because it is no longer possible to SSH into the managers without getting a security warning
- Multiple managers can be deployed
- All container logs can be found in the `xxxxlog` storage account
- Each manager can be SSH'd into by following our deploy [guide](../deploy)

## 1.12.2-beta9

Release date: 10/17/2016

### New

- Docker Engine upgraded to Docker 1.12.2
- Manager behind its own load balancer
- Added sudo support to the shell container on manager nodes

## 1.12.1-beta5

Release date: 8/19/2016

### New

* Docker Engine upgraded to 1.12.1

### Errata

* To assist with debugging, the Docker Engine API is available internally in the Azure VPC on TCP port 2375. These ports cannot be accessed from outside the cluster, but could be used from within the cluster to obtain privileged access on other cluster nodes. In future releases, direct remote access to the Docker API will not be available.

## 1.12.0-beta4

Release date: 8/9/2016

### New

* First release

### Errata

* To assist with debugging, the Docker Engine API is available internally in the Azure VPC on TCP port 2375. These ports cannot be accessed from outside the cluster, but could be used from within the cluster to obtain privileged access on other cluster nodes. In future releases, direct remote access to the Docker API will not be available.
@@ -0,0 +1,48 @@
---
description: Upgrading your stack
keywords: azure, microsoft, iaas, tutorial
title: Docker for Azure Upgrades
---

Docker for Azure supports upgrading from one beta version to the next. Upgrades are done by applying a new version of the Azure ARM template that powers Docker for Azure. An upgrade of Docker for Azure involves:

* Upgrading the VHD backing the manager and worker nodes (the Docker engine ships in the VHD)
* Upgrading service containers in the manager and worker nodes
* Changing any other resources in the Azure Resource Group that hosts Docker for Azure

To be notified of updates, submit your email address at [https://beta.docker.com/](https://beta.docker.com/).

## Prerequisites

* We recommend only attempting upgrades of swarms with at least 3 managers. A 1-manager swarm may not be able to maintain quorum during the upgrade.
* Upgrades are only supported from one version to the next version, for example beta-13 to beta-14. Skipping a version during an upgrade is not supported. Downgrades are not tested.
* Please make sure there are no nodes in the swarm in "down" status. If there are such nodes in the swarm, remove them from the swarm first (see the example below) using:

      docker node rm node-id
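
For example, a hypothetical session that spots and removes a down node before upgrading (the ID and hostname shown are illustrative):

    $ docker node ls
    ID                         HOSTNAME            STATUS  AVAILABILITY  MANAGER STATUS
    6fdmlnuzomyflq8r3xgq1wtnq  swarm-worker000002  Down    Active
    $ docker node rm 6fdmlnuzomyflq8r3xgq1wtnq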

## Upgrading

If you submit your email address at [https://beta.docker.com/](https://beta.docker.com/), Docker will notify you of new releases by email. New releases are also posted on the [Release Notes](https://beta.docker.com/docs/azure/release-notes/) page.

To initiate an upgrade, SSH into a manager node and issue the following command:

    upgrade.sh url_to_template.json

This will initiate a rolling upgrade of the Docker swarm, and service state will be maintained during and after the upgrade. Appropriately scaled services should not experience downtime during an upgrade. Note that single containers started (for example) with `docker run -d` are **not** preserved during an upgrade. This is because they are not Docker Swarm services, but are known only to the individual Docker engines.

## Monitoring

The upgrade process may take several minutes to complete. You can follow the progress of the upgrade either from the output of the upgrade command or, in the Azure UI, from the blades corresponding to the Virtual Machine Scale Sets hosting your manager and worker nodes. The URLs for the blades corresponding to your subscription and resource group are printed in the output of the upgrade command above. They have the following format:

    https://portal.azure.com/#resource/subscriptions/[subscription-id]/resourceGroups/[resource-group]/providers/Microsoft.Compute/virtualMachineScaleSets/swarm-manager-vmss/instances

    https://portal.azure.com/#resource/subscriptions/[subscription-id]/resourceGroups/[resource-group]/providers/Microsoft.Compute/virtualMachineScaleSets/swarm-worker-vmss/instances

Note that in the last stage of the upgrade, the manager node the upgrade is initiated from needs to be shut down, upgraded, and reimaged. During this time, you won't be able to access that node, and if you were logged in, your SSH connection will drop.

## Post Upgrade

After the upgrade, the IP address and the port needed to SSH into the manager nodes do not change. However, the host identity of the manager nodes does change as the VMs get reimaged, so when you SSH in after a successful upgrade, your SSH client may suspect a man-in-the-middle attack. You will need to delete the old entries in your SSH client's store (typically `~/.ssh/known_hosts`) for new SSH connections to the upgraded manager nodes to succeed.

## Changing instance sizes and other template parameters

This is not supported for Azure at the moment.