Redoing the example for local machines

Updating images
Adding in the Compose solution
Copy edits
Updating with Dong's comments

Signed-off-by: Mary Anthony <mary@docker.com>
Mary Anthony 2016-03-15 11:56:48 -07:00
parent 58f60ddb09
commit 70a180cb30
22 changed files with 1057 additions and 1007 deletions

BIN
docs/images/votes.gif Normal file


@@ -16,14 +16,15 @@ This page teaches you to deploy a high-availability Docker Swarm cluster.
Although the example installation uses the Amazon Web Services (AWS) platform,
you can deploy an equivalent Docker Swarm cluster on many other platforms. In this example, you do the following:
- Verify you have [prerequisites](#prerequisites) necessary to complete the example.
- [Establish basic network security](#step-1-add-network-security-rules) by creating a security group that restricts inbound traffic by port number, type, and origin.
- [Create your hosts](#step-2-create-your-instances) on your network by launching and configuring Elastic Cloud (EC2) instances.
- [Install Docker Engine on each instance](#step-3-install-engine-on-each-node)
- [Create a discovery backend](#step-4-set-up-a-discovery-backend) by running a consul container on one of the hosts.
- [Create a Swarm cluster](#step-5-create-swarm-cluster) by running two Swarm managers in a high-availability configuration.
- [Communicate with the Swarm](#step-6-communicate-with-the-swarm) and run a simple hello world application.
- [Test the high-availability Swarm managers](#step-7-test-swarm-failover) by causing a Swarm manager to fail.
- [Verify you have the prerequisites](#prerequisites)
- [Establish basic network security](#step-1-add-network-security-rules)
- [Create your nodes](#step-2-create-your-instances)
- [Install Engine on each node](#step-3-install-engine-on-each-node)
- [Configure a discovery backend](#step-4-set-up-a-discovery-backend)
- [Create Swarm cluster](#step-5-create-swarm-cluster)
- [Communicate with the Swarm](#step-6-communicate-with-the-swarm)
- [Test the high-availability Swarm managers](#step-7-test-swarm-failover)
- [Additional Resources](#additional-resources)
For a gentler introduction to Swarm, try the [Evaluate Swarm in a sandbox](install-w-machine) page.
@@ -300,8 +301,7 @@ They will display corresponding entries for the change in leadership.
## Additional Resources
- [Installing Docker Engine on a cloud provider](http://docs.docker.com/engine/installation/cloud/cloud-ex-aws/)
- [Docker Swarm 1.0 with Multi-host Networking: Manual Setup](http://goelzer.com/blog/2015/12/29/docker-swarmoverlay-networks-manual-method/)
- [High availability in Docker Swarm](multi-manager-setup/)
- [Discovery](discovery/)
- [High availability in Docker Swarm](multi-manager-setup.md)
- [Discovery](discovery.md)
- [High-availability cluster using a trio of consul nodes](https://hub.docker.com/r/progrium/consul/)
- [Networking](https://docs.docker.com/swarm/networking/)
- [Networking](networking.md)


@@ -1,111 +0,0 @@
<!--[metadata]>
+++
title = "Learn the application architecture"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, architecture"]
[menu.main]
parent="scale_swarm"
weight=-99
+++
<![end-metadata]-->
# Learn the application architecture
On this page, you learn about the Swarm at scale example. Make sure you have
read through [the introduction](index.md) to get an idea of the skills and time
required first.
## Learn the example back story
Your company is a pet food company that has bought a commercial during the
Superbowl. The commercial drives viewers to a web survey that asks users to vote
&ndash; cats or dogs. You are developing the web survey.
Your survey must ensure that millions of people can vote concurrently without
your website becoming unavailable. You don't need real-time results because a company
press release announces the results. However, you do need confidence that every
vote is counted.
## Understand the application architecture
The voting application is a dockerized microservice application. It uses a
parallel web frontend that sends jobs to asynchronous background workers. The
application's design can accommodate arbitrarily large scale. The diagram below
shows the high level architecture of the application.
![](../images/app-architecture.jpg)
The application is fully dockerized with all services running inside of
containers.
The frontend consists of an Interlock load balancer with *N* frontend web
servers and associated queues. The load balancer can handle an arbitrary number
of web containers behind it (`frontend01`- `frontendN`). The web containers run
a simple Python Flask application. Each web container accepts votes and queues
them to a Redis container on the same node. Each web container and Redis queue
pair operates independently.
The load balancer together with the independent pairs allows the entire
application to scale to an arbitrary size as needed to meet demand.
Behind the frontend is a worker tier which runs on separate nodes. This tier:
* scans the Redis containers
* dequeues votes
* deduplicates votes to prevent double voting
* commits the results to a Postgres container running on a separate node
Just like the front end, the worker tier can also scale arbitrarily. The worker
count and frontend count are independent from each other.
## Swarm Cluster Architecture
To support the application, the design calls for a Swarm cluster with a single
Swarm manager and four nodes as shown below.
![](../images/swarm-cluster-arch.jpg)
All four nodes in the cluster are running the Docker daemon, as is the Swarm
manager and the Interlock load balancer. The Swarm manager exists on a Docker
Engine host that is part of the cluster and is considered out of band for
the application. The Interlock load balancer could be placed inside of the
cluster, but for this demonstration it is not.
A container network is overlaid on top of the Swarm cluster using the container
overlay feature of Docker engine. The dockerized microservices are deployed to
this network. After completing the example and deploying your application, this
is what your environment should look like.
![](../images/final-result.jpg)
As the previous diagram shows, each node in the cluster runs the following containers:
- `frontend01`:
- Container: Python flask web app (frontend01)
- Container: Redis (redis01)
- `frontend02`:
- Container: Python flask web app (frontend02)
- Container: Redis (redis02)
- `worker01`: vote worker app (worker01)
- `store`:
- Container: Postgres (pg)
- Container: results app (results-app)
After you deploy the application, you'll configure your local system so that you
can test the application from your local browser. In production, of course, this
step wouldn't be needed.
## The network infrastructure
The example assumes you are deploying the application to a Docker Swarm cluster
running on top of Amazon Web Services (AWS). AWS is an example only. There is
nothing about this application or deployment that requires it. You could deploy
the application to a Docker Swarm cluster running on a different cloud provider
such as Microsoft Azure, on premises in your own physical data center, or in a
development environment on your laptop.
## Next step
Now that you understand the application architecture, you need to deploy a
network configuration that can support it. In the next step, you use AWS to
[deploy network infrastructure](02-deploy-infra.md) for use in this sample.


@@ -1,222 +0,0 @@
<!--[metadata]>
+++
title = "Deploy network infrastructure"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, certificates"]
[menu.main]
parent="scale_swarm"
weight=-90
+++
<![end-metadata]-->
# Deploy your infrastructure
In this step, you create an AWS Virtual Private Cloud (VPC) to run your
application stack on. Before you continue, make sure you have taken the time to
[learn the application architecture](01-about.md).
This example uses AWS but the AWS provider is only one example of an
infrastructure you can use. You can create the environment design on whatever
infrastructure you wish. For example, you could place the application on another
public cloud platform such as Azure or DigitalOcean, on premises in your data
center, or even in a test environment on your laptop.
>**Note**: If you are not deploying to AWS, or are not using the CloudFormation
template used in the instructions below, make sure your Docker hosts are running
a 3.16 or higher kernel. This kernel is required by Docker's container
networking feature.
## Overview of the deployment process
To deploy on an AWS infrastructure, you first build a VPC and then apply
the [CloudFormation template](https://github.com/docker/swarm-microservice-demo-v1/blob/master/AWS/cloudformation.json) prepared for you. The template describes the hosts in the example's stack. While you
could create the entire VPC and all instances via a CloudFormation template,
splitting the deployment into two steps lets you use the CloudFormation template
to build the stack on an *existing VPC*.
The diagram below shows the VPC infrastructure required to run the
CloudFormation template.
![](../images/cloud-formation-tmp.jpg)
The configuration is a single VPC with a single public subnet. The VPC
deployment relies on a <a
href="https://raw.githubusercontent.com/docker/swarm-microservice-demo-v1/master/AWS/cloudformation.json">cloudformation.json
template</a> which deploys into the `us-west-1` (N. California) or
`us-west-2` (Oregon) region. The ability to create instances in one of these regions is
**required** for this particular CloudFormation template to work. If you want to
use a different region, edit the template before the import step.
The VPC network address space is `192.168.0.0/16` and a single 24-bit public
subnet is carved out as `192.168.33.0/24`. The subnet must be configured with a
default route to the internet via the VPC's internet gateway. All six EC2
instances are deployed into this public subnet.
Once the VPC is created, you deploy the EC2 instances using the
CloudFormation template located
[in the `docker/swarm-microservice-demo-v1` repository](https://github.com/docker/swarm-microservice-demo-v1/blob/master/AWS/cloudformation.json).
## Prerequisites
You'll need to have an Amazon AWS account. This account can be personal or
through a corporate instance. The account must be able to deploy EC2 instances
in the `us-west-1` region (N. California).
Before starting this procedure, make sure you have an existing EC2 key
pair in the `us-west-1` region and that you have downloaded its `.pem` file. If
you aren't sure, log in to AWS. Then, <a
href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html"
target="_blank">follow the AWS documentation</a> to ensure you have the key pair
and have downloaded the `.pem` file.
Git clone the <a href="https://github.com/docker/swarm-microservice-demo-v1"
target="_blank">example application's GitHub repo</a> to your local machine. If
you prefer, you can instead [download a `zip`
file](https://github.com/docker/swarm-microservice-demo-v1/archive/master.zip)
and unzip the code in your local environment.
## Step 1. Build and configure the VPC
This step shows you using the VPC wizard on Amazon. If you prefer to build the
VPC manually, configure your VPC with the following values:
| Field | Value |
|---------------------------|------------------------------------------------------------------------------------------------|
| **VPC Network (CIDR)** | 192.168.0.0/16 |
| **VPC Name** | swarm-scale |
| **Subnet network (CIDR)** | 192.168.33.0/24 |
| **Availability Zone** | N. California (us-west-1a or b) |
| **Subnet name** | publicSwarm |
| **DNS resolution** | Yes |
| **Subnet type** | Public (with route to the internet) |
| **Availability Zone** | Any |
| **Auto-assign public IP** | Yes |
| **Router** | A single router with a route for *local* traffic and default route for traffic to the internet |
| **Internet gateway** | A single internet gateway used as default route for the subnet's routing table |
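If you prefer the AWS CLI to the VPC wizard, the following partial sketch creates an equivalent VPC and public subnet. It is illustrative only: the `<...>` values are placeholders for the IDs AWS returns, and you should still confirm the results (including DNS resolution) in the console.

```bash
# Create the VPC and its public subnet with the values from the table above.
aws ec2 create-vpc --region us-west-1 --cidr-block 192.168.0.0/16
aws ec2 create-subnet --region us-west-1 --vpc-id <vpc-id> \
    --cidr-block 192.168.33.0/24 --availability-zone us-west-1a

# Attach an internet gateway and add a default route for the subnet.
aws ec2 create-internet-gateway --region us-west-1
aws ec2 attach-internet-gateway --region us-west-1 --internet-gateway-id <igw-id> --vpc-id <vpc-id>
aws ec2 create-route --region us-west-1 --route-table-id <route-table-id> \
    --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>

# Auto-assign public IPs to instances launched into the subnet.
aws ec2 modify-subnet-attribute --region us-west-1 --subnet-id <subnet-id> --map-public-ip-on-launch
```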
To build the VPC with the wizard:
1. Go to the VPC dashboard.
2. Choose **Start VPC Wizard**.
3. Make sure **VPC with a Single Public Subnet** is selected.
![](../images/vpc-create-1.png)
4. Click **Select**.
The browser displays the **Step 2: VPC with a Single Public Subnet** dialog.
5. Complete the dialog as follows:
![](../images/vpc-create-2.png)
6. Click **Create VPC**.
AWS works to build the VPC and then presents you with the **VPC Successfully
Created** page.
7. Click **OK**.
8. Choose **Subnets** from the **VPC Dashboard** menu.
9. Locate your `publicSwarm` subnet.
10. Choose **Subnet Actions > Modify Auto-Assign Public IP**.
![](../images/vpc-create-3.png)
11. Select **Enable auto-assign Public IP** and click **Save**.
In the next step, you configure the remaining AWS settings by using a
CloudFormation template.
## Step 2. Build the network stack
In this step, you use a CloudFormation template to build a stack on AWS. Before
you begin, make sure you have the prerequisites:
- access to a private key pair associated with your AWS account.
- a clone or download of the <a
href="https://github.com/docker/swarm-microservice-demo-v1" target="_blank">the
example code</a> on your local machine.
Then, do the following:
1. Go to the AWS console and choose **CloudFormation**.
![](../images/vpc-create-4.png)
2. Click **Create Stack**.
3. Under **Choose a template** click the **Choose file** button.
4. Browse to the downloaded sample code and choose the `swarm-microservice-demo-v1/AWS/cloudformation.json` CloudFormation template.
![](../images/vpc-create-5.png)
5. Click **Next**.
The system pre-populates most of the **Specify Details** dialog from the template.
6. Name the stack `VotingAppStack`.
You can name the stack something else if you want; just make sure the name is meaningful.
7. Select your key pair from the **KeyName** dropdown.
8. Select `publicSwarm` from the **Subnetid** dropdown menu.
9. Select `swarm-scale` from the **Vpcid** dropdown menu.
10. Click **Next** twice to reach the **Review** page.
11. Check the values.
The **Template URL**, **SubnetId**, and **VpcId** are always unique, so yours
will not match, but otherwise you should see the following:
![](../images/vpc-create-6.png)
12. Click **Create**.
AWS displays the progress of your stack being created.
![](../images/create-stack.gif)
## Step 3. Check your deployment
When it completes, the CloudFormation template populates your VPC with six EC2 instances.
| Instance | Size | Private IP Address |
|--------------|-----------|--------------------|
| `frontend01` | t2.micro | 192.168.33.20 |
| `frontend02` | t2.micro | 192.168.33.21 |
| `interlock` | t2.micro | 192.168.33.12 |
| `manager` | t2.micro | 192.168.33.11 |
| `store` | m3.medium | 192.168.33.250 |
| `worker01` | t2.micro | 192.168.33.200 |
Navigate to the EC2 dashboard to view them running.
![](../images/vpc-create-7.png)
The underlying AWS infrastructure has this configuration.
![](../images/aws-infrastructure.jpg)
All instances are based on the `ami-56f59e36` AMI. This is an Ubuntu 14.04 image
with a 3.13 kernel and 1.10.2 version of the Docker Engine installed. Each Engine
daemon was pre-configured via the `/etc/default/docker` file using the following
`DOCKER_OPTS` values.
```
--cluster-store=consul://192.168.33.11:8500 --cluster-advertise=eth0:2375 -H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock
```
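Put together, the corresponding line in each node's `/etc/default/docker` looks roughly like the following (a sketch; the file may contain other settings as well):

```
DOCKER_OPTS="--cluster-store=consul://192.168.33.11:8500 --cluster-advertise=eth0:2375 -H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock"
```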
## Next step
At this point your infrastructure stack is created successfully. You are ready
to progress to the next step and [build the Swarm
cluster](03-create-cluster.md).


@@ -1,333 +0,0 @@
<!--[metadata]>
+++
title = "Setup cluster resources"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, certificates"]
[menu.main]
parent="scale_swarm"
weight=-89
+++
<![end-metadata]-->
# Setup cluster resources
Now that [your underlying network infrastructure is built](02-deploy-infra.md),
you can deploy and configure the Swarm cluster. A host in a Swarm cluster is
called a *node*. So, these instructions refer to each AWS EC2 instance as a
node and refer to each node by the **Name** that appears in your EC2
Dashboard.
![](../images/vpc-create-7.png)
The steps on this page construct a Swarm cluster by:
* using Consul as the discovery backend
* joining the `frontend`, `worker`, and `store` EC2 instances to the cluster
* using the `spread` scheduling strategy
You'll perform all the configuration steps from the Swarm manager node. The
manager node has access to all the instances in the cluster.
## Step 1: Construct the cluster
In this step, you create a Consul container for use as the Swarm discovery
service. The Consul backend is also used as the K/V store for the container
network that you overlay on the Swarm cluster in a later step. After you launch
the Consul container, you launch a Swarm manager container.
1. Select the `manager` node and click **Connect** to display the `ssh` command you'll need.
![](../images/ssh-dialog.gif)
2. Open a terminal on your `manager` node with the `ssh` command.
3. Start a Consul container that listens on TCP port `8500`.
$ docker run --restart=unless-stopped -d -p 8500:8500 -h consul progrium/consul -server -bootstrap
Unable to find image 'progrium/consul:latest' locally
latest: Pulling from progrium/consul
3b4d28ce80e4: Pull complete
...<output snip>...
d9125e9e799b: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
6136de6829916430ee911cc14e3f067624fcf6d3041d404930f6dbcbf5399f7d
4. Confirm the container is running.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6136de682991 progrium/consul "/bin/start -server -" About a minute ago Up About a minute 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 0.0.0.0:8500->8500/tcp, 8301-8302/udp goofy_jepsen
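Optionally, you can also check that Consul is answering on its HTTP API before moving on (a quick sanity check, assuming `curl` is installed on the node):

```bash
# The status endpoint returns the address of the current Consul leader,
# something like "192.168.33.11:8300" for this single-server setup.
$ curl http://192.168.33.11:8500/v1/status/leader
```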
5. Start a Swarm manager container.
This command maps port `3375` on the `manager` node to port `2375` in the
Swarm manager container.
$ docker run --restart=unless-stopped -d -p 3375:2375 swarm manage consul://192.168.33.11:8500/
Unable to find image 'swarm:latest' locally
latest: Pulling from library/swarm
887115b43fc0: Pull complete
...<output snip>...
f3f134eb6413: Pull complete
Digest: sha256:51a8eba9502f1f89eef83e10b9f457cfc67193efc3edf88b45b1e910dc48c906
Status: Downloaded newer image for swarm:latest
f5f093aa9679410bee74b7f3833fb01a3610b184a530da7585c990274bd65e7e
The Swarm manager container is the heart of your Swarm cluster. It is
responsible for receiving all Docker commands sent to the cluster, and for
scheduling resources against the cluster. In a real-world production
deployment you would configure additional replica Swarm managers as
secondaries for high availability (HA).
6. Get information about your Docker installation.
$ docker info
Containers: 2
Images: 26
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 30
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-57-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 1
Total Memory: 992.2 MiB
Name: manager
ID: IISM:V4KJ:VXCT:ONN3:MFIJ:2ZLD:VI6I:UYB3:FJZ4:3O7J:FHKA:P3XS
WARNING: No swap limit support
Cluster store: consul://192.168.33.11:8500
Cluster advertise: 192.168.33.11:2375
The command returns information about the Engine and its daemon.
7. Confirm that you have the `consul` and `swarm manage` containers running.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f5f093aa9679 swarm "/swarm manage consul" About a minute ago Up About a minute 0.0.0.0:3375->2375/tcp sad_goldstine
6136de682991 progrium/consul "/bin/start -server -" 9 minutes ago Up 9 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8301-8302/udp, 8400/tcp, 0.0.0.0:8500->8500/tcp goofy_jepsen
7. Set the `DOCKER_HOST` environment variable.
This ensures that the default endpoint for Docker Engine CLI commands is the
Engine daemon running on the `manager` node.
$ export DOCKER_HOST="tcp://192.168.33.11:3375"
8. Now that your terminal environment is set to the Swarm port, rerun the
`docker info` command.
$ docker info
Containers: 0
Images: 0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 0
Kernel Version: 3.16.0-57-generic
Operating System: linux
CPUs: 0
Total Memory: 0 B
Name: f5f093aa9679
The command is acting on the Swarm port, so it returns information about the
entire cluster. You have a manager and no nodes.
9. While still on the `manager` node, join each node one-by-one to the cluster.
You can run these commands to join each node from the `manager` node command
line. The `-H` flag with the `docker` command specifies a node IP address
and the Engine port. Each command goes over the cluster to the node's Docker
daemon. The `join` command joins a node to the cluster and registers it with
the Consul discovery service.
**frontend01**:
docker -H=tcp://192.168.33.20:2375 run -d swarm join --advertise=192.168.33.20:2375 consul://192.168.33.11:8500/
**frontend02**:
docker -H=tcp://192.168.33.21:2375 run -d swarm join --advertise=192.168.33.21:2375 consul://192.168.33.11:8500/
**worker01**:
docker -H=tcp://192.168.33.200:2375 run -d swarm join --advertise=192.168.33.200:2375 consul://192.168.33.11:8500/
**store**:
docker -H=tcp://192.168.33.250:2375 run -d swarm join --advertise=192.168.33.250:2375 consul://192.168.33.11:8500/
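If you prefer, you can issue the same four `join` commands with a short loop from the `manager` node (an equivalent sketch using the addresses above):

```bash
# Join frontend01, frontend02, worker01, and store to the cluster.
for ip in 192.168.33.20 192.168.33.21 192.168.33.200 192.168.33.250; do
    docker -H=tcp://${ip}:2375 run -d swarm join \
        --advertise=${ip}:2375 consul://192.168.33.11:8500/
done
```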
9. Run the `docker info` command again to view your cluster with all its nodes.
$ docker info
Containers: 4
Images: 4
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 4
frontend01: 192.168.33.20:2375
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.017 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.16.0-57-generic, operatingsystem=Ubuntu 14.04.3 LTS, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-14T19:02:53Z
frontend02: 192.168.33.21:2375
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.017 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.16.0-57-generic, operatingsystem=Ubuntu 14.04.3 LTS, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-14T19:02:58Z
store: 192.168.33.250:2375
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 3.86 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.16.0-57-generic, operatingsystem=Ubuntu 14.04.3 LTS, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-14T19:02:58Z
worker01: 192.168.33.200:2375
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.017 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.16.0-57-generic, operatingsystem=Ubuntu 14.04.3 LTS, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-14T19:03:21Z
Kernel Version: 3.16.0-57-generic
Operating System: linux
CPUs: 4
Total Memory: 6.912 GiB
Name: f5f093aa9679
## Step 2: Review your work
The diagram below shows the Swarm cluster that you created.
![](../images/review-work.jpg)
The `manager` node is running two containers: `consul` and `swarm`. The `consul`
container is providing the Swarm discovery service. This is where nodes and
services register themselves and discover each other. The `swarm` container is
running the `swarm manage` process which makes it act as the cluster manager.
The manager is responsible for accepting Docker commands issued against the
cluster and scheduling resources on the cluster.
You mapped port 3375 on the `manager` node to port 2375 inside the `swarm`
container. As a result, Docker clients (for example the CLI) wishing to issue
commands against the cluster must send them to the `manager` node on port
3375. The `swarm` container then executes those commands against the relevant
node(s) in the cluster over port 2375.
Now that you have your Swarm cluster configured, you'll overlay the container
network that the application containers will be part of.
## Step 3: Overlay a container network
All containers that are part of the voting application should belong to a
container network called `mynet`. Container networking is a Docker Engine
feature. The `mynet` network is an overlay network that runs on top of the
communication network.
A container network can span multiple hosts on multiple networks. As a result,
the `mynet` container network allows all the voting application containers to
easily communicate irrespective of the underlying communication network that
each node is on.
You can create the network and join the containers from any node in your VPC
that is running Docker Engine. However, best practice when using Docker Swarm is
to execute commands from the `manager` node, as this is where all management
tasks happen.
1. If you haven't already, `ssh` into a terminal on your `manager` node.
2. Get information about the network running on just the `manager` node.
You do this by passing the `-H` flag to restrict the Engine CLI to just
the manager node.
$ docker -H=tcp://192.168.33.11:2375 network ls
NETWORK ID NAME DRIVER
d01e8f0303a9 none null
bab7a9c42a23 host host
12fd2a115d85 bridge bridge
3. Now, run the same command to get cluster information.
Provided you set `export DOCKER_HOST="tcp://192.168.33.11:3375"`, the
command directs to the Swarm port and returns information from each node in
the cluster.
$ docker network ls
NETWORK ID NAME DRIVER
82ce2adce6a7 store/bridge bridge
c3ca43d2622d store/host host
d126c4b1e092 frontend01/host host
6ea89a1a5b6a frontend01/bridge bridge
d3ddb830d7f5 frontend02/host host
44f238353c14 frontend02/bridge bridge
c500baec447e frontend02/none null
77897352f138 store/none null
8060bd575770 frontend01/none null
429e4b8c2c8d worker01/bridge bridge
57c5001aca83 worker01/none null
2101103b8494 worker01/host host
4. Create the overlay network with the `docker network` command:
$ docker network create --driver overlay mynet
5. Repeat the two network commands again to see how the network list has changed.
docker network ls
docker -H=tcp://192.168.33.11:2375 network ls
As all Swarm nodes in your environment are configured to use the Consul
discovery service at `consul://192.168.33.11:8500`, they all should see the
new overlay network. Verify this with the next step.
6. Try running a network command on an individual node. For example, to run it on
the `frontend01` node:
docker -H=tcp://192.168.33.20:2375 network ls
You should see an entry for the `mynet` network using the `overlay` driver
as shown above. You would get the same output if you ran the command from the
node's command line.
## Step 4: Review your work
The diagram below shows the complete cluster configuration including the overlay
container network, `mynet`. The `mynet` network is shown in red and is available to all
Docker hosts using the Consul discovery backend. Later in the procedure you will
connect containers to this network.
![](../images/overlay-review.jpg)
The `swarm` and `consul` containers on the `manager` node are not attached to
the `mynet` overlay network. These containers are running on the `manager` node's
default `bridge` network. To verify this, try these two commands:
docker -H=tcp://192.168.33.11:2375 network inspect bridge
docker -H=tcp://192.168.33.11:2375 network inspect mynet
You should find two containers running on the `manager` node's `bridge` network
but nothing yet on the `mynet` network.
## Next Step
Your Swarm cluster is now built and you are ready to [build and run the voting
application](04-deploy-app.md) on it.


@@ -1,306 +0,0 @@
<!--[metadata]>
+++
title = "Deploy the application"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, certificates"]
[menu.main]
parent="scale_swarm"
weight=-80
+++
<![end-metadata]-->
# Deploy the application
You've [built a Swarm cluster](03-create-cluster.md) so now you are ready to
build and deploy the voting application itself.
## Step 1: Learn about the images
Some of the application's containers are launched from existing images pulled
directly from Docker Hub. Other containers are launched from custom images you
must build. The list below shows which containers use custom images and which do
not:
- Load balancer container: stock image (`ehazlett/interlock`)
- Redis containers: stock image (official `redis` image)
- Postgres (PostgreSQL) containers: stock image (official `postgres` image)
- Web containers: custom built image
- Worker containers: custom built image
- Results containers: custom built image
All custom built images are built using Dockerfiles pulled from the
[example application's public GitHub
repository](https://github.com/docker/swarm-microservice-demo-v1).
1. If you haven't already, `ssh` into the Swarm `manager` node.
2. Clone the [application's GitHub repo](https://github.com/docker/swarm-microservice-demo-v1).
$ git clone https://github.com/docker/swarm-microservice-demo-v1
sudo: unable to resolve host master
Cloning into 'swarm-microservice-demo-v1'...
remote: Counting objects: 304, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 304 (delta 5), reused 0 (delta 0), pack-reused 287
Receiving objects: 100% (304/304), 2.24 MiB | 2.88 MiB/s, done.
Resolving deltas: 100% (132/132), done.
Checking connectivity... done.
This command creates a new directory structure inside of your working
directory. The new directory contains all of the files and folders required
to build the voting application images.
The `AWS` directory contains the `cloudformation.json` file used to deploy
the EC2 instances. The `Vagrant` directory contains files and instructions
required to deploy the application using Vagrant. The `results-app`,
`vote-worker`, and `web-vote-app` directories contain the Dockerfiles and
other files required to build the custom images for those particular
components of the application.
3. Change directory into the `swarm-microservice-demo-v1/web-vote-app` directory.
$ cd swarm-microservice-demo-v1/web-vote-app/
4. View the Dockerfile contents.
$ cat Dockerfile
# Using official python runtime base image
FROM python:2.7
# Set the application directory
WORKDIR /app
# Install our requirements.txt
ADD requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
# Copy our code from the current folder to /app inside the container
ADD . /app
# Make port 80 available for links and/or publish
EXPOSE 80
# Define our command to be run when launching the container
CMD ["python", "app.py"]
As you can see, the image is based on the official `python:2.7` tagged
image, adds a requirements file into the `/app` directory, installs
requirements, copies files from the build context into the container,
exposes port `80` and tells the container which command to run.
5. Spend time investigating the other parts of the application by viewing the `results-app/Dockerfile` and the `vote-worker/Dockerfile` in the application.
## Step 2. Build custom images
1. If you haven't already, `ssh` into the Swarm `manager` node.
2. Make sure you have `DOCKER_HOST` set.
$ export DOCKER_HOST="tcp://192.168.33.11:3375"
3. Change to the root of your `swarm-microservice-demo-v1` clone.
4. Build the `web-vote-app` image on both of the frontend nodes.
**frontend01**:
$ docker -H tcp://192.168.33.20:2375 build -t web-vote-app ./web-vote-app
**frontend02**:
$ docker -H tcp://192.168.33.21:2375 build -t web-vote-app ./web-vote-app
These commands build the `web-vote-app` image on the `frontend01` and
`frontend02` nodes. To accomplish the operation, each command copies the
contents of the `swarm-microservice-demo-v1/web-vote-app` sub-directory from the
`manager` node to each frontend node. The command then instructs the
Docker daemon on each frontend node to build the image and store it locally.
You'll notice this example uses a `-H` flag to build the image on a specific
host. This is to help you conceptualize the architecture for this sample. In
a production deployment, you'd omit this option and rely on the Swarm
manager to distribute the image. The manager would pull the image to every
node, so that any node can step in to run the image as needed.
It may take a minute or so for each image to build. Wait for the builds to finish.
5. Build the `vote-worker` image on the `worker01` node.
$ docker -H tcp://192.168.33.200:2375 build -t vote-worker ./vote-worker
It may take a minute or so for the image to build. Wait for the build to
finish.
6. Build the `results-app` image on the `store` node.
$ docker -H tcp://192.168.33.250:2375 build -t results-app ./results-app
Each of the *custom images* required by the application is now built and stored
locally on the nodes that will use them.
## Step 3. Pull images from Docker Hub
For performance reasons, it is always better to pull any required Docker Hub
images locally on each instance that needs them. This ensures that containers
based on those images can start quickly.
1. Log into the Swarm `manager` node.
2. Pull the `redis` image to your frontend nodes.
**frontend01**:
$ docker -H tcp://192.168.33.20:2375 pull redis
**frontend02**:
$ docker -H tcp://192.168.33.21:2375 pull redis
3. Pull the `postgres` image to the `store` node.
$ docker -H tcp://192.168.33.250:2375 pull postgres
4. Pull the `ehazlett/interlock` image to the `interlock` node.
$ docker -H tcp://192.168.33.12:2375 pull ehazlett/interlock
Each node in the cluster, as well as the `interlock` node, now has the required images stored locally as shown below.
![](../images/interlock.jpg)
Now that all images are built, pulled, and stored locally, the next step is to start the application.
## Step 4. Start the voting application
In the following steps, you launch the containers that make up the voting application.
1. If you haven't already, `ssh` into the Swarm `manager` node.
2. Start the `interlock` container on the `interlock` node.
$ docker -H tcp://192.168.33.12:2375 run --restart=unless-stopped -p 80:80 --name interlock -d ehazlett/interlock --swarm-url tcp://192.168.33.11:3375 --plugin haproxy start
This command is issued against the `interlock` instance and maps port 80 on the instance to port 80 inside the container. This allows the container to load balance connections coming in over port 80 (HTTP). The command also applies the `--restart=unless-stopped` policy to the container, telling Docker to restart the container if it exits unexpectedly.
3. Verify the container is running.
$ docker -H tcp://192.168.33.12:2375 ps
4. Start a `redis` container on your front end nodes.
**frontend01**:
$ docker run --restart=unless-stopped --env="constraint:node==frontend01" -p 6379:6379 --name redis01 --net mynet -d redis
$ docker -H tcp://192.168.33.20:2375 ps
**frontend02**:
$ docker run --restart=unless-stopped --env="constraint:node==frontend02" -p 6379:6379 --name redis02 --net mynet -d redis
$ docker -H tcp://192.168.33.21:2375 ps
These two commands are issued against the Swarm cluster. The commands specify *node constraints*, forcing Swarm to start the containers on `frontend01` and `frontend02`. Port 6379 on each instance is mapped to port 6379 inside of each container for debugging purposes. The command also applies the `--restart=unless-stopped` policy to the containers and attaches them to the `mynet` overlay network.
5. Start a `web-vote-app` container on each of the frontend nodes.
**frontend01**:
$ docker run --restart=unless-stopped --env="constraint:node==frontend01" -d -p 5000:80 -e WEB_VOTE_NUMBER='01' --name frontend01 --net mynet --hostname votingapp.local web-vote-app
**frontend02**:
$ docker run --restart=unless-stopped --env="constraint:node==frontend02" -d -p 5000:80 -e WEB_VOTE_NUMBER='02' --name frontend02 --net mynet --hostname votingapp.local web-vote-app
These two commands are issued against the Swarm cluster. The commands
specify *node constraints*, forcing Swarm to start the containers on
`frontend01` and `frontend02`. Port `5000` on each node is mapped to port
`80` inside of each container. This allows connections to come in to each
node on port `5000` and be forwarded to port `80` inside of each container.
Both containers are attached to the `mynet` overlay network and both
containers are given the `votingapp.local` hostname. The
`--restart=unless-stopped` policy is also applied to these containers.
6. Start the `postgres` container on the `store` node.
$ docker run --restart=unless-stopped --env="constraint:node==store" --name pg -e POSTGRES_PASSWORD=pg8675309 --net mynet -p 5432:5432 -d postgres
This command is issued against the Swarm cluster and starts the container on
`store`. It maps port 5432 on the `store` node to port 5432 inside the
container and attaches the container to the `mynet` overlay network. The
command also inserts the database password into the container via the
`POSTGRES_PASSWORD` environment variable and applies the
`--restart=unless-stopped` policy to the container.
Sharing passwords like this is not recommended for production use cases.
7. Start the `worker01` container on the `worker01` node.
$ docker run --restart=unless-stopped --env="constraint:node==worker01" -d -e WORKER_NUMBER='01' -e FROM_REDIS_HOST=1 -e TO_REDIS_HOST=2 --name worker01 --net mynet vote-worker
This command is issued against the Swarm manager and uses a constraint to
start the container on the `worker01` node. It passes configuration data
into the container via environment variables, telling the worker container
to clear the queues on `frontend01` and `frontend02`. It adds the container
to the `mynet` overlay network and applies the `--restart=unless-stopped`
policy to the container.
8. Start the `results-app` container on the `store` node.
$ docker run --restart=unless-stopped --env="constraint:node==store" -p 80:80 -d --name results-app --net mynet results-app
This command starts the results-app container on the `store` node by means
of a *node constraint*. It maps port 80 on the `store` node to port 80
inside the container. It adds the container to the `mynet` overlay network
and applies the `--restart=unless-stopped` policy to the container.
The application is now fully deployed as shown in the diagram below.
![](../images/fully-deployed.jpg)
## Step 5. Test the application
Now that the application is deployed and running, it's time to test it. To do
this, you configure a DNS mapping on the machine where you are running your web
browser. This maps the "votingapp.local" DNS name to the public IP address of
the `interlock` node.
1. Configure the DNS name resolution on your local machine for browsing.
- On Windows machines this is done by adding `<interlock-public-ip> votingapp.local` to the `C:\Windows\System32\Drivers\etc\hosts` file. Modifying this file requires administrator privileges. To open the file with administrator privileges, right-click `C:\Windows\System32\notepad.exe` and select **Run as administrator**. Once Notepad is open, click **File** > **Open**, open the file, and make the edit.
- On OSX machines this is done by adding `<interlock-public-ip> votingapp.local` to `/private/etc/hosts`.
- On most Linux machines this is done by adding `<interlock-public-ip> votingapp.local` to `/etc/hosts`.
Be sure to replace `<interlock-public-ip>` with the public IP address of
your `interlock` node. You can find the `interlock` node's Public IP by
selecting your `interlock` EC2 Instance from within the AWS EC2 console.
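For example, if your `interlock` node's public IP were the address shown in the ping output below, the line you add would look like this (IP first, then the name):

```
54.183.164.230    votingapp.local
```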
2. Verify the mapping worked with a `ping` command from your local machine.
ping votingapp.local
Pinging votingapp.local [54.183.164.230] with 32 bytes of data:
Reply from 54.183.164.230: bytes=32 time=164ms TTL=42
Reply from 54.183.164.230: bytes=32 time=163ms TTL=42
Reply from 54.183.164.230: bytes=32 time=169ms TTL=42
3. Point your web browser to [http://votingapp.local](http://votingapp.local)
![](../images/vote-app-test.jpg)
Notice the text at the bottom of the web page. This shows which web
container serviced the request. In the diagram above, this is `frontend02`.
If you refresh your web browser you should see this change as the Interlock
load balancer shares incoming requests across both web containers.
To see more detailed load balancer data from the Interlock service, point your web browser to [http://stats:interlock@votingapp.local/haproxy?stats](http://stats:interlock@votingapp.local/haproxy?stats)
![](../images/proxy-test.jpg)
4. Cast your vote. It is recommended to choose "Dogs" ;-)
5. To see the results of the poll, you can point your web browser at the public IP of the `store` node.
![](../images/poll-results.jpg)
## Next steps
Congratulations. You have successfully walked through manually deploying a
microservice-based application to a Swarm cluster. Of course, not every
deployment goes smoothly. Now that you've learned how to successfully deploy an
application at scale, you should learn [what to consider when troubleshooting
large applications running on a Swarm cluster](05-troubleshoot.md).


@@ -0,0 +1,109 @@
<!--[metadata]>
+++
aliases = ["/swarm/swarm_at_scale/about/"]
title = "Learn the application architecture"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, architecture"]
[menu.main]
parent="scale_swarm"
weight=-99
+++
<![end-metadata]-->
# Learn the application architecture
On this page, you learn about the Swarm at scale example. Make sure you have
read through [the introduction](index.md) to get an idea of the skills and time
required first.
## Learn the example back story
Your company is a pet food company that has bought a commercial during the
Superbowl. The commercial drives viewers to a web survey that asks users to vote
&ndash; cats or dogs. You are developing the web survey.
Your survey must ensure that millions of people can vote concurrently without
your website becoming unavailable. You don't need real-time results because a company
press release announces the results. However, you do need confidence that every
vote is counted.
## Understand the application architecture
The voting application is composed of several microservices. It uses a parallel
web frontend that sends jobs to asynchronous background workers. The
application's design can accommodate arbitrarily large scale. The diagram below
shows the application's high-level architecture:
![](../images/app-architecture.png)
All the servers are running Docker Engine. The entire application is fully
"dockerized" in that all services are running inside of containers.
The frontend consists of a load balancer with *N* frontend instances. Each
frontend consists of a web server and a Redis queue. The load balancer can
handle an arbitrary number of web containers behind it (`frontend01`-
`frontendN`). The web containers run a simple Python application that takes a
vote between two options and queues the votes to a Redis container running on
the datastore.
Behind the frontend is a worker tier which runs on separate nodes. This tier:
* scans the Redis containers
* dequeues votes
* deduplicates votes to prevent double voting
* commits the results to a Postgres database
Just like the frontend, the worker tier can also scale arbitrarily. The worker
count and frontend count are independent from each other.
The application's dockerized microservices are deployed to a container network.
Container networks are a feature of Docker Engine that allows communication
between multiple containers across multiple Docker hosts.
## Swarm Cluster Architecture
To support the application, the design calls for a Swarm cluster with a single
Swarm manager and four nodes as shown below.
![](../images/swarm-cluster-arch.png)
All four nodes in the cluster are running the Docker daemon, as is the Swarm
manager and the load balancer. The Swarm manager is part of the cluster and is
considered out of band for the application. A single host running the Consul
server acts as a key-value store for both Swarm discovery and for the container
network. The load balancer could be placed inside of the cluster, but for this
demonstration it is not.
After completing the example and deploying your application, this
is what your environment should look like.
![](../images/final-result.png)
As the previous diagram shows, each node in the cluster runs the following containers:
- `frontend01`:
- Container: voting-app
- Container: Swarm agent
- `frontend02`:
- Container: voting-app
- Container: Swarm agent
- `worker01`:
- Container: voting-app-worker
- Container: Swarm agent
- `dbstore`:
- Container: voting-app-result-app
- Container: db (Postgres 9.4)
- Container: redis
- Container: Swarm agent
After deploying the application, you'll configure your local system so that you
can test the application from your local browser. In production, of course, this
step wouldn't be needed.
## Next step
Now that you understand the application architecture, you need to deploy a
network configuration that can support it. In the next step, you
[deploy network infrastructure](deploy-infra.md) for use in this sample.


@@ -0,0 +1,426 @@
<!--[metadata]>
+++
aliases = ["/swarm/swarm_at_scale/04-deploy-app/"]
title = "Deploy the application"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, certificates"]
[menu.main]
parent="scale_swarm"
weight=-80
+++
<![end-metadata]-->
# Deploy the application
You've [deployed the load balancer, the discovery backend, and a Swarm
cluster](deploy-infra.md) so now you can build and deploy the voting application
itself. You do this by starting a number of "dockerized applications" running in
containers.
The diagram below shows the final application configuration including the overlay
container network, `voteapp`.
![](../images/final-result.png)
In this procedure you will connect containers to this network. The `voteapp`
network is available to all Docker hosts using the Consul discovery backend.
Notice that the `interlock`, `nginx`, `consul`, and `swarm manager` containers
are not part of the `voteapp` overlay container network.
## Task 1. Set up volume and network
This application relies on both an overlay container network and a container
volume. The Docker Engine provides these two features. You'll create them both
on the Swarm `manager` instance.
1. Direct your local environment to the Swarm manager host.
```bash
$ eval $(docker-machine env manager)
```
You can create the network on any cluster node and the network is visible on
them all.
2. Create the `voteapp` container network.
```bash
$ docker network create -d overlay voteapp
```
3. Switch to the `dbstore` node.
```bash
$ eval $(docker-machine env dbstore)
```
4. Verify you can see the new network from the dbstore node.
```bash
$ docker network ls
NETWORK ID NAME DRIVER
e952814f610a voteapp overlay
1f12c5e7bcc4 bridge bridge
3ca38e887cd8 none null
3da57c44586b host host
```
5. Create a container volume called `db-data`.
```bash
$ docker volume create --name db-data
```
## Task 2. Start the containerized microservices
At this point, you are ready to start the component microservices that make up
the application. Some of the application's containers are launched from existing
images pulled directly from Docker Hub. Other containers are launched from
custom images you must build. The list below shows which containers use custom
images and which do not:
- Load balancer container: stock image (`ehazlett/interlock`)
- Redis containers: stock image (official `redis` image)
- Postgres (PostgreSQL) containers: stock image (official `postgres` image)
- Web containers: custom built image
- Worker containers: custom built image
- Results containers: custom built image
You can launch these containers from any host in the cluster using the commands
in this section. Each command includes a `-H` flag so that it executes against
the Swarm manager.
The commands also all use the `-e` flag to set a Swarm constraint. The
constraint tells the manager to look for a node with a matching function label.
You established the labels when you created the nodes. As you run each
command below, look for the constraint value.
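If you want to confirm which label each node carries before starting the containers, you can check the Swarm manager's view of the cluster (a quick sketch; the `grep` is only there to trim the output):

```bash
# Each node entry in the Swarm `docker info` output includes a Labels line;
# the com.function label is what the -e constraint flags match against.
$ docker -H $(docker-machine ip manager):3376 info | grep -i labels
```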
1. Start a Postgres database container.
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-v db-data:/var/lib/postgresql/data \
-e constraint:com.function==dbstore \
--net="voteapp" \
--name db postgres:9.4
```
2. Start the Redis container.
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-p 6379:6379 \
-e constraint:com.function==dbstore \
--net="voteapp" \
--name redis redis
```
The `redis` name is important, so don't change it.
3. Start the worker application.
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-e constraint:com.function==worker01 \
--net="voteapp" \
--net-alias=workers \
--name worker01 docker/example-voting-app-worker
```
4. Start the results application.
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-p 80:80 \
--label=interlock.hostname=results \
--label=interlock.domain=myenterprise.com \
-e constraint:com.function==dbstore \
--net="voteapp" \
--name results-app docker/example-voting-app-result-app
```
5. Start the voting application twice, once on each frontend node.
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-p 80:80 \
--label=interlock.hostname=vote \
--label=interlock.domain=myenterprise.com \
-e constraint:com.function==frontend01 \
--net="voteapp" \
--name voting-app01 docker/example-voting-app-voting-app
```
And again on the other frontend node.
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-p 80:80 \
--label=interlock.hostname=vote \
--label=interlock.domain=myenterprise.com \
-e constraint:com.function==frontend02 \
--net="voteapp" \
--name voting-app02 docker/example-voting-app-voting-app
```
## Task 3. Check your work and update /etc/hosts
In this step, you check your work to make sure the Nginx configuration recorded
the containers correctly. You'll update your local system's `/etc/hosts` file to
allow you to take advantage of the load balancer.
1. Change to the `loadbalancer` node.
```bash
$ eval $(docker-machine env loadbalancer)
```
2. Check your work by reviewing the configuration of nginx.
```
$ docker exec interlock cat /etc/conf/nginx.conf
... output snipped ...
upstream results.myenterprise.com {
zone results.myenterprise.com_backend 64k;
server 192.168.99.111:80;
}
server {
listen 80;
server_name results.myenterprise.com;
location / {
proxy_pass http://results.myenterprise.com;
}
}
upstream vote.myenterprise.com {
zone vote.myenterprise.com_backend 64k;
server 192.168.99.109:80;
server 192.168.99.108:80;
}
server {
listen 80;
server_name vote.myenterprise.com;
location / {
proxy_pass http://vote.myenterprise.com;
}
}
include /etc/conf/conf.d/*.conf;
}
```
The `http://vote.myenterprise.com` site configuration should point to either
frontend node. Requests to `http://results.myenterprise.com` go just to the
single `dbstore` node where the `example-voting-app-result-app` is running.
3. On your local host, edit the `/etc/hosts` file to add entries resolving both these
sites to your `loadbalancer` node's IP address (see the sketch after this list).
4. Save and close the `/etc/hosts` file.
5. Restart the `nginx` container.
A manual restart is required because the current Interlock server does not force an
Nginx configuration reload.
```bash
$ docker restart nginx
```
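The entries referenced in step 3 map both site names to the `loadbalancer` node's IP address. A sketch, using a placeholder address (run `docker-machine ip loadbalancer` to find yours):

```
<loadbalancer-ip>    vote.myenterprise.com results.myenterprise.com
```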
## Task 4. Test the application
Now, you can test your application.
1. Open a browser and navigate to the `http://vote.myenterprise.com` site.
You should see something similar to the following:
![](../images/vote-app-test.png)
2. Click on one of the two voting options.
3. Navigate to the `http://results.myenterprise.com` site to see the results.
4. Try changing your vote.
You'll see both sides change as you switch your vote.
![](../images/votes.gif)
## Extra Credit: Deployment with Docker Compose
Up to this point, you've deployed each application container individually. This
can be cumbersome, especially because there are several different containers and
starting them is order dependent. For example, the database should be running
before the worker.
Docker Compose lets you define your microservice containers and their
dependencies in a Compose file. Then, you can use the Compose file to start all
the containers at once. This extra credit exercise shows you how.
1. Before you begin, stop all the containers you started.
a. Set the host to the manager.
$ DOCKER_HOST=$(docker-machine ip manager):3376
b. List all the application containers on the Swarm.
c. Stop and remove each container (a sketch of these two sub-steps follows).
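A minimal sketch of sub-steps b and c, assuming the container names used earlier in this tutorial:

```bash
# List the application containers on the Swarm, then stop and remove them.
$ docker ps
$ docker rm -f db redis worker01 results-app voting-app01 voting-app02
```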
2. Try to create a Compose file on your own by reviewing the tasks in this tutorial.
<a href="https://docs.docker.com/compose/compose-file/#entrypoint" target="_blank">The
version 2 Compose file format</a> is the best to use. Translate each `docker
run` command into a service in the `docker-compose.yml` file. For example,
this command:
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-e constraint:com.function==worker01 \
--net="voteapp" \
--net-alias=workers \
--name worker01 docker/example-voting-app-worker
```
Becomes this in a Compose file.
```
worker:
image: docker/example-voting-app-worker
networks:
voteapp:
aliases:
- workers
```
In general, Compose starts services in the reverse of the order in which they appear in the file.
So, if you want a service to start before all the others, make it the last
service in the file. Because this application relies on a volume and a network,
declare those at the bottom of the file (a sketch follows).
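For example, the volume and network declarations at the bottom of the file might look something like this (a sketch using the `voteapp` and `db-data` names from earlier; the result file linked in the next step is the reference):

```
# Top-level declarations that the services refer to by name.
volumes:
  db-data:

networks:
  voteapp:
```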
3. Check your work against <a href="../docker-compose.yml" target="_blank">this
result file</a>.
4. When you are satisfied, save the `docker-compose.yml` file to your system.
5. Set `DOCKER_HOST` to the Swarm manager.
```bash
$ DOCKER_HOST=$(docker-machine ip manager):3376
```
6. In the same directory as your `docker-compose.yml` file, start the services.
```bash
$ docker-compose up -d
Creating network "scale_voteapp" with the default driver
Creating volume "scale_db-data" with default driver
Pulling db (postgres:9.4)...
worker01: Pulling postgres:9.4... : downloaded
dbstore: Pulling postgres:9.4... : downloaded
frontend01: Pulling postgres:9.4... : downloaded
frontend02: Pulling postgres:9.4... : downloaded
Creating db
Pulling redis (redis:latest)...
dbstore: Pulling redis:latest... : downloaded
frontend01: Pulling redis:latest... : downloaded
frontend02: Pulling redis:latest... : downloaded
worker01: Pulling redis:latest... : downloaded
Creating redis
Pulling worker (docker/example-voting-app-worker:latest)...
dbstore: Pulling docker/example-voting-app-worker:latest... : downloaded
frontend01: Pulling docker/example-voting-app-worker:latest... : downloaded
frontend02: Pulling docker/example-voting-app-worker:latest... : downloaded
worker01: Pulling docker/example-voting-app-worker:latest... : downloaded
Creating scale_worker_1
Pulling voting-app (docker/example-voting-app-voting-app:latest)...
dbstore: Pulling docker/example-voting-app-voting-app:latest... : downloaded
frontend01: Pulling docker/example-voting-app-voting-app:latest... : downloaded
frontend02: Pulling docker/example-voting-app-voting-app:latest... : downloaded
worker01: Pulling docker/example-voting-app-voting-app:latest... : downloaded
Creating scale_voting-app_1
Pulling result-app (docker/example-voting-app-result-app:latest)...
dbstore: Pulling docker/example-voting-app-result-app:latest... : downloaded
frontend01: Pulling docker/example-voting-app-result-app:latest... : downloaded
frontend02: Pulling docker/example-voting-app-result-app:latest... : downloaded
worker01: Pulling docker/example-voting-app-result-app:latest... : downloaded
Creating scale_result-app_1
```
7. Use the `docker ps` command to see the containers on the Swarm cluster.
```bash
$ docker -H $(docker-machine ip manager):3376 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b71555033caa docker/example-voting-app-result-app "node server.js" 6 seconds ago Up 4 seconds 192.168.99.104:32774->80/tcp frontend01/scale_result-app_1
cf29ea21475d docker/example-voting-app-worker "/usr/lib/jvm/java-7-" 6 seconds ago Up 4 seconds worker01/scale_worker_1
98414cd40ab9 redis "/entrypoint.sh redis" 7 seconds ago Up 5 seconds 192.168.99.105:32774->6379/tcp frontend02/redis
1f214acb77ae postgres:9.4 "/docker-entrypoint.s" 7 seconds ago Up 5 seconds 5432/tcp frontend01/db
1a4b8f7ce4a9 docker/example-voting-app-voting-app "python app.py" 7 seconds ago Up 5 seconds 192.168.99.107:32772->80/tcp dbstore/scale_voting-app_1
```
When you started the services manually, you had `voting-app` instances
running on two frontend servers. How many do you have now?
8. Scale your application up by adding some `voting-app` instances.
```bash
$ docker-compose scale voting-app=3
Creating and starting 2 ... done
Creating and starting 3 ... done
```
After you scale up, list the containers on the cluster again.
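As suggested above, list the containers again. One quick way to see just the
voting front ends is to filter on the name prefix Compose assigned to this
project (`scale_` in this walkthrough):
```bash
# Lists only the voting-app service containers across the cluster.
$ docker -H $(docker-machine ip manager):3376 ps | grep scale_voting-app
```
You should now see three `scale_voting-app_N` containers, placed by Swarm's
`spread` strategy.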
9. Change to the `loadbalancer` node.
```bash
$ eval $(docker-machine env loadbalancer)
```
10. Restart the Nginx server.
```bash
$ docker restart nginx
```
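If you are curious what the restart picked up, you can inspect the Nginx
configuration Interlock wrote into the shared `nginx` volume. This is an
optional check; the `upstream` entries should now list backends for the
containers you scaled up.
```bash
# Prints the configuration Interlock generated for Nginx.
$ docker exec nginx cat /etc/conf/nginx.conf
```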
11. Check your work again by visiting `http://vote.myenterprise.com` and
`http://results.myenterprise.com`.
12. You can view the logs on an individual container.
```bash
$ docker -H $(docker-machine ip manager):3376 logs scale_voting-app_1
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger pin code: 285-809-660
192.168.99.103 - - [11/Apr/2016 17:15:44] "GET / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:15:44] "GET /static/stylesheets/style.css HTTP/1.0" 304 -
192.168.99.103 - - [11/Apr/2016 17:15:45] "GET /favicon.ico HTTP/1.0" 404 -
192.168.99.103 - - [11/Apr/2016 17:22:24] "POST / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:23:37] "POST / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:23:39] "POST / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:23:40] "POST / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:23:41] "POST / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:23:43] "POST / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:23:44] "POST / HTTP/1.0" 200 -
192.168.99.103 - - [11/Apr/2016 17:23:46] "POST / HTTP/1.0" 200 -
```
This log shows the activity on one of the active voting application containers.
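To view the combined logs from every instance of a service rather than a single
container, you can also ask Compose. Run this from the directory that holds your
`docker-compose.yml`; the first command points the shell back at the Swarm
manager in case it is still set to the `loadbalancer` machine.
```bash
$ export DOCKER_HOST=$(docker-machine ip manager):3376
$ docker-compose logs voting-app
```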
## Next steps
Congratulations. You have successfully walked through manually deploying a
microservice-based application to a Swarm cluster. Of course, not every
deployment goes smoothly. Now that you've learned how to successfully deploy an
application at scale, you should learn [what to consider when troubleshooting
large applications running on a Swarm cluster](troubleshoot.md).
@@ -0,0 +1,436 @@
<!--[metadata]>
+++
aliases = ["/swarm/swarm_at_scale/03-create-cluster/",
"/swarm/swarm_at_scale/02-deploy-infra/"
]
title = "Deploy application infrastructure"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, certificates"]
[menu.main]
parent="scale_swarm"
weight=-90
+++
<![end-metadata]-->
# Deploy your infrastructure
In this step, you create several Docker hosts to run your application stack on.
Before you continue, make sure you have taken the time to [learn the application
architecture](about.md).
## About these instructions
This example assumes you are running on a Mac or Windows system and running
Docker Engine `docker` commands against local VirtualBox virtual machines
provisioned with Docker Machine. For this evaluation installation, you'll need
seven VirtualBox VMs.
While this example uses Docker Machine, it is only one example of an
infrastructure you can use. You can build the same environment on whatever
infrastructure you wish. For example, you could place the application on another
public cloud platform such as Azure or DigitalOcean, on premises in your data
center, or even in a test environment on your laptop.
Finally, these instructions use some common `bash` command substitution techniques to
resolve some values, for example:
```bash
$ eval $(docker-machine env keystore)
```
In a Windows environment, these substitutions fail. If you are running on
Windows, replace the substitution `$(docker-machine env keystore)` with the
actual value.
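Most of the substitutions in this guide resolve an IP address, so one workaround
is to look the value up once and type it in literally. The address shown is the
one this walkthrough happens to get; yours may differ.
```bash
# Resolve the value once and note it down...
$ docker-machine ip keystore
192.168.99.100

# ...then use the literal address wherever a command shows $(docker-machine ip keystore):
$ curl 192.168.99.100:8500/v1/catalog/nodes
```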
## Task 1. Create the keystore server
To enable a Docker container network and Swarm discovery, you must
deploy a key-value store. As a discovery backend, the keystore
maintains an up-to-date list of cluster members and shares that list with the
Swarm manager. The Swarm manager uses this list to assign tasks to the nodes.
An overlay network requires a key-value store. The key-value store holds
information about the network state which includes discovery, networks,
endpoints, IP addresses, and more.
Several different backends are supported. This example uses a <a
href="https://www.consul.io/" target="_blank">Consul</a> container.
1. Create a "machine" named `keystore`.
```bash
$ docker-machine create -d virtualbox --virtualbox-memory "2000" \
--engine-opt="label=com.function=consul" keystore
```
You can set options for the Engine daemon with the `--engine-opt` flag. You'll
use it to label this Engine instance.
2. Set your local shell to the `keystore` Docker host.
```bash
$ eval $(docker-machine env keystore)
```
3. Run <a href="https://hub.docker.com/r/progrium/consul/" target="_blank">the
`consul` container</a>.
```bash
$ docker run --restart=unless-stopped -d -p 8500:8500 -h consul progrium/consul -server -bootstrap
```
The `-p` flag publishes port 8500 from the container, which is where the Consul
server listens. The server also exposes several other ports, which you can
see by running `docker ps`.
```bash
$ docker ps
CONTAINER ID IMAGE ... PORTS NAMES
372ffcbc96ed progrium/consul ... 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp dreamy_ptolemy
```
4. Use a `curl` command to test the server by listing the nodes.
```bash
$ curl $(docker-machine ip keystore):8500/v1/catalog/nodes
[{"Node":"consul","Address":"172.17.0.2"}]
```
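If you want one more sanity check, you can also ask the Consul server whether it
has elected itself leader, which it should have done because you started it with
`-server -bootstrap`. Your output should look similar; the value is the
container's internal address and server port.
```bash
$ curl $(docker-machine ip keystore):8500/v1/status/leader
"172.17.0.2:8300"
```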
## Task 2. Create the Swarm manager
In this step, you create the Swarm manager and connect it to the `keystore`
instance. The Swarm manager container is the heart of your Swarm cluster.
It is responsible for receiving all Docker commands sent to the cluster, and for
scheduling resources against the cluster. In a real-world production deployment,
you should configure additional replica Swarm managers as secondaries for high
availability (HA).
You'll use the `--engine-opt` flag to set the `cluster-store` and
`cluster-advertise` options to refer to the `keystore` server. These options
support the container network you'll create later.
1. Create the `manager` host.
```bash
$ docker-machine create -d virtualbox --virtualbox-memory "2000" \
--engine-opt="label=com.function=manager" \
--engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" manager
```
You also give the daemon a `manager` label.
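If you'd like to confirm those engine options actually reached the daemon, the
boot2docker-based VMs that Docker Machine creates here record them in the boot
profile. This optional check shows the flags the daemon was started with; look
for the `--label`, `--cluster-store`, and `--cluster-advertise` values in
`EXTRA_ARGS`.
```bash
$ docker-machine ssh manager cat /var/lib/boot2docker/profile
```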
2. Set your local shell to the `manager` Docker host.
```bash
$ eval $(docker-machine env manager)
```
3. Start the Swarm manager process.
```bash
$ docker run --restart=unless-stopped -d -p 3376:2375 \
-v /var/lib/boot2docker:/certs:ro \
swarm manage --tlsverify \
--tlscacert=/certs/ca.pem \
--tlscert=/certs/server.pem \
--tlskey=/certs/server-key.pem \
consul://$(docker-machine ip keystore):8500
```
This command uses the TLS certificates that were created for the `manager`
machine when Docker Machine provisioned it from the `boot2docker.iso` image.
These certificates are key for the manager when it connects to other machines
in the cluster.
4. Test your work by displaying the Docker daemon logs from the host.
```bash
$ docker-machine ssh manager
<-- output snipped -->
docker@manager:~$ tail /var/lib/boot2docker/docker.log
time="2016-04-06T23:11:56.481947896Z" level=debug msg="Calling GET /v1.15/version"
time="2016-04-06T23:11:56.481984742Z" level=debug msg="GET /v1.15/version"
time="2016-04-06T23:12:13.070231761Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
time="2016-04-06T23:12:33.069387215Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
time="2016-04-06T23:12:53.069471308Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
time="2016-04-06T23:13:13.069512320Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
time="2016-04-06T23:13:33.070021418Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
time="2016-04-06T23:13:53.069395005Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
time="2016-04-06T23:14:13.071417551Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
time="2016-04-06T23:14:33.069843647Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
```
The output indicates that the `consul` container and the `manager` are communicating correctly.
5. Exit the Docker host.
```bash
docker@manager:~$ exit
```
## Task 3. Add the load balancer
The application uses <a
href="https://github.com/ehazlett/interlock">Interlock</a> and Nginx as a
load balancer. Before you build the load balancer host, you'll create the
configuration you'll use for Nginx.
1. On your local host, create a `config` directory.
2. Change to the `config` directory.
```bash
$ cd config
```
3. Get the IP address of the Swarm manager host.
For example:
```bash
$ docker-machine ip manager
192.168.99.101
```
4. Use your favorite editor to create a `config.toml` file and add this content
to the file:
```toml
ListenAddr = ":8080"
DockerURL = "tcp://SWARM_MANAGER_IP:3376"
TLSCACert = "/var/lib/boot2docker/ca.pem"
TLSCert = "/var/lib/boot2docker/server.pem"
TLSKey = "/var/lib/boot2docker/server-key.pem"
[[Extensions]]
Name = "nginx"
ConfigPath = "/etc/conf/nginx.conf"
PidPath = "/etc/conf/nginx.pid"
MaxConn = 1024
Port = 80
```
5. In the configuration, replace `SWARM_MANAGER_IP` with the `manager` IP you got
in Step 3.
You use this value because the load balancer listens on the manager's event
stream.
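If you are curious what that event stream looks like, you can watch it yourself
against the Swarm manager; Interlock consumes the same stream to learn when
containers start and stop. Press `Ctrl+C` to stop watching.
```bash
$ docker -H $(docker-machine ip manager):3376 events
```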
6. Save and close the `config.toml` file.
7. Create a machine for the load balancer.
```bash
$ docker-machine create -d virtualbox --virtualbox-memory "2000" \
--engine-opt="label=com.function=interlock" loadbalancer
```
8. Switch the environment to the `loadbalancer`.
```bash
$ eval $(docker-machine env loadbalancer)
```
9. Start an `interlock` container.
```bash
$ docker run \
-P \
-d \
-ti \
-v nginx:/etc/conf \
-v /var/lib/boot2docker:/var/lib/boot2docker:ro \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/config.toml:/etc/config.toml \
--name interlock \
ehazlett/interlock:1.0.1 \
-D run -c /etc/config.toml
```
This command relies on the `config.toml` file being in the current directory. After running the command, confirm the container is running:
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d846b801a978 ehazlett/interlock:1.0.1 "/bin/interlock -D ru" 2 minutes ago Up 2 minutes 0.0.0.0:32770->8080/tcp interlock
```
If you don't see the container running, use `docker ps -a` to list all containers and make sure the system attempted to start it. Then, get the logs to see why the container failed to start.
```bash
$ docker logs interlock
INFO[0000] interlock 1.0.1 (000291d)
DEBU[0000] loading config from: /etc/config.toml
FATA[0000] read /etc/config.toml: is a directory
```
This error usually means you didn't run the `docker run` command from the same
`config` directory where the `config.toml` file is. If you run the command again
and get a Conflict error such as:
```bash
docker: Error response from daemon: Conflict. The name "/interlock" is already in use by container d846b801a978c76979d46a839bb05c26d2ab949ff9f4f740b06b5e2564bae958. You have to remove (or rename) that container to be able to reuse that name.
```
Remove the interlock container with `docker rm interlock` and try again.
10. Start an `nginx` container on the load balancer.
```bash
$ docker run -ti -d \
-p 80:80 \
--label interlock.ext.name=nginx \
--link=interlock:interlock \
-v nginx:/etc/conf \
--name nginx \
nginx nginx -g "daemon off;" -c /etc/conf/nginx.conf
```
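At this point, the load balancer host should be running both containers. As a
quick check before moving on, list them; the `--format` flag just trims the
output to the columns that matter here. You should see the `interlock` and
`nginx` containers with an `Up` status.
```bash
$ docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"
```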
## Task 4. Create the other Swarm nodes
A host in a Swarm cluster is called a *node*. You've already created the manager
node. Here, the task is to create a virtual host for each of the remaining
nodes. Three commands are required:
* create the host with Docker Machine
* point the local environment to the new host
* join the host to the Swarm cluster
If you were building this in a non-Mac/Windows environment, you'd only need to
run the `join` command to add a node to the Swarm cluster and register it with
the Consul discovery service. When you create a node, you also label it, for example:
```
--engine-opt="label=com.function=frontend01"
```
You'll use these labels later when starting application containers. In the
commands below, notice the label you are applying to each node.
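For a concrete preview of how these labels get used, this is the shape of the
scheduling constraint that the application deployment steps pass to `docker run`
to pin a container to the node labeled `com.function=worker01`. Don't run it
yet; the full command appears when you [deploy the application](deploy-app.md).
```bash
$ docker -H $(docker-machine ip manager):3376 run -t -d \
    -e constraint:com.function==worker01 \
    --name worker01 docker/example-voting-app-worker
```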
1. Create the `frontend01` host and add it to the Swarm cluster.
```bash
$ docker-machine create -d virtualbox --virtualbox-memory "2000" \
--engine-opt="label=com.function=frontend01" \
--engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" frontend01
$ eval $(docker-machine env frontend01)
$ docker run -d swarm join --addr=$(docker-machine ip frontend01):2376 consul://$(docker-machine ip keystore):8500
```
2. Create the `frontend02` VM.
```bash
$ docker-machine create -d virtualbox --virtualbox-memory "2000" \
--engine-opt="label=com.function=frontend02" \
--engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" frontend02
$ eval $(docker-machine env frontend02)
$ docker run -d swarm join --addr=$(docker-machine ip frontend02):2376 consul://$(docker-machine ip keystore):8500
```
3. Create the `worker01` VM.
```bash
$ docker-machine create -d virtualbox --virtualbox-memory "2000" \
--engine-opt="label=com.function=worker01" \
--engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" worker01
$ eval $(docker-machine env worker01)
$ docker run -d swarm join --addr=$(docker-machine ip worker01):2376 consul://$(docker-machine ip keystore):8500
```
4. Create the `dbstore` VM.
```bash
$ docker-machine create -d virtualbox --virtualbox-memory "2000" \
--engine-opt="label=com.function=dbstore" \
--engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" dbstore
$ eval $(docker-machine env dbstore)
$ docker run -d swarm join --addr=$(docker-machine ip dbstore):2376 consul://$(docker-machine ip keystore):8500
```
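The four create-and-join blocks above follow an identical pattern; only the node
name and label change. If you prefer, a single loop like this one (an optional
convenience, equivalent to steps 1 through 4) produces the same result.
```bash
for node in frontend01 frontend02 worker01 dbstore; do
  # Create the VM with a com.function label and point it at the keystore.
  docker-machine create -d virtualbox --virtualbox-memory "2000" \
    --engine-opt="label=com.function=${node}" \
    --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" ${node}
  # Point the local shell at the new host and join it to the Swarm cluster.
  eval $(docker-machine env ${node})
  docker run -d swarm join --addr=$(docker-machine ip ${node}):2376 \
    consul://$(docker-machine ip keystore):8500
done
```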
5. Check your work.
At this point, you have deployed the infrastructure you need to run the
application. Test this now by listing the running machines:
```bash
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
dbstore - virtualbox Running tcp://192.168.99.111:2376 v1.10.3
frontend01 - virtualbox Running tcp://192.168.99.108:2376 v1.10.3
frontend02 - virtualbox Running tcp://192.168.99.109:2376 v1.10.3
keystore - virtualbox Running tcp://192.168.99.100:2376 v1.10.3
loadbalancer - virtualbox Running tcp://192.168.99.107:2376 v1.10.3
manager - virtualbox Running tcp://192.168.99.101:2376 v1.10.3
worker01 * virtualbox Running tcp://192.168.99.110:2376 v1.10.3
```
6. Make sure the Swarm manager sees all your nodes.
```bash
$ docker -H $(docker-machine ip manager):3376 info
Containers: 4
Running: 4
Paused: 0
Stopped: 0
Images: 3
Server Version: swarm/1.1.3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 4
dbstore: 192.168.99.111:2376
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.004 GiB
└ Labels: com.function=dbstore, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-04-07T18:25:37Z
frontend01: 192.168.99.108:2376
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.004 GiB
└ Labels: com.function=frontend01, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-04-07T18:26:10Z
frontend02: 192.168.99.109:2376
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.004 GiB
└ Labels: com.function=frontend02, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-04-07T18:25:43Z
worker01: 192.168.99.110:2376
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.004 GiB
└ Labels: com.function=worker01, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-04-07T18:25:56Z
Plugins:
Volume:
Network:
Kernel Version: 4.1.19-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 4
Total Memory: 8.017 GiB
Name: bb13b7cf80e8
```
The command is acting on the Swarm port, so it returns information about the
entire cluster. The output lists the four nodes you joined; the `manager`,
`keystore`, and `loadbalancer` hosts are not themselves members of the cluster.
## Next Step
Your keystore, load balancer, and Swarm cluster infrastructure is up. You are
ready to [build and run the voting application](deploy-app.md) on it.
@@ -0,0 +1,46 @@
version: "2"
services:
voting-app:
image: docker/example-voting-app-voting-app
ports:
- "80"
networks:
- voteapp
labels:
interlock.hostname: "vote"
interlock.domain: "myenterprise.com"
result-app:
image: docker/example-voting-app-result-app
ports:
- "80"
networks:
- voteapp
labels:
interlock.hostname: "results"
interlock.domain: "myenterprise.com"
worker:
image: docker/example-voting-app-worker
networks:
voteapp:
aliases:
- workers
redis:
image: redis
ports:
- "6379"
networks:
- voteapp
container_name: redis
db:
image: postgres:9.4
volumes:
- "db-data:/var/lib/postgresql/data"
networks:
- voteapp
container_name: db
volumes:
db-data:
networks:
voteapp:
@@ -23,13 +23,11 @@ application further. The article also provides a troubleshooting section you can
use while developing or deploying the voting application.
The sample is written for a novice network administrator. You should have
basic skills on Linux systems, `ssh` experience, and some understanding of the
AWS service from Amazon. Some knowledge of Git is also useful but not strictly
required. This example takes approximately an hour to complete and has the
following steps:
basic skills on Linux systems and `ssh` experience. Some knowledge of Git is
also useful but not strictly required. This example takes approximately an hour
to complete and has the following steps:
- [Learn the application architecture](01-about.md)
- [Deploy your infrastructure](02-deploy-infra.md)
- [Setup cluster resources](03-create-cluster.md)
- [Deploy the application](04-deploy-app.md)
- [Troubleshoot the application](05-troubleshoot.md)
- [Learn the application architecture](about.md)
- [Deploy your infrastructure](deploy-infra.md)
- [Deploy the application](deploy-app.md)
- [Troubleshoot the application](troubleshoot.md)
@@ -1,5 +1,6 @@
<!--[metadata]>
+++
aliases = ["/swarm/swarm_at_scale/05-troubleshoot/"]
title = "Troubleshoot the application"
description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, certificates"]
@@ -17,7 +18,7 @@ following sections cover different failure scenarios:
- [Swarm manager failures](#swarm-manager-failures)
- [Consul (discovery backend) failures](#consul-discovery-backend-failures)
- [Interlock load balancer failures](#interlock-load-balancer-failures)
- [Web (web-vote-app) failures](#web-web-vote-app-failures)
- [Web (voting-app) failures](#web-voting-app-failures)
- [Redis failures](#redis-failures)
- [Worker (vote-worker) failures](#worker-vote-worker-failures)
- [Postgres failures](#postgres-failures)
@@ -93,9 +94,9 @@ drops below 10, the tool will attempt to start more.
In our simple voting-app example, the front-end is scalable and serviced by a
load balancer. In the event that one of the two web containers fails (or the
AWS instance that is hosting it), the load balancer will stop routing requests
to it and send all requests the surviving web container. This solution is highly
scalable meaning you can have up to *n* web containers behind the load balancer.
node that is hosting it), the load balancer will stop routing requests to it and
send all requests to the surviving web container. This solution is highly scalable,
meaning you can have up to *n* web containers behind the load balancer.
## Interlock load balancer failures
@@ -118,9 +119,9 @@ will continue to service requests.
If you deploy multiple interlock load balancers, you should consider spreading
them across multiple failure domains within your infrastructure.
## Web (web-vote-app) failures
## Web (voting-app) failures
The environment that you have configured has two web-vote-app containers running
The environment that you have configured has two voting-app containers running
on two separate nodes. They operate behind an Interlock load balancer that
distributes incoming connections across both.
@@ -136,10 +137,10 @@ infrastructure. You should also consider deploying more.
## Redis failures
If the a `redis` container fails, it's partnered `web-vote-app` container will
If a `redis` container fails, its partnered `voting-app` container will
not function correctly. The best solution in this instance might be to configure
health monitoring that verifies the ability to write to each Redis instance. If
an unhealthy `redis` instance is encountered, remove the `web-vote-app` and
an unhealthy `redis` instance is encountered, remove the `voting-app` and
`redis` combination and attempt remedial actions.
## Worker (vote-worker) failures
@@ -196,21 +197,24 @@ This will allow us to lose an entire AZ and still have our cluster and
application operate.
But it doesn't have to stop there. Some applications can be balanced across AWS
Regions. In our example we might deploy parts of our cluster and application in
the `us-west-1` Region and the rest in `us-east-1`. It's even becoming possible
to deploy services across cloud providers, or have balance services across
public cloud providers and your on premises date centers!
Regions. It's even becoming possible to deploy services across cloud providers,
or to balance services across public cloud providers and your on-premises data
centers!
The diagram below shows parts of the application and infrastructure deployed
across AWS and Microsoft Azure. But you could just as easily replace one of
those cloud providers with your own on premises data center. In these scenarios,
network latency and reliability is key to a smooth and workable solution.
network latency and reliability are key to a smooth and workable solution.
![](../images/deployed-across.jpg)
## Related information
The application in this example could be deployed on Docker Universal Control Plane (UCP) which is currently in Beta release. To try the application on UCP in your environment, [request access to the UCP Beta release](https://www.docker.com/products/docker-universal-control-plane). Other useful documentation:
The application in this example could be deployed on Docker Universal Control
Plane (UCP) which is currently in Beta release. To try the application on UCP in
your environment, [request access to the UCP Beta
release](https://www.docker.com/products/docker-universal-control-plane). Other
useful documentation:
* [Plan for Swarm in production](../plan-for-production.md)
* [Swarm and container networks](../networking.md)