Merge pull request #1753 from SvenDowideit/rolfes-second-doc-branch-carry

new content for install-on-aws and images for other topics
This commit is contained in:
Victor Vieux 2016-02-03 17:47:00 -08:00
commit 46b57a3cbe
8 changed files with 311 additions and 325 deletions

@@ -1,176 +1,209 @@
<!--[metadata]>
+++
title = "Build a Swarm cluster for production"
description = "Deploying Swarm on AWS EC2 AMI's in a VPC"
keywords = ["docker, swarm, clustering, examples, Amazon, AWS EC2"]
[menu.main]
parent="workw_swarm"
weight=-40
+++
<![end-metadata]-->

# Build a Swarm cluster for production
This example shows you how to deploy a high-availability Docker Swarm cluster. Although this example uses the Amazon Web Services (AWS) platform, you can deploy an equivalent Docker Swarm cluster on many other platforms.
The Swarm cluster will contain three types of nodes:
- Swarm manager
- Swarm node (aka Swarm agent)
- Discovery backend node running consul
This example takes you through the following steps:

1. Establish basic network security by creating a security group that restricts inbound traffic by port number, type, and origin.
2. Create four hosts on your network by launching Elastic Compute Cloud (EC2) instances, applying the appropriate security group to each one, and installing Docker Engine on each one.
3. Create a discovery backend by running a consul container on one of the hosts.
4. Create the Swarm cluster by running two Swarm managers in a high-availability configuration. One of the Swarm managers shares a host with consul. Then, run two Swarm nodes.
5. Communicate with the Swarm via the primary manager, running a simple hello world application and then checking which node ran the application.
6. Test high availability by making one of the Swarm managers fail and checking the status of the managers.

For a gentler introduction to Swarm, try the [Evaluate Swarm in a sandbox](install-w-machine.md) page.

> **Note**: This example doesn't use Amazon's EC2 Container Service (ECS).
## Prerequisites
- An Amazon Web Services (AWS) account
- Familiarity with AWS features and tools, such as:
- Elastic Compute Cloud (EC2) Dashboard
- Virtual Private Cloud (VPC) Dashboard
- VPC Security groups
- Connecting to an EC2 instance using SSH
## Update the network security rules
AWS uses a "security group" to allow specific types of network traffic on your VPC network. The **default** security group's initial set of rules deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances. You're going to add a couple of rules to allow inbound SSH connections and inbound container images. This set of rules somewhat protects the Engine, Swarm, and Consul ports. For a production environment, you would apply more restrictive security measures. Do not leave Docker Engine ports unprotected.
From your AWS home console, click **VPC - Isolated Cloud Resources**. Then, in the VPC Dashboard that opens, navigate to **Security Groups**. Select the **default** security group that's associated with your default VPC and add the following two rules. (The **Allows** column is just for your reference.)
| Type | Protocol | Port Range | Source | Allows |
| -------------- | ----- | ----- | ----- | ----- |
| SSH | TCP | 22 | 0.0.0.0/0 | SSH connection |
| HTTP | TCP | 80 | 0.0.0.0/0 | Container images |
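If you prefer the command line to the VPC Dashboard, the same two rules can be sketched with the AWS CLI. This is not part of the original walkthrough: the group ID below is a placeholder for your default security group's ID, and the commands are echoed so you can review them before removing the `echo` and running them for real.

```shell
# Hypothetical sketch: add the SSH and HTTP inbound rules with the AWS CLI.
# GROUP_ID is a placeholder -- substitute your default security group's ID.
GROUP_ID="sg-0123456789abcdef0"
for port in 22 80; do
  # Echoed for review; drop the leading 'echo' to actually apply the rule.
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$GROUP_ID" --protocol tcp \
    --port "$port" --cidr 0.0.0.0/0
done
```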
## Create your hosts
Here, you create four Linux hosts that are part of the "default" security group you just updated.
Open the EC2 Dashboard and launch four EC2 instances, one at a time:
- During **Step 1: Choose an Amazon Machine Image (AMI)**, pick the *Amazon Linux AMI*.
- During **Step 5: Tag Instance**, under **Value**, give each instance one of these names:
- manager0 & consul0
- manager1
- node0
- node1
- During **Step 6: Configure Security Group**, choose **Select an existing security group** and pick the "default" security group.
Review and launch your instances.
## Install Docker Engine on each instance
Connect to each instance using SSH and install Docker Engine.
Update the yum packages, and keep an eye out for the "y/n/abort" prompt:
$ sudo yum update
Run the installation script:
$ curl -sSL https://get.docker.com/ | sh
Configure and start Docker Engine so it listens for Swarm nodes on port 2375:
$ sudo docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
Verify that Docker Engine is installed correctly:
$ sudo docker run hello-world
The output should display a "Hello World" message and other text without any error messages.
Add `ec2-user` to the `docker` group so it can run `docker` commands without `sudo`:

$ sudo usermod -aG docker ec2-user

Then, enter `logout`.

> Troubleshooting: If entering a `docker` command produces a message asking whether docker is available on this host, it may be because the user lacks permission to access the Docker Engine. If so, use `sudo` or add the user to the `docker` group.
> For this example, don't create an AMI image from one of your instances running Docker Engine and then re-use it to create the other instances. Doing so will produce errors.
> Troubleshooting: If your host cannot reach Docker Hub, the `docker run` commands that pull container images may fail. In that case, check that your VPC is associated with a security group with a rule that allows inbound HTTP traffic (TCP port 80 from 0.0.0.0/0). Also check the [Docker Hub status page](http://status.docker.com/) for service availability.
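Once the Engines are up, you can check from one host whether another host's Engine port is reachable. This is a sketch under stated assumptions: the `check_engine` helper is invented for this example, it relies on bash's `/dev/tcp` feature plus the `timeout` utility, and the IPs shown are placeholders for your instances' addresses.

```shell
# Hypothetical helper: probe whether a host's Docker Engine TCP port answers.
check_engine() {
  # bash's built-in /dev/tcp attempts a TCP connection to host:port
  if timeout 2 bash -c "true >/dev/tcp/$1/2375" 2>/dev/null; then
    echo "$1: Engine port 2375 reachable"
  else
    echo "$1: Engine port 2375 NOT reachable"
  fi
}

# Placeholder IPs -- replace with your instances' private addresses.
for ip in 172.30.0.161 172.30.0.160; do check_engine "$ip"; done
```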
## Set up a consul discovery backend
Here, you're going to create a minimalist discovery backend. The Swarm managers and nodes use this backend to authenticate themselves as members of the cluster. The Swarm managers also use this information to identify which nodes are available to run containers.
To keep things simple, you are going to run a single consul daemon on the same host as one of the Swarm managers.
To start, copy the following launch command to a text file.
$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
Then, use SSH to connect to the "manager0 & consul0" instance. At the command line, enter `ifconfig`. From the output, copy the `eth0` IP address from `inet addr`. You'll use this IP address again when you start the Swarm managers.

Still connected to the "manager0 & consul0" instance, copy the launch command from the text file and paste it into the command line.

Your consul node is up and running, providing your cluster with a discovery backend. To increase its reliability, you can create a high-availability cluster using a trio of consul nodes using the link mentioned at the end of this page. (Before creating a cluster of consul nodes, update the VPC security group with rules to allow inbound traffic on the required port numbers.)
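Consul exposes an HTTP API on port 8500, so you can verify the backend is answering. A sketch, echoed for review rather than executed: `CONSUL_IP` is a placeholder for the `eth0` address you copied, and `/v1/status/leader` is consul's standard leader-status endpoint.

```shell
# Sketch of a quick consul health probe; CONSUL_IP is a placeholder.
CONSUL_IP="172.30.0.161"
# Echoed for review; drop 'echo' to run it on a host that can reach consul.
echo curl -s "http://${CONSUL_IP}:8500/v1/status/leader"
```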
## Create a high-availability Swarm cluster
After creating the discovery backend, you can create the Swarm managers. Here, you are going to create two Swarm managers in a high-availability configuration. The first manager you run becomes the Swarm's *primary manager*. Some documentation still refers to a primary manager as a "master", but that term has been superseded. The second manager you run serves as a *replica*. If the primary manager becomes unavailable, the cluster elects the replica as the primary manager.
To create the primary manager in a high-availability Swarm cluster, use the following syntax:

$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise <manager0_ip>:4000 consul://<consul_ip>:8500

Because this particular manager is on the same "manager0 & consul0" instance as the consul node, replace both `<manager0_ip>` and `<consul_ip>` with the same IP address. For example:

$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 172.30.0.161:4000 consul://172.30.0.161:8500
Enter `docker ps`. From the output, verify that both a swarm and a consul container are running. Then, disconnect from the "manager0 & consul0" instance.
Connect to the "manager1" instance and use `ifconfig` to get its IP address. Then, enter the following command, replacing `<manager1_ip>` with that IP address:
$ docker run -d swarm manage -H :4000 --replication --advertise <manager1_ip>:4000 consul://172.30.0.161:8500
Enter `docker ps` and, from the output, verify that a swarm container is running.
Now, connect to each of the "node0" and "node1" instances, get their IP addresses, and run a Swarm node on each one using the following syntax:
$ docker run -d swarm join --advertise=<node_ip>:2375 consul://<consul_ip>:8500
For example:
$ docker run -d swarm join --advertise=172.30.0.69:2375 consul://172.30.0.161:8500
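Since the join command only differs by node IP, you can generate one command per node instead of typing each by hand. A sketch with invented placeholder IPs for "node0" and "node1"; the commands are echoed so you can paste each one on the right host.

```shell
# Sketch: generate a join command per node. IPs are placeholders for the
# addresses of your node0 and node1 instances.
CONSUL_IP="172.30.0.161"
for node_ip in 172.30.0.69 172.30.0.70; do
  echo docker run -d swarm join \
    --advertise="${node_ip}:2375" "consul://${CONSUL_IP}:8500"
done
```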
Your small Swarm cluster is up and running on multiple hosts, providing you with a high-availability virtual Docker Engine. To increase its reliability and capacity, you can add more Swarm managers, nodes, and a high-availability discovery backend.
## Communicate with the Swarm
You can communicate with the Swarm to get information about the managers and nodes using the Swarm API, which is nearly the same as the standard Docker API.
In this example, you use SSH to connect to the "manager0 & consul0" host again. Then, you address commands to the Swarm manager.
Get information about the manager and nodes in the cluster:
$ docker -H :4000 info
The output gives the manager's role as primary (`Role: primary`) and information about each of the nodes.
Now run an application on the Swarm:
$ docker -H :4000 run hello-world
Check which Swarm node ran the application:
$ docker -H :4000 ps
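Because the Swarm API mirrors the standard Docker remote API, the same information is also available over plain HTTP when TLS is off. A sketch, echoed for review: `MANAGER_IP` is a placeholder, and `/info` and `/containers/json` are the Docker remote API endpoints behind `docker info` and `docker ps`.

```shell
# Sketch: query the Swarm manager's API directly; MANAGER_IP is a placeholder.
MANAGER_IP="172.30.0.161"
# Echoed for review; drop 'echo' to run against a reachable manager.
echo curl -s "http://${MANAGER_IP}:4000/info"
echo curl -s "http://${MANAGER_IP}:4000/containers/json?all=1"
```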
## Test the high-availability Swarm managers
To see the replica instance take over, you're going to shut down the primary manager. Doing so kicks off an election, and the replica becomes the primary manager. When you start the manager you shut down earlier, it becomes the replica.
Using an SSH connection to the "manager0 & consul0" instance, get the container id or name of the swarm container:
$ docker ps
Shut down the primary manager, replacing `<id_name>` with the container id or name (e.g., "8862717fe6d3" or "trusting_lamarr").
$ docker rm -f <id_name>
Start the Swarm manager again. For example:

$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 172.30.0.161:4000 consul://172.30.0.161:8500
Look at the logs, replacing `<id_name>` with the new container id or name:
$ sudo docker logs <id_name>
The output will show two entries like these:
time="2016-02-02T02:12:32Z" level=info msg="Leader Election: Cluster leadership lost"
time="2016-02-02T02:12:32Z" level=info msg="New leader elected: 172.30.0.160:4000"
To get information about the manager and nodes in the cluster, enter:
$ docker -H :4000 info
You can connect to the "manager1" instance and run the `info` and `logs` commands. They will display corresponding entries for the change in leadership.
## Additional Resources
- Installing Docker Engine
- [Example: Manual install on a cloud provider](http://docs.docker.com/engine/installation/cloud/cloud-ex-aws/)
- Docker Swarm
- [Docker Swarm 1.0 with Multi-host Networking: Manual Setup](http://goelzer.com/blog/2015/12/29/docker-swarmoverlay-networks-manual-method/)
- [High availability in Docker Swarm](multi-manager-setup/)
- [Discovery](discovery/)
- consul Discovery Backend
- [high-availability cluster using a trio of consul nodes](https://hub.docker.com/r/progrium/consul/)
- Networking
- [Networking](https://docs.docker.com/swarm/networking/)

@@ -1,236 +1,189 @@
<!--[metadata]>
+++
title = "Evaluate Swarm in a sandbox"
description = "Evaluate Swarm in a sandbox"
keywords = ["docker, swarm, clustering, discovery, examples"]
[menu.main]
parent="workw_swarm"
weight=-50
+++
<![end-metadata]-->

# Evaluate Swarm in a sandbox
You use Docker Swarm to host and schedule a cluster of Docker containers. This section introduces you to Docker Swarm by teaching you how to create a swarm on your local machine using Docker Machine and VirtualBox.

You'll use Docker Toolbox to install Docker Machine and some other tools on your computer. Then you'll use Docker Machine to provision a set of Docker Engine hosts. Lastly, you'll use Docker Client to connect to the hosts, where you'll create a discovery token, create a cluster of one Swarm manager and nodes, and manage the cluster.

When you finish, you'll have a Docker Swarm up and running in VirtualBox on your local Mac or Windows computer. You can use this Swarm as a personal development sandbox.

To use Docker Swarm on Linux, see [Build a Swarm cluster for production](install-manual.md).

## Install Docker Toolbox

Download and install [Docker Toolbox](https://www.docker.com/docker-toolbox).

The toolbox installs a handful of tools on your local Windows or Mac OS X computer. In this exercise, you use three of those tools:

- Docker Machine: To deploy virtual machines that run Docker Engine.
- VirtualBox: To host the virtual machines deployed from Docker Machine.
- Docker Client: To connect from your local computer to the Docker Engines on the VMs and issue docker commands to create the Swarm.

The following sections provide more information on each of these tools. The rest of the document uses the abbreviation, VM, for virtual machine.

## Create three VMs running Docker Engine

Here, you use Docker Machine to provision three VMs running Docker Engine.

1. Open a terminal on your computer. Use Docker Machine to list any VMs in VirtualBox.

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM
default   *        virtualbox   Running   tcp://192.168.99.100:2376
2. Optional: To conserve system resources, stop any virtual machines you are not using. For example, to stop the VM named `default`, enter:

$ docker-machine stop default

3. Create and run a VM named `manager`.

$ docker-machine create -d virtualbox manager
4. Create and run a VM named `agent1`.

$ docker-machine create -d virtualbox agent1

5. Create and run a VM named `agent2`.

$ docker-machine create -d virtualbox agent2

Each create command checks for a local copy of the latest VM image, called boot2docker.iso. If it isn't available, Docker Machine downloads the image from Docker Hub. Then, Docker Machine uses boot2docker.iso to create a VM that automatically runs Docker Engine.

> Troubleshooting: If your computer or hosts cannot reach Docker Hub, the `docker-machine` or `docker run` commands that pull images may fail. In that case, check the [Docker Hub status page](http://status.docker.com/) for service availability. Then, check whether your computer is connected to the Internet. Finally, check whether VirtualBox's network settings allow your hosts to connect to the Internet.
## Create a Swarm discovery token

Here you use the discovery backend hosted on Docker Hub to create a unique discovery token for your cluster. This discovery backend is only for low-volume development and testing purposes, not for production. Later on, when you run the Swarm manager and nodes, they register with the discovery backend as members of the cluster that's associated with the unique token. The discovery backend maintains an up-to-date list of cluster members and shares that list with the Swarm manager. The Swarm manager uses this list to assign tasks to the nodes.

1. Connect the Docker Client on your computer to the Docker Engine running on `manager`.

$ eval $(docker-machine env manager)

The client will send the `docker` commands in the following steps to the Docker Engine on `manager`.

2. Create a unique id for the Swarm cluster.

$ docker run --rm swarm create
.
.
.
Status: Downloaded newer image for swarm:latest
0ac50ef75c9739f5bfeeaf00503d4e6e
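The tokens shown in this walkthrough are 32-character hexadecimal strings, so a quick format check can catch a truncated copy/paste before you build the cluster around it. A sketch under that assumption (the token format isn't formally specified by the output above):

```shell
# Sanity-check a copied discovery token: the examples in this walkthrough
# are 32 lowercase hex characters, so flag anything that isn't.
TOKEN="0ac50ef75c9739f5bfeeaf00503d4e6e"
if echo "$TOKEN" | grep -Eq '^[0-9a-f]{32}$'; then
  echo "token format OK"
else
  echo "token looks malformed"
fi
```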
The `docker run` command gets the latest `swarm` image and runs it as a container. The `create` argument makes the Swarm container connect to the Docker Hub discovery service and get a unique Swarm ID, also known as a "discovery token". The token appears in the output; it is not saved to a file on the host. The `--rm` option automatically cleans up the container and removes the file system when the container exits.

The discovery service keeps unused tokens for approximately one week.

3. Copy the discovery token from the last line of the previous output to a safe place.

## Create the Swarm manager and nodes

Here, you connect to each of the hosts and create a Swarm manager or node.

1. Get the IP addresses of the three VMs. For example:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
agent1 - virtualbox Running tcp://192.168.99.102:2376 v1.9.1
agent2 - virtualbox Running tcp://192.168.99.103:2376 v1.9.1
manager * virtualbox Running tcp://192.168.99.100:2376 v1.9.1
2. Your client should still be pointing to Docker Engine on `manager`. Use the following syntax to run a Swarm container as the primary Swarm manager on `manager`.

$ docker run -d -p <your_selected_port>:2375 -t swarm manage token://<cluster_id>

For example:

$ docker run -d -p 2375:2375 -t swarm manage token://0ac50ef75c9739f5bfeeaf00503d4e6e
The -p option maps a port on the container to a port on the host.
3. Connect Docker Client to `agent1`.

$ eval $(docker-machine env agent1)

4. Use the following syntax to run a Swarm container as an agent on `agent1`. Replace `<node_ip>` with the IP address of the VM.

$ docker run -d swarm join --addr=<node_ip>:<node_port> token://<cluster_id>

For example:

$ docker run -d swarm join --addr=192.168.99.102:2375 token://0ac50ef75c9739f5bfeeaf00503d4e6e
5. Connect Docker Client to `agent2`.

$ eval $(docker-machine env agent2)

6. Run a Swarm container as an agent on `agent2`. For example:

$ docker run -d swarm join --addr=192.168.99.103:2375 token://0ac50ef75c9739f5bfeeaf00503d4e6e
## Manage your Swarm
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
54a8690043dd hello-world:latest "/hello" 22 seconds ago Exited (0) 3 seconds ago swarm-agent-00/modest_goodall
78be991b58d1 swarm:latest "/swarm join --addr 5 minutes ago Up 4 minutes 2375/tcp swarm-agent-01/swarm-agent
da5127e4f0f9 swarm:latest "/swarm join --addr 8 minutes ago Up 8 minutes 2375/tcp swarm-agent-00/swarm-agent
ef395f316c59 swarm:latest "/swarm join --addr 18 minutes ago Up 18 minutes 2375/tcp swarm-master/swarm-agent
45821ca5208e swarm:latest "/swarm manage --tls 18 minutes ago Up 18 minutes 2375/tcp, 192.168.99.104:3376->3376/tcp swarm-master/swarm-agent-master
Here, you connect to the cluster and review information about the Swarm manager and nodes. You tell the Swarm to run a container and check which node did the work.
1. Connect the Docker Client to the Swarm.
$ eval $(docker-machine env swarm)
Because Docker Swarm uses the standard Docker API, you can connect to it using Docker Client and other tools such as Docker Compose, Dokku, Jenkins, and Krane, among others.
2. Get information about the Swarm.
$ docker info
As you can see, the output displays information about the two agent nodes and the one manager node in the Swarm.
3. Check the images currently running on your Swarm.
$ docker ps
4. Run a container on the Swarm.
$ docker run hello-world
Hello from Docker.
.
.
.
5. Use the `docker ps` command to find out which node the container ran on. For example:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0b0628349187 hello-world "/hello" 20 minutes ago Exited (0) 20 minutes ago agent1
.
.
.
In this case, the Swarm ran 'hello-world' on 'agent1'.
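A script can recover the node the same way, by splitting the NAMES column on the slash. The sample row below is illustrative (the container suffix is made up), standing in for one line of real `docker ps -a` output:

```shell
# Extract the node from a `docker ps -a` line. Swarm prefixes each
# container name with "<node>/", and NAMES is the last column.
sample_row='0b0628349187  hello-world  "/hello"  20 minutes ago  Exited (0)  agent1/high_wozniak'

printf '%s\n' "$sample_row" | awk '{split($NF, parts, "/"); print parts[1]}'
```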
By default, Docker Swarm uses the "spread" strategy to choose which node runs a container. When you run multiple containers, the spread strategy assigns each container to the node with the fewest containers.
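The spread strategy itself is easy to model: rank the nodes by how many containers each is running and take the least loaded one. The node names and counts below are made-up sample data, not output from a real scheduler.

```shell
# Toy model of Swarm's spread strategy: given "node count" pairs,
# choose the node with the fewest containers.
counts='agent1 3
agent2 1
swarm 2'

printf '%s\n' "$counts" | sort -k2 -n | head -n 1 | awk '{print $1}'
```

Here `agent2`, with one container, would receive the next container.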
## Where to go next
At this point, you've done the following:

- Created a Swarm discovery token.
- Created Swarm nodes using Docker Machine.
- Managed a Swarm and run containers on it.
- Learned Swarm-related concepts and terminology.

However, Docker Swarm has many other aspects and capabilities.
For more information, visit [the Swarm landing page](https://www.docker.com/docker-swarm) or read the [Swarm documentation](https://docs.docker.com/swarm/).
@@ -43,7 +43,7 @@ Using Docker Machine, you can quickly install a Docker Swarm on cloud providers
Using Docker Machine is the best method for users getting started with Swarm for the first time. To try the recommended method of getting started, see [Get Started with Docker Swarm](install-w-machine.md).
If you are interested in manually installing Swarm or in contributing to it, see [Build a Swarm cluster for production](install-manual.md).
## Discovery services
@@ -5,7 +5,7 @@ description = "Plan for Swarm in production"
keywords = ["docker, swarm, scale, voting, application, plan"]
[menu.main]
parent="workw_swarm"
weight=-45
+++
<![end-metadata]-->
@@ -26,7 +26,7 @@ company's data center.
If this is the first time you are creating a Swarm cluster, you should first
learn about Swarm and its requirements by [installing a Swarm for
evaluation](install-w-machine.md) or [installing a Swarm for
production](install-manual.md). If this is the first time you have used Machine,
you should take some time to [understand Machine before
continuing](https://docs.docker.com/machine).
@@ -185,6 +185,6 @@ Name: swarm-master
## Related information
* [Evaluate Swarm in a sandbox](install-w-machine.md)
* [Build a Swarm cluster for production](install-manual.md)
* [Swarm Discovery](discovery.md)
* [Docker Machine](https://docs.docker.com/machine) documentation
@@ -163,5 +163,5 @@ facing production workloads exposed to untrusted networks.
## Related information
* [Configure Docker Swarm for TLS](configure-tls.md)
* [Docker security](https://docs.docker.com/engine/security/security/)
@@ -5,7 +5,7 @@ description = "Try Swarm at scale"
keywords = ["docker, swarm, scale, voting, application, certificates"]
[menu.main]
parent="workw_swarm"
weight=-35
+++
<![end-metadata]-->
@@ -146,7 +146,7 @@ CloudFormation template located
### Step 1. Build and configure the VPC
This step assumes you know how to configure a VPC either manually
or using the VPC wizard on Amazon. If you use the wizard, be sure to choose the **VPC with a
Single Public Subnet** option.