From c56517543b8a22efd699f7d972a82092220fbaaa Mon Sep 17 00:00:00 2001
From: Wang Jie
Date: Thu, 8 Mar 2018 16:13:29 +0800
Subject: [PATCH] Update install-manual.md

---
 swarm/install-manual.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/swarm/install-manual.md b/swarm/install-manual.md
index bdfb946748..8db9427578 100644
--- a/swarm/install-manual.md
+++ b/swarm/install-manual.md
@@ -38,7 +38,7 @@ VPC network.
 
 The **default** security group's initial set of rules deny all inbound
 traffic, allow all outbound traffic, and allow all traffic between instances.
-You're going to add a couple of rules to allow inbound SSH connections and 
+You're going to add a couple of rules to allow inbound SSH connections and
 inbound container images. This set of rules somewhat protects the Engine,
 Swarm, and Consul ports. For a production environment, you would apply more
 restrictive security measures. Do not leave Docker Engine ports unprotected.
@@ -85,7 +85,7 @@ group.
 When complete, the example deployment contains three types of nodes:
 
 | Node Description | Name |
 |--------------------------------------|-------------------------|
-| Swarm primary and secondary managers | `manager0`, `manager1` | 
+| Swarm primary and secondary managers | `manager0`, `manager1` |
 | Swarm node | `node0`, `node1` |
 | Discovery backend | `consul0` |
@@ -213,7 +213,7 @@ After creating the discovery backend, you can create the swarm managers. In this
 
         $ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise <manager1_ip>:4000 consul://172.30.0.161:8500
 
-6. Enter `docker ps`to verify that a swarm container is running. Then disconnect from the `manager1` instance.
+6. Enter `docker ps` to verify that a swarm container is running. Then disconnect from the `manager1` instance.
 
 7. Connect to `node0` and `node1` in turn and join them to the cluster.
 
@@ -262,11 +262,11 @@ replica.
 
 1. SSH connection to the `manager0` instance.
 
-2. Get the container id or name of the `swarm` container:
+2. Get the container ID or name of the `swarm` container:
 
         $ docker ps
 
-3. Shut down the primary manager, replacing `<id_name>` with the container's id or name (for example, "8862717fe6d3" or "trusting_lamarr").
+3. Shut down the primary manager, replacing `<id_name>` with the container's ID or name (for example, "8862717fe6d3" or "trusting_lamarr").
 
         docker container rm -f <id_name>
 
@@ -274,7 +274,7 @@ replica.
 
         $ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 172.30.0.161:4000 consul://172.30.0.161:8500
 
-5. Review the Engine's daemon logs the logs, replacing `<id_name>` with the new container's id or name:
+5. Review the Engine's daemon logs, replacing `<id_name>` with the new container's ID or name:
 
         $ sudo docker logs <id_name>
 
@@ -290,7 +290,7 @@ replica.
 You can connect to the `manager1` node and run the `info` and `logs` commands.
 They display corresponding entries for the change in leadership.
 
-## Additional Resources
+## Additional resources
 
 - [Installing Docker Engine on a cloud provider](/docker-for-aws/)
 - [High availability in Docker Swarm](multi-manager-setup.md)
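Reviewer note: a quick way to sanity-check a mail-formatted patch like the one above before merging is to run it through git. This is only a sketch; the filename `0001-Update-install-manual.md.patch` is an assumed name for a locally saved copy of this message, not something referenced by the patch itself.

    # Summarize what the patch touches; this one should report
    # swarm/install-manual.md with 7 insertions(+) and 7 deletions(-)
    $ git apply --stat 0001-Update-install-manual.md.patch

    # Dry run: confirm the hunks apply cleanly to the current checkout
    $ git apply --check 0001-Update-install-manual.md.patch

    # Apply it as a commit, keeping the author and subject from the mail headers
    $ git am 0001-Update-install-manual.md.patch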