Merge pull request #2100 from londoncalling/docs-format-clean-up

clean up code and images
Victoria Bialas 2017-03-03 17:06:18 -08:00 committed by GitHub
commit ca45058062
13 changed files with 47 additions and 145 deletions

View File

@@ -13,7 +13,7 @@ To shut down the voting app, simply stop the machines on which it is running. If
1. Open a terminal window and run `docker-machine ls` to list the current machines.
```
```none
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
manager   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.1
@@ -21,7 +21,7 @@ To shut down the voting app, simply stop the machines on which it is running. If
```
2. Use `docker-machine stop` to shut down each machine, beginning with the worker.
```
```none
$ docker-machine stop worker
Stopping "worker"...
Machine "worker" was stopped.
@@ -42,7 +42,7 @@ start them per your cloud setup.
1. Open a terminal window and list the machines.
```
```none
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL   SWARM   DOCKER    ERRORS
manager   -        virtualbox   Stopped                 Unknown
@@ -51,7 +51,7 @@ start them per your cloud setup.
3. Run `docker-machine start` to start each machine, beginning with the manager.
```
```none
$ docker-machine start manager
Starting "manager"...
(manager) Check network to re-create if needed...
@@ -73,7 +73,7 @@ start them per your cloud setup.
3. Run the following commands to log into the manager and see if the swarm is up.
```
```none
docker-machine ssh manager
docker@manager:~$ docker stack services vote
@@ -93,7 +93,7 @@ At this point, the app is back up. The web pages you looked at in the [test driv
If you prefer to remove your local machines altogether, use `docker-machine rm`
to do so. (Or, `docker-machine rm -f` will force-remove running machines.)
```
```none
$ docker-machine rm worker
About to remove worker
WARNING: This action will delete both local reference and remote instance.

View File

@@ -10,7 +10,7 @@ Now, we'll add our Docker machines to a [swarm](/engine/swarm/index.md).
1. Log into the manager.
```
```none
$ docker-machine ssh manager
## .
## ## ## ==
@@ -40,14 +40,14 @@ Now, we'll add our Docker machines to a [swarm](/engine/swarm/index.md).
The command to initialize a swarm is:
```
```none
docker swarm init --advertise-addr <MANAGER-IP>
```
Use the IP address of the manager. (See [Verify machines are running and get IP addresses](node-setup.md#verify-machines-are-running-and-get-ip-addresses)).
>**Tip**: To get the IP address of the manager, use a terminal window
that is not `ssh`'ed into a virtual machine (or exit out of a current one), and type either `docker-machine ip manager` or `docker-machine ls`. Look back at [Verify machines are running and get IP addresses](node-setup.md#verify-machines-are-running-and-get-ip-addresses) for examples.
```
```none
docker@manager:~$ docker swarm init --advertise-addr 192.168.99.100
Swarm initialized: current node (2tjrasfqfu945b7n4753374sw) is now a manager.
@@ -64,7 +64,7 @@ Now, we'll add our Docker machines to a [swarm](/engine/swarm/index.md).
1. Log into the worker machine.
```
```none
$ docker-machine ssh worker
## .
## ## ## ==
@@ -88,7 +88,7 @@ Now, we'll add our Docker machines to a [swarm](/engine/swarm/index.md).
2. On the worker, run the `join` command given as the output of the `swarm init` command you ran on the manager.
```
```none
docker@worker:~$ docker swarm join \
> --token SWMTKN-1-144pfsupfo25h43zzr6b6bghjson8uedxjsndo5vuehqlyarsk-9k4q84axm008whv9zl4a8m8ct \
> 192.168.99.100:2377
@@ -101,7 +101,7 @@ Now, we'll add our Docker machines to a [swarm](/engine/swarm/index.md).
Log into the manager (e.g., `docker-machine ssh manager`) and run `docker node ls`.
```
```none
docker@manager:~$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
2tjrasfqfu945b7n4753374sw *   manager    Ready    Active         Leader

View File

@@ -27,13 +27,13 @@ Go back to `docker-stack.yml` and replace the `before` tags on both the `vote` a
Run the same deploy command again to run the app with the new configuration.
```
```none
docker stack deploy --compose-file docker-stack.yml vote
```
The output will look similar to this:
```
```none
docker@manager:~$ docker stack deploy --compose-file docker-stack.yml vote
Updating service vote_redis (id: md73fohylg8q85aryz07852o0)
Updating service vote_db (id: gny9ieqxancnufrg1oeazz9gq)

View File

@@ -11,19 +11,9 @@ deploy the voting application to the swarm you just created.
The `docker-stack.yml` file must be located on a manager for the swarm where you want to deploy the application stack.
1. Get `docker-stack.yml` either from the [source code in the lab](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml) or by copying it from the example given [here](index.md#docker-stackyml-deployment-configuration-file).
1. [**Get the `docker-stack.yml` file**](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml) from the source code in the lab.
If you prefer to download the file directly from our GitHub
repository rather than copy it from the documentation, you can use a tool like `curl`. The following command downloads the raw file to the current directory on your local host; copy-paste it into your shell to run it:
```
curl -o docker-stack.yml https://raw.githubusercontent.com/docker/example-voting-app/master/docker-stack.yml
```
>**Tips:**
>
> * To get the URL for the raw file on GitHub, either use the link in the example command above, or go to the file on GitHub [here](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml), then click **Raw** in the upper right.
> * You might already have `curl` installed. If not, you can [get curl here](https://curl.haxx.se/).
Copy-and-paste the contents of that file into a file of the same name on your host.
2. Copy `docker-stack.yml` from your host machine onto the manager.
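One way to copy the file onto the manager is with `docker-machine scp`. This is a sketch, assuming you run it from the host directory containing `docker-stack.yml` and that the VM uses the default `docker` user and home directory:
```none
# run from the host directory that contains docker-stack.yml
$ docker-machine scp docker-stack.yml manager:/home/docker/docker-stack.yml
```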
@@ -42,13 +32,11 @@ The `docker-stack.yml` file must be located on a manager for the swarm where you
4. Check to make sure that the `.yml` file is there, using `ls`.
```
```none
docker@manager:~$ ls /home/docker/
docker-stack.yml
```
You can use `vi` or `cat` to inspect the file.
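For example, to print the file to the terminal:
```none
docker@manager:~$ cat docker-stack.yml
```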
## Deploy the app
We'll deploy the application from the manager.
@@ -56,7 +44,7 @@ We'll deploy the application from the manager.
1. Deploy the application stack based on the `.yml` using the command
[`docker stack deploy`](/engine/reference/commandline/stack_deploy.md) as follows.
```
```none
docker stack deploy --compose-file docker-stack.yml vote
```
@@ -66,7 +54,7 @@ We'll deploy the application from the manager.
Here is an example of the command and the output.
```
```none
docker@manager:~$ docker stack deploy --compose-file docker-stack.yml vote
Creating network vote_default
Creating network vote_backend
@@ -81,7 +69,7 @@ We'll deploy the application from the manager.
2. Verify that the stack deployed as expected with `docker stack services <APP-NAME>`.
```
```none
docker@manager:~$ docker stack services vote
ID             NAME          MODE         REPLICAS   IMAGE
1zkatkq7sf8n   vote_result   replicated   1/1        dockersamples/examplevotingapp_result:after

Binary file not shown. (Before: 43 KiB, After: 70 KiB)

Binary file not shown. (Before: 63 KiB, After: 100 KiB)

Binary file not shown. (Before: 35 KiB, After: 25 KiB)

Binary file not shown. (Before: 91 KiB, After: 85 KiB)

Binary file not shown. (Before: 33 KiB, After: 31 KiB)

View File

@@ -112,111 +112,20 @@ In the Getting Started with Docker tutorial, you wrote a
it to build a single image and run it as a single container.
For this tutorial, the images are pre-built, and we will use `docker-stack.yml`
(a Version 3 Compose file) instead of a Dockerfile
to run the images. When we deploy, each image will run as a service in a
container (or in multiple containers, for those that have replicas defined to
scale the app). This example relies on Compose version 3, which is designed to be compatible with Docker Engine swarm mode.
(a Version 3 Compose file) instead of a Dockerfile to run the images. When we
deploy, each image will run as a service in a container (or in multiple
containers, for those that have replicas defined to scale the app). This example
relies on Compose version 3, which is designed to be compatible with Docker
Engine swarm mode.
To follow along with the example, you need only have Docker running and the copy
of `docker-stack.yml` we provide
[here](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml).
[**here**](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml).
This file defines all the services shown in the
[table above](#services-and-images-overview), their base images,
configuration details such as ports, networks, volumes,
application dependencies, and the swarm configuration.
```
version: "3"
services:

  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure

  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]

  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    networks:
      - frontend
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure

  result:
    image: dockersamples/examplevotingapp_result:before
    ports:
      - 5001:80
    networks:
      - backend
    depends_on:
      - db
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure

  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 1
      labels: [APP=VOTING]
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
      placement:
        constraints: [node.role == manager]

  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  frontend:
  backend:

volumes:
  db-data:
```
## Docker stacks and services
To deploy the voting app, we will run the [`docker stack

View File

@@ -47,13 +47,13 @@ are as follows.
#### Mac
```
```none
docker-machine create --driver virtualbox HOST-NAME
```
#### Windows
```
```none
docker-machine create -d hyperv --hyperv-virtual-switch "NETWORK-SWITCH" MACHINE-NAME
```
@@ -70,7 +70,7 @@ Create two machines and name them to anticipate what their roles will be in the
Here is an example of creating the `manager` on Docker for Mac. Create this one, then do the same for `worker`.
```
```none
$ docker-machine create --driver virtualbox manager
Running pre-create checks...
Creating machine...
@@ -98,7 +98,7 @@ To see how to connect your Docker Client to the Docker Engine running on this vi
Use `docker-machine ls` to verify that the machines are
running and to get their IP addresses.
```
```none
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0-rc6
@@ -112,7 +112,7 @@ to become swarm nodes.
You can also get the IP address of a particular machine:
```
```none
$ docker-machine ip manager
192.168.99.100
```
@@ -138,7 +138,7 @@ example, we'll set up a shell to talk to our manager machine.
1. Run `docker-machine env manager` to get environment variables for the manager.
```
```none
$ docker-machine env manager
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
@@ -153,13 +153,13 @@ example, we'll set up a shell to talk to our manager machine.
On Mac:
```
```none
$ eval $(docker-machine env manager)
```
On Windows PowerShell:
```
```none
& docker-machine.exe env manager | Invoke-Expression
```
@@ -169,7 +169,7 @@ Mac and Windows.
3. Run `docker-machine ls` again.
```
```none
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0-rc6
@@ -190,7 +190,7 @@ open.
Alternatively, you can use the command `docker-machine ssh <MACHINE-NAME>` to
log into a machine.
```
```none
$ docker-machine ssh manager
## .
## ## ## ==

View File

@@ -24,6 +24,12 @@ Now, go to `MANAGER-IP:5001` in a web browser to view the voting results tally,
![Results web page](images/vote-results.png)
>**Tip**: To get the IP address of the manager, open a terminal window that
is not `ssh`'ed into a virtual machine (or exit out of a current one), and
type either `docker-machine ip manager` or `docker-machine ls`. Look back at
[Verify machines are running and get IP addresses](node-setup.md#verify-machines-are-running-and-get-ip-addresses) for
examples.
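For example (the IP address will vary with your setup):
```none
$ docker-machine ip manager
192.168.99.100
```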
## Use the visualizer to monitor the app
Go to `<MANAGER-IP>:8080` to get a visual map of how the application is
@@ -39,11 +45,11 @@ action here. For example:
* The manager node is running the PostgreSQL container, as configured by setting `[node.role == manager]` as a constraint in the deploy key for the `db` service. This service must be constrained to run on the manager in order to work properly.
![db manager constraint in yml](images/db-manager-constraint.png)
![node role manager](images/db-manager-constraint.png)
* The manager node is also running the visualizer itself, as configured by setting `[node.role == manager]` as a constraint in the deploy key for the `visualizer` service. This service must be constrained to run on the manager in order to work properly. If you remove the constraint, and it ends up on a worker, the web page display will be blank.
![visualizer manager constraint in yml](images/visualizer-manager-constraint.png)
![visualizer role manager](images/visualizer-manager-constraint.png)
* Two of the services are replicated:
@@ -53,8 +59,7 @@ action here. For example:
Both of these services are configured as `replicas: 2` under
the `deploy` key (see the excerpt after this list). In the current state of this app (shown in the visualizer), one of each of these containers is running on a manager and on a worker. However, since neither is explicitly constrained to either node in `docker-stack.yml`, all or some of these services could be running on either node, depending on workload and re-balancing choices we've left to the swarm orchestration.
![replicas in yml](images/replicas-constraint.png)
![replicas](images/replicas-constraint.png)
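For reference, here are the relevant `deploy` keys, excerpted from the `docker-stack.yml` file shown earlier (other service keys omitted):
```
  db:
    image: postgres:9.4
    deploy:
      placement:
        constraints: [node.role == manager]

  vote:
    image: dockersamples/examplevotingapp_vote:before
    deploy:
      replicas: 2
```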
## What's next?