Getstarted tweaks (#3294)

* getting started edits, improvements to flow

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* added image of app in browser, more subtopics

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* copyedit of Orientation

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* copyedits, rewording, examples, additions for swarms and services

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* review comments, troubleshooting hints and clarifications

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>

* corrected cloud link to AWS beta swarm topic, added Azure

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>
Victoria Bialas 2017-06-09 17:55:37 -07:00 committed by GitHub
parent b58c86179e
commit 4c05490bab
10 changed files with 452 additions and 280 deletions

(Binary image files added or updated in this commit: three new images of 25 KiB, 30 KiB, and 212 KiB, plus one existing 196 KiB image replaced. The files themselves are not shown.)

View File

@@ -92,10 +92,10 @@ An **image** is a lightweight, stand-alone, executable package that includes
everything needed to run a piece of software, including the code, a runtime,
libraries, environment variables, and config files.

A **container** is a runtime instance of an image&#8212;what the image becomes
in memory when actually executed. It runs completely isolated from the host
environment by default, only accessing host files and ports if configured to do
so.

Containers run apps natively on the host machine's kernel. They have better
performance characteristics than virtual machines that only get virtual access
@@ -111,21 +111,21 @@ Consider this diagram comparing virtual machines to containers:

![Virtual machine stack example](https://www.docker.com/sites/default/files/VM%402x.png)

Virtual machines run guest operating systems&#8212;note the OS layer in each
box. This is resource intensive, and the resulting disk image and application
state is an entanglement of OS settings, system-installed dependencies, OS
security patches, and other easy-to-lose, hard-to-replicate ephemera.

### Container diagram

![Container stack example](https://www.docker.com/sites/default/files/Container%402x.png)
Containers can share a single kernel, and the only information that needs to be
in a container image is the executable and its package dependencies, which never
need to be installed on the host system. These processes run like native
processes, and you can manage them individually by running commands like `docker
ps`&#8212;just like you would run `ps` on Linux to see active processes.
Finally, because they contain all their dependencies, there is no configuration
entanglement; a containerized app "runs anywhere."
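As a quick aside (not part of the original text, but using only standard Docker CLI commands), you can see the two concepts side by side on your own machine:

```shell
docker image ls    # images stored locally, e.g. hello-world after the setup step below
docker ps -a       # containers created from those images, whether running or exited
```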
## Setup
@@ -135,11 +135,11 @@ installed.

[Install Docker](/engine/installation/index.md){: class="button outline-btn" style="margin-bottom: 30px; margin-right:100%; clear: left;"}
<div style="clear:left"></div>

> **Note**: version 1.13 or higher is required

You should be able to run `docker run hello-world` and see a response like this:

```shell
$ docker run hello-world

Hello from Docker!
@@ -149,22 +149,23 @@ To generate this message, Docker took the following steps:
...(snipped)...
```

Now would also be a good time to make sure you are using version 1.13 or higher. Run `docker --version` to check it out.

```shell
$ docker --version
Docker version 17.05.0-ce-rc1, build 2878a85
```

If you see messages like the ones above, you are ready to begin your journey.

## Conclusion
The unit of scale being an individual, portable executable has vast
implications. It means CI/CD can push updates to any part of a distributed
application, system dependencies are not an issue, and resource density is
increased. Orchestration of scaling behavior is a matter of spinning up new
executables, not new VM hosts.

We'll be learning about all of these things, but first let's learn to walk.

[On to Part 2 >>](part2.md){: class="button outline-btn" style="margin-bottom: 30px; margin-right: 100%"}

View File

@@ -12,7 +12,7 @@ description: Learn how to write, build, and run a simple app -- the Docker way.

- Read the orientation in [Part 1](index.md).
- Give your environment a quick test run to make sure you're all set up:

```shell
docker run hello-world
```
@@ -136,10 +136,10 @@ isn't running (as we've only installed the Python library, and not Redis
itself), we should expect that the attempt to use it here will fail and produce
the error message.

> **Note**: Accessing the name of the host when inside a container retrieves the
container ID, which is like the process ID for a running executable.
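A quick, hedged way to see this for yourself (using the `python:2.7-slim` image that this app builds on) is to print the hostname from inside a throwaway container:

```shell
# The value printed is the container ID, the same ID that `docker ps` shows.
docker run --rm python:2.7-slim python -c "import socket; print(socket.gethostname())"
```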
## Build the app
That's it! You don't need Python or anything in `requirements.txt` on your That's it! You don't need Python or anything in `requirements.txt` on your
system, nor will building or running this image install them on your system. It system, nor will building or running this image install them on your system. It
@@ -179,10 +179,23 @@ docker run -p 4000:80 friendlyhello
```

You should see a notice that Python is serving your app at `http://0.0.0.0:80`.
But that message is coming from inside the container, which doesn't know you
mapped port 80 of that container to 4000, making the correct URL
`http://localhost:4000`.

Go to that URL in a web browser to see the display content served up on a
web page, including "Hello World" text, the container ID, and the Redis error
message.
![Hello World in browser](images/app-in-browser.png)
You can also use the `curl` command in a shell to view the same content.
```shell
$ curl http://localhost:4000
<h3>Hello World!</h3><b>Hostname:</b> 8fc990912a14<br/><b>Visits:</b> <i>cannot connect to Redis, counter disabled</i>
```
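As a hedged aside (not part of the original walkthrough), the host side of the `-p` mapping is yours to choose; only the container side needs to match the port the app listens on:

```shell
docker run -p 4000:80 friendlyhello   # app served at http://localhost:4000
docker run -p 8080:80 friendlyhello   # same app, now at http://localhost:8080 (8080 is an arbitrary choice)
```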
> **Note**: This port remapping of `4000:80` is to demonstrate the difference
between what you `EXPOSE` within the `Dockerfile`, and what you `publish` using
@@ -218,55 +231,105 @@ docker stop 1fa4ab2cf395

## Share your image

To demonstrate the portability of what we just created, let's upload our built
image and run it somewhere else. After all, you'll need to learn how to push to
registries when you want to deploy containers to production.

A registry is a collection of repositories, and a repository is a collection of
images&#8212;sort of like a GitHub repository, except the code is already
built. An account on a registry can create many repositories. The `docker` CLI uses Docker's public registry by default.
> **Note**: We'll be using Docker's public registry here just because it's free
and pre-configured, but there are many public ones to choose from, and you can
even set up your own private registry using [Docker Trusted
Registry](/datacenter/dtr/2.2/guides/).
### Log in with your Docker ID

If you don't have a Docker account, sign up for one at [cloud.docker.com](https://cloud.docker.com/){: target="_blank" class="_" }. Make note of your username.

Log in to the Docker public registry on your local machine.

```shell
docker login
```
### Tag the image

The notation for associating a local image with a repository on a registry is
`username/repository:tag`. The tag is optional, but recommended, since it is
the mechanism that registries use to give Docker images a version. Give the
repository and tag meaningful names for the context, such as
`get-started:part1`. This will put the image in the `get-started` repository and
tag it as `part1`.

Now, put it all together to tag the image. Run `docker tag image` with your
username, repository, and tag names so that the image will upload to your
desired destination. The syntax of the command is:

```shell
docker tag image username/repository:tag
```

For example:
```shell
docker tag friendlyhello john/get-started:part1
```
Run [docker images](/engine/reference/commandline/images/) to see your newly tagged image. (You can also use `docker image ls`.)
```shell
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
friendlyhello latest d9e555c53008 3 minutes ago 195MB
john/get-started part1 d9e555c53008 3 minutes ago 195MB
python 2.7-slim 1c7128a655f6 5 days ago 183MB
...
```
### Publish the image
Upload your tagged image to the repository:
```shell
docker push username/repository:tag
```
Once complete, the results of this upload are publicly available. If you log in
to [Docker Hub](https://hub.docker.com/), you will see the new image there, with
its pull command.
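That pull command will have this general shape (a sketch using the same placeholder notation as the rest of this page):

```shell
docker pull username/repository:tag
```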
### Pull and run the image from the remote repository
From now on, you can use `docker run` and run your app on any machine with this
command:
```shell
docker run -p 4000:80 username/repository:tag
```
If the image isn't available locally on the machine, Docker will pull it from the repository.
```shell
$ docker run -p 4000:80 john/get-started:part1
Unable to find image 'john/get-started:part1' locally
part1: Pulling from orangesnap/get-started
10a267c67f42: Already exists
f68a39a6a5e4: Already exists
9beaffc0cf19: Already exists
3c1fe835fb6b: Already exists
4c9f1fa8fcb8: Already exists
ee7d8f576a14: Already exists
fbccdcced46e: Already exists
Digest: sha256:0601c866aab2adcc6498200efd0f754037e909e5fd42069adeff72d1e2439068
Status: Downloaded newer image for john/get-started:part1
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
```
> **Note**: If you don't specify the `:tag` portion of these commands,
the tag of `:latest` will be assumed, both when you build and when you run
images. Docker will use the last version of the image that ran without a tag specified (not necessarily the most recent image).
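For example, once an image has been tagged, these two commands are equivalent (a hedged illustration of the default):

```shell
docker run -p 4000:80 username/repository          # implies username/repository:latest
docker run -p 4000:80 username/repository:latest   # the same thing, spelled out
```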
No matter where `docker run` executes, it pulls your image, along with Python
and all the dependencies from `requirements.txt`, and runs your code. It all
@@ -287,8 +350,8 @@ Here's [a terminal recording of what was covered on this page](https://asciinema

<script type="text/javascript" src="https://asciinema.org/a/blkah0l4ds33tbe06y4vkme6g.js" id="asciicast-blkah0l4ds33tbe06y4vkme6g" speed="2" async></script>

Here is a list of the basic Docker commands from this page, and some related
ones if you'd like to explore a bit before moving on.

```shell
docker build -t friendlyname . # Create image using this directory's Dockerfile

View File

@@ -1,22 +1,29 @@
---
title: "Get Started, Part 3: Services"
keywords: services, replicas, scale, ports, compose, compose file, stack, networking
description: Learn how to define a load-balanced and scalable service that runs containers.
---

{% include_relative nav.html selected="3" %}

## Prerequisites

- [Install Docker version 1.13 or higher](/engine/installation/index.md).
- Get [Docker Compose](/compose/overview.md). On [Docker for Mac](/docker-for-mac/index.md) and [Docker for
Windows](/docker-for-windows/index.md) it's pre-installed so you are good to go,
but on Linux systems you will need to [install it
directly](https://github.com/docker/compose/releases). On pre-Windows 10 systems
_without Hyper-V_, use [Docker
Toolbox](https://docs.docker.com/toolbox/overview.md).
- Read the orientation in [Part 1](index.md).
- Learn how to create containers in [Part 2](part2.md).
- Make sure you have published the `friendlyhello` image you created by
[pushing it to a registry](/get-started/part2.md#share-your-image). We will be using that shared image here.
- Be sure your image works as a deployed container by running this command, and visiting `http://localhost/` (slotting in your info for `username`,
`repo`, and `tag`):

```shell
docker run -p 80:80 username/repo:tag
```
@@ -32,15 +39,15 @@ must go one level up in the hierarchy of a distributed application: the

## Understanding services

In a distributed application, different pieces of the app are called "services."
For example, if you imagine a video sharing site, it probably includes a service
for storing application data in a database, a service for video transcoding in
the background after a user uploads something, a service for the front-end, and
so on.

Services are really just "containers in production." A service only runs one
image, but it codifies the way that image runs&#8212;what ports it should use,
how many replicas of the container should run so the service has the capacity it
needs, and so on. Scaling a service changes the number of container instances
running that piece of software, assigning more computing resources to the
service in the process.
@@ -56,13 +63,15 @@ should behave in production.

### `docker-compose.yml`

Save this file as `docker-compose.yml` wherever you want. Be sure you have
[pushed the image](/get-started/part2.md#share-your-image) you created in [Part
2](part2.md) to a registry, and update this `.yml` by replacing
`username/repo:tag` with your image details.
```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repository:tag
    deploy:
      replicas: 5
@@ -82,7 +91,8 @@ networks:

This `docker-compose.yml` file tells Docker to do the following:

- Pull [the image we uploaded in step 2](part2.md) from the registry.
- Run five instances of that image as a service
called `web`, limiting each one to use, at most, 10% of the CPU (across all
cores), and 50MB of RAM.
- Immediately restart containers if one fails.
@@ -95,9 +105,9 @@ This `docker-compose.yml` file tells Docker to do the following:

## Run your new load-balanced app

Before we can use the `docker stack deploy` command we'll first run

```shell
docker swarm init
```
@@ -107,13 +117,13 @@ docker swarm init

Now let's run it. You have to give your app a name -- here it is set to
`getstartedlab`:

```shell
docker stack deploy -c docker-compose.yml getstartedlab
```

See a list of the five containers you just launched:

```shell
docker stack ps getstartedlab
```
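As an optional check that is not part of the original steps, you can also inspect the service itself; based on the stack name used above, the service should appear as `getstartedlab_web` with five replicas:

```shell
docker service ls                      # lists services in the swarm, with replica counts
docker service ps getstartedlab_web    # per-container detail for just this service
```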
@@ -132,29 +142,35 @@ the five replicas is chosen, in a round-robin fashion, to respond.

You can scale the app by changing the `replicas` value in `docker-compose.yml`,
saving the change, and re-running the `docker stack deploy` command:

```shell
docker stack deploy -c docker-compose.yml getstartedlab
```

Docker will do an in-place update; there is no need to tear the stack down first or kill
any containers.

Now, re-run the `docker stack ps` command to see the deployed instances
reconfigured. For example, if you scaled up the replicas, there will be more
running containers.

### Take down the app and the swarm
Take the app down with `docker stack rm`:

```shell
docker stack rm getstartedlab
```
This removes the app, but our one-node swarm is still up and running (as shown
by `docker node ls`). Take down the swarm with `docker swarm leave --force`.
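Putting those two teardown commands together (both are named above; this is just the combined sequence for a single-node swarm):

```shell
docker stack rm getstartedlab    # remove the app stack
docker swarm leave --force       # dismantle the swarm on this machine
```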
It's as easy as that to stand up and scale your app with Docker. You've taken a
huge step towards learning how to run containers in production. Up next, you
will learn how to run this app as a bona fide swarm on a cluster of Docker
machines.

> **Note**: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using [Docker
Cloud](/docker-cloud/), or on any hardware or cloud provider you choose with
[Docker Enterprise Edition](https://www.docker.com/enterprise-edition).

[On to "Part 4" >>](part4.md){: class="button outline-btn" style="margin-bottom: 30px"}

View File

@@ -7,17 +7,24 @@ description: Learn how to create clusters of Dockerized machines.

## Prerequisites

- [Install Docker version 1.13 or higher](/engine/installation/index.md).
- Get [Docker Compose](/compose/overview.md) as described in [Part 3 prerequisites](/get-started/part3.md#prerequisites).
- Get [Docker Machine](/machine/overview.md), which is pre-installed with
[Docker for Mac](/docker-for-mac/index.md) and [Docker for
Windows](/docker-for-windows/index.md), but on Linux systems you need to
[install it directly](/machine/install-machine/#installing-machine-directly). On pre-Windows 10 systems
_without Hyper-V_, use [Docker
Toolbox](https://docs.docker.com/toolbox/overview.md).
- Read the orientation in [Part 1](index.md).
- Learn how to create containers in [Part 2](part2.md).
- Make sure you have published the `friendlyhello` image you created by
[pushing it to a registry](/get-started/part2.md#share-your-image). We will be using that shared image here.
- Be sure your image works as a deployed container by running this command, and visiting `http://localhost/` (slotting in your info for `username`,
`repo`, and `tag`):

```shell
docker run -p 80:80 username/repo:tag
```
- Have a copy of your `docker-compose.yml` from [Part 3](part3.md) handy.
@@ -34,7 +41,7 @@ by joining multiple machines into a "Dockerized" cluster called a **swarm**.

## Understanding Swarm clusters

A swarm is a group of machines that are running Docker and joined into
a cluster. After that has happened, you continue to run the Docker commands
you're used to, but now they are executed on a cluster by a **swarm manager**.
The machines in a swarm can be physical or virtual. After joining a swarm, they
@@ -49,24 +56,24 @@ file, just like the one you have already been using.

Swarm managers are the only machines in a swarm that can execute your commands,
or authorize other machines to join the swarm as **workers**. Workers are just
there to provide capacity and do not have the authority to tell any other
machine what it can and cannot do.

Up until now, you have been using Docker in a single-host mode on your local
machine. But Docker also can be switched into **swarm mode**, and that's what
enables the use of swarms. Enabling swarm mode instantly makes the current
machine a swarm manager. From then on, Docker will run the commands you execute
on the swarm you're managing, rather than just on the current machine.
{% capture local-instructions %}

You now have two VMs created, named `myvm1` and `myvm2` (as `docker-machine ls`
shows). The first one will act as the manager, which executes `docker` commands
and authenticates workers to join the swarm, and the second will be a worker.

You can send commands to your VMs using `docker-machine ssh`. Instruct `myvm1`
to become a swarm manager with `docker swarm init` and you'll see output like
this:

```shell
$ docker-machine ssh myvm1 "docker swarm init"

Swarm initialized: current node <node ID> is now a manager.
@@ -77,7 +84,7 @@ To add a worker to this swarm, run the following command:

<ip>:<port>
```

> Got an error about needing to use `--advertise-addr`?
>
> Copy the
> IP address for `myvm1` by running `docker-machine ls`, then run the
@@ -87,13 +94,14 @@ To add a worker to this swarm, run the following command:
> ```
> docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100:2377"
> ```
{: .note-vanilla}
As you can see, the response to `docker swarm init` contains a pre-configured
`docker swarm join` command for you to run on any nodes you want to add. Copy
this command, and send it to `myvm2` via `docker-machine ssh` to have `myvm2`
join your new swarm as a worker:

```shell
$ docker-machine ssh myvm2 "docker swarm join \
--token <token> \
<ip>:<port>"
@@ -108,6 +116,24 @@ to open a terminal session on that VM. Type `exit` when you're ready to return
to the host shell prompt. It may be easier to paste the join command in that
way.

Use `ssh` to connect to the manager (`docker-machine ssh myvm1`), and run `docker node ls` to view the nodes in this swarm:
```shell
docker@myvm1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
brtu9urxwfd5j0zrmkubhpkbd myvm2 Ready Active
rihwohkh3ph38fhillhhb84sk * myvm1 Ready Active Leader
```
Type `exit` to get back out of that machine.
Alternatively, wrap commands in `docker-machine ssh` to keep from having to directly log in and out. For example:
```shell
docker-machine ssh myvm1 "docker node ls"
```
{% endcapture %}

{% capture local %}
@@ -146,7 +172,7 @@ able to connect to each other.

Now, create a couple of virtual machines using our node management tool,
`docker-machine`:

```shell
$ docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1
$ docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm2
```
@@ -159,7 +185,7 @@ $ docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm2

A swarm is made up of multiple nodes, which can be either physical or virtual
machines. The basic concept is simple enough: run `docker swarm init` to enable
swarm mode and make your current machine a swarm manager, then run
`docker swarm join` on other machines to have them join the swarm as workers.
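In outline, and using the same placeholders as the output shown earlier, the flow looks like this (a sketch only; follow the tab-specific instructions below for your platform):

```shell
# On the machine that will become the swarm manager:
docker swarm init

# On each machine that should join as a worker, using the token and address
# printed by the init command above:
docker swarm join --token <token> <ip>:<port>
```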
Choose a tab below to see how this plays out in various contexts. We'll use VMs
to quickly create a two-machine cluster and turn it into a swarm.
@@ -229,27 +255,51 @@ look:

![routing mesh diagram](/engine/swarm/images/ingress-routing-mesh.png)

> Having connectivity trouble?
>
> Keep in mind that in order to use the ingress network in the swarm,
> you need to have the following ports open between the swarm nodes
> before you enable swarm mode:
>
> - Port 7946 TCP/UDP for container network discovery.
> - Port 4789 UDP for the container ingress network.
{: .note-vanilla}
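How you open those ports depends on your platform. As one hedged example, on a Linux node that uses `ufw` the rules might look like this (adjust for your own firewall tooling; this is not part of the original page):

```shell
sudo ufw allow 7946/tcp   # container network discovery
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # container ingress network
```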
## Iterating and scaling your app

From here you can do everything you learned about in part 3.

Scale the app by changing the `docker-compose.yml` file.

Change the app behavior by editing code.

In either case, simply run `docker stack deploy` again to deploy these
changes.

You can join any machine, physical or virtual, to this swarm, using the
same `docker swarm join` command you used on `myvm2`, and capacity will be added
to your cluster. Just run `docker stack deploy` afterwards, and your app will
take advantage of the new resources.
## Cleanup
You can tear down the stack with `docker stack rm`. For example:
```
docker-machine ssh myvm1 "docker stack rm getstartedlab"
```
> Keep the swarm or remove it?
>
> At some point later, you can remove this swarm if you want to with
> `docker-machine ssh myvm2 "docker swarm leave"` on the worker
> and `docker-machine ssh myvm1 "docker swarm leave --force"` on the
> manager, but _you'll need this swarm for part 5, so please keep it
> around for now_.
{: .note-vanilla}
[On to Part 5 >>](part5.md){: class="button outline-btn" style="margin-bottom: 30px"}

## Recap and cheat sheet (optional)

Here's [a terminal recording of what was covered on this page](https://asciinema.org/a/113837):

View File

@@ -9,19 +9,31 @@ description: Learn how to create a multi-container application that uses all the

## Prerequisites

- [Install Docker version 1.13 or higher](/engine/installation/).
- Get [Docker Compose](/compose/overview.md) as described in [Part 3 prerequisites](/get-started/part3.md#prerequisites).
- Get [Docker Machine](/machine/overview.md) as described in [Part 4 prerequisites](/get-started/part4.md#prerequisites).
- Read the orientation in [Part 1](index.md).
- Learn how to create containers in [Part 2](part2.md).
- Make sure you have pushed the container you created to a registry, as
instructed; we'll be using it here.
- Be sure your image works as a deployed container by running this command, and visiting `http://localhost/` (slotting in your info for `username`,
`repo`, and `tag`):
```shell
docker run -p 80:80 username/repo:tag
```
- Have a copy of your `docker-compose.yml` from [Part 3](part3.md) handy.
- Make sure that the machines you set up in [part 4](part4.md) are running
and ready. Run `docker-machine ls` to verify this. If the machines are
stopped, run `docker-machine start myvm1` to boot the manager, followed
by `docker-machine start myvm2` to boot the worker.
- Have the swarm you created in [part 4](part4.md) running and ready. Run
`docker-machine ssh myvm1 "docker node ls"` to verify this. If the swarm is up,
both nodes will report a `ready` status. If not, reinitialize the swarm and join
the worker as described in [Set up your
swarm](/get-started/part4.md#set-up-your-swarm).
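The checks from the last two bullets, gathered into one place for convenience (these are the same commands named above):

```shell
docker-machine ls                            # myvm1 and myvm2 should both be Running
docker-machine start myvm1                   # boot the manager if it is stopped
docker-machine start myvm2                   # boot the worker if it is stopped
docker-machine ssh myvm1 "docker node ls"    # both nodes should report Ready
```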
## Introduction
@@ -38,182 +50,208 @@ application (though very complex applications may want to use multiple stacks).

Some good news is, you have technically been working with stacks since part 3,
when you created a Compose file and used `docker stack deploy`. But that was a
single service stack running on a single host, which is not usually what takes
place in production. Here, you will take what you've learned, make
multiple services relate to each other, and run them on multiple machines.

You're doing great, this is the home stretch!
## Add a new service and redeploy

It's easy to add services to our `docker-compose.yml` file. First, let's add
a free visualizer service that lets us look at how our swarm is scheduling
containers.

1. Open up `docker-compose.yml` in an editor and replace its contents
with the following. Be sure to replace `username/repo:tag` with your image details.

```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
```

The only thing new here is the peer service to `web`, named `visualizer`.
You'll see two new things here: a `volumes` key, giving the visualizer
access to the host's socket file for Docker, and a `placement` key, ensuring
that this service only ever runs on a swarm manager -- never a worker.
That's because this container, built from [an open source project created by
Docker](https://github.com/ManoMarks/docker-swarm-visualizer), displays
Docker services running on a swarm in a diagram.

We'll talk more about placement constraints and volumes in a moment.
2. Copy this new `docker-compose.yml` file to the swarm manager, `myvm1`:
```shell
docker-machine scp docker-compose.yml myvm1:~
```
3. Re-run the `docker stack deploy` command on the manager, and
whatever services need updating will be updated:
```shell
$ docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"
Updating service getstartedlab_web (id: angi1bf5e4to03qu9f93trnxm)
Updating service getstartedlab_visualizer (id: l9mnwkeq2jiononb5ihz9u7a4)
```
4. Take a look at the visualizer.
You saw in the Compose file that `visualizer` runs on port 8080. Get the
IP address of one of your nodes by running `docker-machine ls`. Go
to either IP address at port 8080 and you will see the visualizer running:
![Visualizer screenshot](images/get-started-visualizer1.png)
The single copy of `visualizer` is running on the manager as you expect, and
the five instances of `web` are spread out across the swarm. You can
corroborate this visualization by running `docker stack ps <stack>`:
```shell
docker-machine ssh myvm1 "docker stack ps getstartedlab"
```
The visualizer is a standalone service that can run in any app
that includes it in the stack. It doesn't depend on anything else.
Now let's create a service that *does* have a dependency: the Redis
service that will provide a visitor counter.
## Persist the data
Let's go through the same workflow once more to add a Redis database for storing
app data.
1. Save this new `docker-compose.yml` file, which finally adds a
Redis service. Be sure to replace `username/repo:tag` with your image details.
```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - ./data:/data
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
```

Redis has an official image in the Docker library and has been granted the
short `image` name of just `redis`, so no `username/repo` notation here. The
Redis port, 6379, has been pre-configured by Redis to be exposed from the
container to the host, and here in our Compose file we expose it from the
host to the world, so you can actually enter the IP for any of your nodes
into Redis Desktop Manager and manage this Redis instance, if you so choose.

Most importantly, there are a couple of things in the `redis` specification
that make data persist between deployments of this stack:

- `redis` always runs on the manager, so it's always using the
same filesystem.
- `redis` accesses an arbitrary directory in the host's file system
as `/data` inside the container, which is where Redis stores data.

Together, this is creating a "source of truth" in your host's physical
filesystem for the Redis data. Without this, Redis would store its data in
`/data` inside the container's filesystem, which would get wiped out if that
container were ever redeployed.

This source of truth has two components:

- The placement constraint you put on the Redis service, ensuring that it
always uses the same host.
- The volume you created that lets the container access `./data` (on the host)
as `/data` (inside the Redis container). While containers come and go, the
files stored on `./data` on the specified host will persist, enabling
continuity.

You are ready to deploy your new Redis-using stack.

2. Create a `./data` directory on the manager:

```shell
$ docker-machine ssh myvm1 "mkdir ./data"
```

3. Copy over the new `docker-compose.yml` file with `docker-machine scp`:

```shell
$ docker-machine scp docker-compose.yml myvm1:~
```

4. Run `docker stack deploy` one more time.

```shell
$ docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"
```

5. Check the web page at `http://localhost` and you'll see the results of the visitor counter, which is now live and storing information on Redis.

![Hello World in browser with Redis](images/app-in-browser-redis.png)

Also, check the visualizer at port 8080 on either node's IP address, and you'll see the `redis` service running along with the `web` and `visualizer` services.

![Visualizer with redis screenshot](images/visualizer-with-redis.png)
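If you prefer the command line, the `curl` check from part 2 works here as well (a hedged example; the counter value and hostname will vary):

```shell
curl http://localhost   # the Visits field should now show a number instead of the Redis error
```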
[On to Part 6 >>](part6.md){: class="button outline-btn" style="margin-bottom: 30px"}

View File

@@ -8,32 +8,32 @@ description: Deploy your app to production using Docker CE or EE.

## Prerequisites

- [Install Docker version 1.13 or higher](/engine/installation/).
- Get [Docker Compose](/compose/overview.md) as described in [Part 3 prerequisites](/get-started/part3.md#prerequisites).
- Get [Docker Machine](/machine/overview.md) as described in [Part 4 prerequisites](/get-started/part4.md#prerequisites).
- Read the orientation in [Part 1](index.md).
- Learn how to create containers in [Part 2](part2.md).
- Make sure you have pushed the container you created to a registry, as
instructed; we'll be using it here.
- Be sure your image works as a deployed container by running this command, and visiting `http://localhost/` (slotting in your info for `username`,
`repo`, and `tag`):

```shell
docker run -p 80:80 username/repo:tag
```

- Have [the final version of `docker-compose.yml` from Part 5](/get-started/part5.md#persisting-data) handy.
## Introduction

You've been editing the same Compose file for this entire tutorial. Well, we
have good news. That Compose file works just as well in production as it does
on your machine. Here, we'll go through some options for running your
Dockerized application.
## Choose an option

{% capture cloud %}
If you're okay with using Docker Community Edition in
production, you can use Docker Cloud to help manage your app on popular service providers such as Amazon Web Services, DigitalOcean, and Microsoft Azure.

To set up and deploy:
@@ -60,9 +60,13 @@ First, link Docker Cloud with your cloud provider:

After your cloud provider is all set up, create a Swarm:

* If you're on Amazon Web Services (AWS) you
can [automatically create a
swarm](/docker-cloud/cloud-swarm/create-cloud-swarm-aws/){: onclick="ga('send', 'event', 'Get Started Referral AWS', 'Cloud', 'Create AWS Swarm');"}.
* If you are on Microsoft Azure, you can [automatically create a
swarm](/docker-cloud/cloud-swarm/create-cloud-swarm-azure/){: onclick="ga('send', 'event', 'Get Started Referral Azure', 'Cloud', 'Create Azure Swarm');"}.
* Otherwise, [create your nodes](/docker-cloud/getting-started/your_first_node/){: onclick="ga('send', 'event', 'Get Started Referral', 'Cloud', 'Create Nodes');"}
in the Docker Cloud UI, and run the `docker swarm init` and `docker swarm join`
commands you learned in [part 4](part4.md) over [SSH via Docker