Get started updates (#9608)

* first pass at gs pt 1

* suggestions from Dawn

* unbreak tabs

* get started part 2 refresh

* comments from Dawn and Adrian

* draft of kube get started

* first draft of swarm demo for gs

* saving some updates...

* first draft docker hub section

* comments from Dawn

* comments from Dawn

* comments from Dawn

* comments from Dawn

* gsu frontmatter

* removed toc entry for get started part 6 and changed toc titles for the rest of get started topics

* addressing Adrian's feedback

* fixing top nav buttons
This commit is contained in:
Dawn-Docker 2019-10-09 16:15:00 -07:00 committed by Adrian Plata
parent a107f8720b
commit 5bddf49a1a
8 changed files with 566 additions and 1569 deletions


@ -127,20 +127,18 @@ guides:
title: FAQ
- sectiontitle: Get started
section:
- sectiontitle: Quickstart
section:
- title: "Part 1: Orientation and setup"
path: /get-started/
- title: "Part 2: Containerizing an Application"
path: /get-started/part2/
- title: "Part 3: Deploying to Kubernetes"
path: /get-started/part3/
- title: "Part 4: Deploying to Swarm"
path: /get-started/part4/
- title: "Part 5: Sharing Images on Docker Hub"
path: /get-started/part5/
- path: /engine/docker-overview/
title: Docker overview
- sectiontitle: Develop with Docker

Binary file not shown. (New image, 226 KiB)


@ -1,7 +1,7 @@
---
title: "Get Started, Part 1: Orientation and setup"
keywords: get started, setup, orientation, quickstart, intro, concepts, containers, docker desktop
description: Get oriented on some basics of Docker and install Docker Desktop.
redirect_from:
- /getstarted/
- /get-started/part1/
@ -61,165 +61,371 @@ teaches you how to:
1. Set up your Docker environment (on this page)
2. [Build an image and run it as one container](part2.md)
3. [Set up and use a Kubernetes environment on your development machine](part3.md)
4. [Set up and use a Swarm environment on your development machine](part4.md)
5. [Share your containerized applications on Docker Hub](part5.md)

## Docker concepts
Docker is a platform for developers and sysadmins to **build, share, and run**
applications with containers. The use of containers to deploy applications
is called _containerization_. Containers are not new, but their use for easily
deploying applications is.
Containerization is increasingly popular because containers are:
- Flexible: Even the most complex applications can be containerized.
- Lightweight: Containers leverage and share the host kernel,
  making them much more efficient in terms of system resources than virtual machines.
- Portable: You can build locally, deploy to the cloud, and run anywhere.
- Loosely coupled: Containers are highly self-sufficient and encapsulated,
  allowing you to replace or upgrade one without disrupting others.
- Scalable: You can increase and automatically distribute container replicas across a datacenter.
- Secure: Containers apply aggressive constraints and isolations to processes without
  any configuration required on the part of the user.
![Containers are portable](images/laurel-docker-containers.png){:width="100%"}
### Images and containers
Fundamentally, a container is nothing but a running process,
with some added encapsulation features applied to it in order to keep it isolated from the host
and from other containers.

One of the most important aspects of container isolation is that each container interacts
with its own, private filesystem; this filesystem is provided by a Docker **image**.
An image includes everything needed to run an application -- the code or binary,
runtimes, dependencies, and any other filesystem objects required.
### Containers and virtual machines
A container runs _natively_ on Linux and shares the kernel of the host
machine with other containers. It runs a discrete process, taking no more memory
than any other executable, making it lightweight.
By contrast, a **virtual machine** (VM) runs a full-blown "guest" operating
system with _virtual_ access to host resources through a hypervisor. In general,
VMs incur a lot of overhead beyond what is being consumed by your application logic.
![Container stack example](/images/Container%402x.png){:width="300px"} | ![Virtual machine stack example](/images/VM%402x.png){:width="300px"}
## Install Docker Desktop

The best way to get started developing containerized applications is with Docker Desktop, for OSX or Windows. Docker Desktop allows you to easily set up Kubernetes or Swarm on your local development machine, so you can use all the features of the orchestrator you're developing applications for right away, with no cluster required. Follow the installation instructions appropriate for your operating system:
- [OSX](/docker-for-mac/install/){: target="_blank" class="_"}
- [Windows](/docker-for-windows/install/){: target="_blank" class="_"}
<div style="clear:left"></div>
## Enable Kubernetes
Docker Desktop will set up Kubernetes for you quickly and easily. Follow the setup and validation instructions appropriate for your operating system:
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" href="#kubeosx">OSX</a></li>
<li><a data-toggle="tab" href="#kubewin">Windows</a></li>
</ul>
<div class="tab-content">
<div id="kubeosx" class="tab-pane fade in active">
{% capture local-content %}
#### OSX
1. After installing Docker Desktop, you should see a Docker icon in your menu bar. Click on it, and navigate to **Preferences... -> Kubernetes**.
2. Check the checkbox labeled *Enable Kubernetes*, and click **Apply**. Docker Desktop will automatically set up Kubernetes for you. You'll know everything has completed successfully once you can click on the Docker icon in the menu bar, and see a green light beside 'Kubernetes is Running'.
3. In order to confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: testpod
    image: alpine:3.5
    command: ["ping", "8.8.8.8"]
```
This describes a pod with a single container, isolating a simple ping to 8.8.8.8.

4. In a terminal, navigate to where you created `pod.yaml` and create your pod:

```shell
kubectl apply -f pod.yaml
```
5. Check that your pod is up and running:
```shell
kubectl get pods
```
You should see something like:
```shell
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          4s
```
6. Check that you get the logs you'd expect for a ping process:
```shell
kubectl logs demo
```
You should see the output of a healthy ping process:
```shell
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=37 time=21.393 ms
64 bytes from 8.8.8.8: seq=1 ttl=37 time=15.320 ms
64 bytes from 8.8.8.8: seq=2 ttl=37 time=11.111 ms
...
```
7. Finally, tear down your test pod:

```shell
kubectl delete -f pod.yaml
```

{% endcapture %}
{{ local-content | markdownify }}
</div>
<div id="kubewin" class="tab-pane fade" markdown="1">
{% capture localwin-content %}
#### Windows
1. After installing Docker Desktop, you should see a Docker icon in your system tray. Right-click on it, and navigate to **Settings -> Kubernetes**.

2. Check the checkbox labeled *Enable Kubernetes*, and click **Apply**. Docker Desktop will automatically set up Kubernetes for you. Note that this can take a significant amount of time (up to 20 minutes). You'll know everything has completed successfully once you can right-click on the Docker icon in the system tray, click **Settings**, and see a green light beside 'Kubernetes is running'.
3. In order to confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: testpod
    image: alpine:3.5
    command: ["ping", "8.8.8.8"]
```
This describes a pod with a single container, isolating a simple ping to 8.8.8.8.
4. In PowerShell, navigate to where you created `pod.yaml` and create your pod:
```shell
kubectl apply -f pod.yaml
```
5. Check that your pod is up and running:
```shell
kubectl get pods
```
You should see something like:
```shell
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          4s
```
6. Check that you get the logs you'd expect for a ping process:
```shell
kubectl logs demo
```
You should see the output of a healthy ping process:
```shell
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=37 time=21.393 ms
64 bytes from 8.8.8.8: seq=1 ttl=37 time=15.320 ms
64 bytes from 8.8.8.8: seq=2 ttl=37 time=11.111 ms
...
```
7. Finally, tear down your test pod:

```shell
kubectl delete -f pod.yaml
```

{% endcapture %}
{{ localwin-content | markdownify }}
</div>
<hr>
</div>
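As a sketch of how the pod spec used above can grow, here is a hypothetical variant adding a label and a restart policy (both are illustrative additions, not required by this guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo              # hypothetical label, useful for kubectl selectors
spec:
  restartPolicy: OnFailure # restart the container only if the ping exits nonzero
  containers:
  - name: testpod
    image: alpine:3.5
    command: ["ping", "8.8.8.8"]
```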
## Enable Docker Swarm
Docker Desktop runs primarily on Docker Engine, which has everything you need to run a Swarm built in. Follow the setup and validation instructions appropriate for your operating system:
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" href="#swarmosx">OSX</a></li>
<li><a data-toggle="tab" href="#swarmwin">Windows</a></li>
</ul>
<div class="tab-content">
<div id="swarmosx" class="tab-pane fade in active">
{% capture local-content %}
#### OSX
1. Open a terminal, and initialize Docker Swarm mode:
```shell
docker swarm init
```
If all goes well, you should see a message similar to the following:
```shell
Swarm initialized: current node (tjjggogqpnpj2phbfbz8jd5oq) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3e0hh0jd5t4yjg209f4g5qpowbsczfahv2dea9a1ay2l8787cf-2h4ly330d0j917ocvzw30j5x9 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:

```shell
docker service create --name demo alpine:3.5 ping 8.8.8.8
```

3. Check that your service created one running container:

```shell
docker service ps demo
```

You should see something like:

```shell
ID            NAME    IMAGE       NODE            DESIRED STATE  CURRENT STATE           ERROR  PORTS
463j2s3y4b5o  demo.1  alpine:3.5  docker-desktop  Running        Running 8 seconds ago
```
4. Check that you get the logs you'd expect for a ping process:

```shell
docker service logs demo
```

You should see the output of a healthy ping process:
```shell
demo.1.463j2s3y4b5o@docker-desktop | PING 8.8.8.8 (8.8.8.8): 56 data bytes
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=0 ttl=37 time=13.005 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=1 ttl=37 time=13.847 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=2 ttl=37 time=41.296 ms
...
```
5. Finally, tear down your test service:
```shell
docker service rm demo
```
{% endcapture %}
{{ local-content | markdownify }}
</div>
<div id="swarmwin" class="tab-pane fade" markdown="1">
{% capture localwin-content %}
#### Windows
1. Open PowerShell, and initialize Docker Swarm mode:
```shell
docker swarm init
```
If all goes well, you should see a message similar to the following:
```shell
Swarm initialized: current node (tjjggogqpnpj2phbfbz8jd5oq) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3e0hh0jd5t4yjg209f4g5qpowbsczfahv2dea9a1ay2l8787cf-2h4ly330d0j917ocvzw30j5x9 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:
```shell
docker service create --name demo alpine:3.5 ping 8.8.8.8
```
3. Check that your service created one running container:
```shell
docker service ps demo
```
You should see something like:
```shell
ID            NAME    IMAGE       NODE            DESIRED STATE  CURRENT STATE           ERROR  PORTS
463j2s3y4b5o  demo.1  alpine:3.5  docker-desktop  Running        Running 8 seconds ago
```
4. Check that you get the logs you'd expect for a ping process:
```shell
docker service logs demo
```
You should see the output of a healthy ping process:
```shell
demo.1.463j2s3y4b5o@docker-desktop | PING 8.8.8.8 (8.8.8.8): 56 data bytes
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=0 ttl=37 time=13.005 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=1 ttl=37 time=13.847 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=2 ttl=37 time=41.296 ms
...
```
5. Finally, tear down your test service:
```shell
docker service rm demo
```
{% endcapture %}
{{ localwin-content | markdownify }}
</div>
<hr>
</div>
## Conclusion
At this point, you've installed Docker Desktop on your development machine and confirmed that you can run simple containerized workloads in Kubernetes and Swarm. In the next section, we'll start developing our first containerized application.
[On to Part 2 >>](part2.md){: class="button outline-btn" style="margin-bottom: 30px; margin-right: 100%"}
## CLI References
Further documentation for all CLI commands used in this article is available here:
- [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply)
- [`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
- [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs)
- [`kubectl delete`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete)
- [`docker swarm init`](https://docs.docker.com/engine/reference/commandline/swarm_init/)
- [`docker service *`](https://docs.docker.com/engine/reference/commandline/service/)


@ -1,8 +1,7 @@
<ul class="pagination">
<li {% if include.selected=="1"%}class="active"{% endif %}><a href="part1">1: Orientation and setup</a></li>
<li {% if include.selected=="2"%}class="active"{% endif %}><a href="part2">2: Containerizing an application</a></li>
<li {% if include.selected=="3"%}class="active"{% endif %}><a href="part3">3: Deploying to Kubernetes</a></li>
<li {% if include.selected=="4"%}class="active"{% endif %}><a href="part4">4: Deploying to Swarm</a></li>
<li {% if include.selected=="5"%}class="active"{% endif %}><a href="part5">5: Sharing images on Docker Hub</a></li>
</ul>


@ -1,451 +1,114 @@
---
title: "Get Started, Part 2: Containerizing an Application"
keywords: containers, images, dockerfiles, node, code, coding, build, push, run
description: Learn how to create a Docker image by writing a Dockerfile, and use it to run a simple container.
---
{% include_relative nav.html selected="2" %}
## Prerequisites
- Work through setup and orientation in [Part 1](index.md).
## Introduction
Now that we've got our orchestrator of choice set up in our development environment thanks to Docker Desktop,
we can begin to develop containerized applications. In general, the development workflow looks like this:
1. Create and test individual containers for each component of your application by first creating Docker images.
2. Assemble your containers and supporting infrastructure into a complete application, expressed either as a *Docker stack file* or in Kubernetes YAML.
3. Test, share and deploy your complete containerized application.
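Step 2 of this workflow can be sketched as a minimal Docker stack file. This is a hypothetical example; the service name, image tag, and ports are illustrative assumptions, not part of this guide:

```yaml
version: "3.7"                  # Compose file format understood by docker stack deploy
services:
  bulletin-board:               # hypothetical service name
    image: bulletin-board:1.0   # hypothetical image created in step 1
    ports:
      - "8000:8080"             # publish the app's (assumed) port to the host
```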
In this stage of the tutorial, let's focus on step 1 of this workflow: creating the images that our containers will be based on. Remember, a Docker image captures the private filesystem that our containerized processes will run in; we need to create an image that contains just what our application needs to run.
> **Containerized development environments** are easier to set up than traditional development environments, once you learn how to build images as we'll discuss below. This is because a containerized development environment will isolate all the dependencies your app needs inside your Docker image; there's no need to install anything other than Docker on your development machine. In this way, you can easily develop applications for different stacks without changing anything on your development machine.
## Setting Up
1. Clone an example project from GitHub (if you don't have git installed, see the [install instructions](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first):
```shell
git clone -b v1 https://github.com/docker-training/node-bulletin-board
cd node-bulletin-board/bulletin-board-app
```
This is a simple bulletin board application, written in node.js. In this example, let's imagine you wrote this app, and are now trying to containerize it.
2. Have a look at the file called `Dockerfile`. Dockerfiles describe how to assemble a private filesystem for a container, and can also contain some metadata describing how to run a container based on this image. The bulletin board app Dockerfile looks like this:
```dockerfile
FROM node:6.11.5

WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

CMD [ "npm", "start" ]
```
Writing a Dockerfile is the first step to containerizing an application. You can think of these Dockerfile commands as a step-by-step recipe on how to build up our image. This one takes the following steps:
- Start `FROM` the pre-existing `node:6.11.5` image. This is an *official image*, built by the node.js vendors and validated by Docker to be a high-quality image containing the node 6.11.5 interpreter and basic dependencies.
- Use `WORKDIR` to specify that all subsequent actions should be taken from the directory `/usr/src/app` *in your image filesystem* (never the host's filesystem).
- `COPY` the file `package.json` from your host to the present location (`.`) in your image (so in this case, to `/usr/src/app/package.json`)
- `RUN` the command `npm install` inside your image filesystem (which will read `package.json` to determine your app's node dependencies, and install them)
- `COPY` in the rest of your app's source code from your host to your image filesystem.
You can see that these are much the same steps you might have taken to set up and install your app on your host - but capturing these as a Dockerfile allows us to do the same thing inside a portable, isolated Docker image.
The steps above built up the filesystem of our image, but there's one more line in our Dockerfile. The `CMD` directive is our first example of specifying some metadata in our image that describes how to run a container based off of this image. In this case, it's saying that the containerized process that this image is meant to support is `npm start`.
What you see above is a good way to organize a simple Dockerfile; always start with a `FROM` command, follow it with the steps to build up your private filesystem, and conclude with any metadata specifications. There are many more Dockerfile directives than just the few we see above; for a complete list, see the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).
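As a sketch of how such metadata directives fit the pattern above, here is a hypothetical variant of the bulletin board Dockerfile; the `EXPOSE` port is an illustrative assumption, not taken from the app:

```dockerfile
FROM node:6.11.5

WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

EXPOSE 8080              # hypothetical: document the port the app listens on
CMD [ "npm", "start" ]   # metadata: default command for containers from this image
```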
## Build and Test Your Image
Now that we have some source code and a Dockerfile, it's time to build our first image, and make sure the containers launched from it work as expected.
## Build the app
We are ready to build the app. Make sure you are still at the top level of your
new directory. Here's what `ls` should show:
```shell
$ ls
Dockerfile app.py requirements.txt
```
Now run the build command. This creates a Docker image, which we're going to
name using the `--tag` option. Use `-t` if you want to use the shorter option.
```shell
docker build --tag=friendlyhello .
```
Where is your built image? It's in your machine's local Docker image registry:
```shell
$ docker image ls
REPOSITORY TAG IMAGE ID
friendlyhello latest 326387cea398
```
Note how the tag defaulted to `latest`. The full syntax for the tag option would
be something like `--tag=friendlyhello:v0.0.1`.
> Troubleshooting for Linux users
>
> _Proxy server settings_
>
> Proxy servers can block connections to your web app once it's up and running.
> If you are behind a proxy server, add the following lines before `RUN pip` in your
> Dockerfile, using the `ENV` command to specify the host and port for your
> proxy servers:
>
> ```conf
> # Set proxy server, replace host:port with values for your servers
> ENV http_proxy host:port
> ENV https_proxy host:port
> ```
>
> _DNS settings_
>
> DNS misconfigurations can generate problems with `pip`. You need to set your
> own DNS server address to make `pip` work properly. You might want
> to change the DNS settings of the Docker daemon. You can edit (or create) the
> configuration file at `/etc/docker/daemon.json` with the `dns` key, as following:
>
> ```json
>{
> "dns": ["your_dns_address", "8.8.8.8"]
>}
> ```
>
> In the example above, the first element of the list is the address of your DNS
> server. The second item is Google's DNS which can be used when the first one is
> not available.
>
> Before proceeding, save `daemon.json` and restart the docker service.
>
> `sudo service docker restart`
>
> Once fixed, retry to run the `build` command.
>
> _MTU settings_
>
> If the MTU (default is 1500) on the default bridge network is greater than the MTU of the host external network, then `pip` fails. Set the MTU of the docker bridge network to match that of the host by editing (or creating) the configuration file at `/etc/docker/daemon.json` with the `mtu` key, as follows:
>
> ```json
>{
> "mtu": 1450
>}
> ```
> Before proceeding, save `daemon.json` and restart the docker service.
>
> `sudo systemctl restart docker`
>
> Re-run the `build` command.
## Run the app
Run the app, mapping your machine's port 4000 to the container's published port
80 using `-p`:
```shell
docker run -p 4000:80 friendlyhello
```
You should see a message that Python is serving your app at `http://0.0.0.0:80`.
But that message is coming from inside the container, which doesn't know you
mapped port 80 of that container to 4000, making the correct URL
`http://localhost:4000`.
Go to that URL in a web browser to see the display content served up on a
web page.
![Hello World in browser](images/app-in-browser.png)
> **Note**: If you are using Docker Toolbox on Windows 7, use the Docker Machine IP
> instead of `localhost`. For example, http://192.168.99.100:4000/. To find the IP
> address, use the command `docker-machine ip`.
You can also use the `curl` command in a shell to view the same content.
```shell
$ curl http://localhost:4000
<h3>Hello World!</h3><b>Hostname:</b> 8fc990912a14<br/><b>Visits:</b> <i>cannot connect to Redis, counter disabled</i>
```
This port remapping of `4000:80` demonstrates the difference
between `EXPOSE` within the `Dockerfile` and the `publish` value set when running
`docker run -p`. In later steps, you map port 4000 on the host to port 80
in the container and use `http://localhost`.
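As an aside (not a step in this tutorial; host port 8080 below is arbitrary), only the host-side number in `-p` changes where you reach the app from outside the container:

```shell
# Same image, same container port 80; only the host-side port differs,
# so the app would be reachable at http://localhost:8080 instead of :4000.
docker run -p 8080:80 friendlyhello
```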
Hit `CTRL+C` in your terminal to quit.
> On Windows, explicitly stop the container
>
> On Windows systems, `CTRL+C` does not stop the container. So, first
> type `CTRL+C` to get the prompt back (or open another shell), then type
> `docker container ls` to list the running containers, followed by
> `docker container stop <Container NAME or ID>` to stop the
> container. Otherwise, you get an error response from the daemon
> when you try to re-run the container in the next step.
Now let's run the app in the background, in detached mode:
```shell
docker run -d -p 4000:80 friendlyhello
```
You get the long container ID for your app and then are kicked back to your
terminal. Your container is running in the background. You can also see the
abbreviated container ID with `docker container ls` (and both work interchangeably when
running commands):
```shell
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED
1fa4ab2cf395 friendlyhello "python app.py" 28 seconds ago
```
Notice that `CONTAINER ID` matches what's on `http://localhost:4000`.
Now use `docker container stop` to end the process, using the `CONTAINER ID`, like so:
```shell
docker container stop 1fa4ab2cf395
```
## Share your image
To demonstrate the portability of what we just created, let's upload our built
image and run it somewhere else. After all, you need to know how to push to
registries when you want to deploy containers to production.
A registry is a collection of repositories, and a repository is a collection of
images&#8212;sort of like a GitHub repository, except the code is already built.
An account on a registry can create many repositories. The `docker` CLI uses
Docker's public registry by default.
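As an illustration of that default (the image names below are examples, not part of the tutorial), an image reference without a registry host implicitly points at Docker Hub:

```shell
# These two commands pull the same image: 'docker.io' (Docker Hub) is the
# default registry, and 'library' is the namespace for official images.
docker pull ubuntu:18.04
docker pull docker.io/library/ubuntu:18.04
```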
> **Note**: We use Docker's public registry here just because it's free
> and pre-configured, but there are many public ones to choose from, and you can
> even set up your own private registry using [Docker Trusted
> Registry](/datacenter/dtr/2.2/guides/).
> **Windows users**: this example uses Linux containers. Make sure your environment is running Linux containers by right-clicking on the Docker logo in your system tray, and clicking 'Switch to Linux containers...' if the option appears. Don't worry - everything you'll learn in this tutorial works the exact same way for Windows containers.
### Log in with your Docker ID
1. Make sure you're in the directory `node-bulletin-board/bulletin-board-app` in a terminal or powershell, and build your bulletin board image:
If you don't have a Docker account, sign up for one at
[hub.docker.com](https://hub.docker.com){: target="_blank" class="_" }.
Make note of your username.
```script
docker image build -t bulletinboard:1.0 .
```
Log in to the Docker public registry on your local machine.
You'll see Docker step through each instruction in your Dockerfile, building up your image as it goes. If successful, the build process should end with a message `Successfully tagged bulletinboard:1.0`.
```shell
$ docker login
```
> **Windows Users:** you may receive a message titled 'SECURITY WARNING' at this step, noting the read, write and execute permissions being set for files added to your image; we aren't handling any sensitive information in this example, so feel free to disregard this warning in this example.
### Tag the image
2. Start a container based on your new image:
The notation for associating a local image with a repository on a registry is
`username/repository:tag`. The tag is optional, but recommended, since it is
the mechanism that registries use to give Docker images a version. Give the
repository and tag meaningful names for the context, such as
`get-started:part2`. This puts the image in the `get-started` repository and
tags it as `part2`.
```script
docker container run --publish 8000:8080 --detach --name bb bulletinboard:1.0
```
Now, put it all together to tag the image. Run `docker tag image` with your
username, repository, and tag names so that the image uploads to your
desired destination. The syntax of the command is:
We used a couple of common flags here:
```shell
docker tag image username/repository:tag
```
- `--publish` asks Docker to forward incoming traffic on the host's port 8000 to the container's port 8080. (Containers have their own private set of ports, so if we want to reach one from the network, we have to forward traffic to it in this way; otherwise, firewall rules prevent all network traffic from reaching your container, as a default security posture.)
- `--detach` asks Docker to run this container in the background.
- `--name` lets us specify a name with which we can refer to our container in subsequent commands, in this case `bb`.
For example:
Also notice that we didn't specify what process we wanted our container to run. We didn't have to, since we used the `CMD` directive when building our Dockerfile; thanks to this, Docker knows to automatically run the process `npm start` inside our container when it starts up.
```shell
docker tag friendlyhello gordon/get-started:part2
```
3. Visit your application in a browser at `localhost:8000`. You should see your bulletin board application up and running. At this step, we would normally do everything we could to ensure our container works the way we expected; now would be the time to run unit tests, for example.
Run [docker image ls](/engine/reference/commandline/image_ls/) to see your newly
tagged image.
```shell
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
friendlyhello latest d9e555c53008 3 minutes ago 195MB
gordon/get-started part2 d9e555c53008 3 minutes ago 195MB
python 2.7-slim 1c7128a655f6 5 days ago 183MB
...
```
### Publish the image
Upload your tagged image to the repository:
```shell
docker push username/repository:tag
```
Once complete, the results of this upload are publicly available. If you log in
to [Docker Hub](https://hub.docker.com/), you see the new image there, with
its pull command.
### Pull and run the image from the remote repository
From now on, you can use `docker run` and run your app on any machine with this
command:
```shell
docker run -p 4000:80 username/repository:tag
```
If the image isn't available locally on the machine, Docker pulls it from
the repository.
```shell
$ docker run -p 4000:80 gordon/get-started:part2
Unable to find image 'gordon/get-started:part2' locally
part2: Pulling from gordon/get-started
10a267c67f42: Already exists
f68a39a6a5e4: Already exists
9beaffc0cf19: Already exists
3c1fe835fb6b: Already exists
4c9f1fa8fcb8: Already exists
ee7d8f576a14: Already exists
fbccdcced46e: Already exists
Digest: sha256:0601c866aab2adcc6498200efd0f754037e909e5fd42069adeff72d1e2439068
Status: Downloaded newer image for gordon/get-started:part2
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
```
No matter where `docker run` executes, it pulls your image, along with Python
and all the dependencies from `requirements.txt`, and runs your code. It all
travels together in a neat little package, and you don't need to install
anything on the host machine for Docker to run it.
## Conclusion of part two
That's all for this page. In the next section, we learn how to scale our
application by running this container in a **service**.
[Continue to Part 3 >>](part3.md){: class="button outline-btn"}
Or, learn how to [launch your container on your own machine using DigitalOcean](https://docs.docker.com/machine/examples/ocean/){: target="_blank" class="_" }.
## Recap and cheat sheet (optional)
Here's [a terminal recording of what was covered on this
page](https://asciinema.org/a/blkah0l4ds33tbe06y4vkme6g):
<script type="text/javascript"
src="https://asciinema.org/a/blkah0l4ds33tbe06y4vkme6g.js"
id="asciicast-blkah0l4ds33tbe06y4vkme6g" speed="2" async></script>
Here is a list of the basic Docker commands from this page, and some related
ones if you'd like to explore a bit before moving on.
```shell
docker build -t friendlyhello . # Create image using this directory's Dockerfile
docker run -p 4000:80 friendlyhello # Run "friendlyhello" mapping port 4000 to 80
docker run -d -p 4000:80 friendlyhello # Same thing, but in detached mode
docker container ls # List all running containers
docker container ls -a # List all containers, even those not running
docker container stop <hash> # Gracefully stop the specified container
docker container kill <hash> # Force shutdown of the specified container
docker container rm <hash> # Remove specified container from this machine
docker container rm $(docker container ls -a -q) # Remove all containers
docker image ls -a # List all images on this machine
docker image rm <image id> # Remove specified image from this machine
docker image rm $(docker image ls -a -q) # Remove all images from this machine
docker login # Log in this CLI session using your Docker credentials
docker tag <image> username/repository:tag # Tag <image> for upload to registry
docker push username/repository:tag # Upload tagged image to registry
docker run username/repository:tag # Run image from a registry
```
4. Once you're satisfied that your bulletin board container works correctly, delete it:
```script
docker container rm --force bb
```
## Conclusion
At this point, we've performed a simple containerization of an application and confirmed that our app runs successfully in its container. The next step is either to write the Kubernetes YAML that describes how to run and manage these containers on Kubernetes (which we'll study in Part 3 of this tutorial), or to write the stack file that lets us do the same on Docker Swarm (which we discuss in Part 4).
[On to Part 3 >>](part3.md){: class="button outline-btn" style="margin-bottom: 30px; margin-right: 100%"}
## CLI References
Further documentation for all CLI commands used in this article is available here:
- [docker image *](https://docs.docker.com/engine/reference/commandline/image/)
- [docker container *](https://docs.docker.com/engine/reference/commandline/container/)
- [Dockerfile reference](https://docs.docker.com/engine/reference/builder/)
@@ -1,278 +1,137 @@
---
title: "Get Started, Part 3: Services"
keywords: services, replicas, scale, ports, compose, compose file, stack, networking
description: Learn how to define load-balanced and scalable service that runs containers.
title: "Get Started, Part 3: Deploying to Kubernetes"
keywords: kubernetes, pods, deployments, kubernetes services
description: Learn how to describe and deploy a simple application on Kubernetes.
---
{% include_relative nav.html selected="3" %}
## Prerequisites
- [Install Docker version 1.13 or higher](/engine/installation/index.md).
- Work through containerizing an application in [Part 2](part2.md).
- Make sure that Kubernetes is enabled on your Docker Desktop:
- **macOS**: click the Docker icon in your menu bar and make sure there's a green light beside 'Kubernetes is Running'.
- **Windows**: click the Docker icon in the system tray and navigate to Kubernetes, and make sure there's a green light beside 'Kubernetes is Running'.
- Get [Docker Compose](/compose/overview.md). On [Docker Desktop for
Mac](/docker-for-mac/index.md) and [Docker Desktop for
Windows](/docker-for-windows/index.md) it's pre-installed, so you're good-to-go.
On Linux systems you need to [install it
directly](https://github.com/docker/compose/releases). On pre Windows 10 systems
_without Hyper-V_, use [Docker
Toolbox](/toolbox/overview.md).
- Read the orientation in [Part 1](index.md).
- Learn how to create containers in [Part 2](part2.md).
- Make sure you have published the `friendlyhello` image you created by
[pushing it to a registry](/get-started/part2.md#share-your-image). We use that
shared image here.
- Be sure your image works as a deployed container. Run this command,
slotting in your info for `username`, `repo`, and `tag`: `docker run -p 4000:80
username/repo:tag`, then visit `http://localhost:4000/`.
If Kubernetes isn't running, follow the instructions in [Part 1](index.md) of this tutorial to finish setting it up.
## Introduction
In part 3, we scale our application and enable load-balancing. To do this, we
must go one level up in the hierarchy of a distributed application: the
**service**.
Now that we've demonstrated that the individual components of our application run as stand-alone containers, it's time to arrange for them to be managed by an orchestrator like Kubernetes. Kubernetes provides many tools for scaling, networking, securing and maintaining your containerized applications, above and beyond the abilities of containers themselves.
- Stack
- **Services** (you are here)
- Container (covered in [part 2](part2.md))
In order to validate that our containerized application works well on Kubernetes, we'll use Docker Desktop's built in Kubernetes environment right on our development machine to deploy our application, before handing it off to run on a full Kubernetes cluster in production. The Kubernetes environment created by Docker Desktop is _fully featured_, meaning it has all the Kubernetes features your app will enjoy on a real cluster, accessible from the convenience of your development machine.
## About services
## Describing Apps Using Kubernetes YAML
In a distributed application, different pieces of the app are called "services".
For example, if you imagine a video sharing site, it probably includes a service
for storing application data in a database, a service for video transcoding in
the background after a user uploads something, a service for the front-end, and
so on.
All containers in Kubernetes are scheduled as _pods_, which are groups of co-located containers that share some resources. Furthermore, in a realistic application we almost never create individual pods; instead, most of our workloads are scheduled as _deployments_, which are scalable groups of pods maintained automatically by Kubernetes. Lastly, all Kubernetes objects can and should be described in manifests called _Kubernetes YAML_ files; these YAML files describe all the components and configurations of your Kubernetes app, and can be used to easily create and destroy your app in any Kubernetes environment.
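To make that vocabulary concrete, the smallest possible Kubernetes manifest describes a single bare pod (a sketch with hypothetical names; the tutorial itself uses a Deployment instead, for the reasons above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # hypothetical name
spec:
  containers:
    - name: demo
      image: nginx:alpine # any runnable image works here
```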
Services are really just "containers in production." A service only runs one
image, but it codifies the way that image runs&#8212;what ports it should use,
how many replicas of the container should run so the service has the capacity it
needs, and so on. Scaling a service changes the number of container instances
running that piece of software, assigning more computing resources to the
service in the process.
1. You already wrote a very basic Kubernetes YAML file in the first part of this tutorial; let's write a slightly more sophisticated one now, to run and manage our bulletin board. Place the following in a file called `bb.yaml` and save it in the same place you put the other YAML file.
Luckily it's very easy to define, run, and scale services with the Docker
platform -- just write a `docker-compose.yml` file.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: bb-demo
namespace: default
spec:
replicas: 1
selector:
matchLabels:
bb: web
template:
metadata:
labels:
bb: web
spec:
containers:
- name: bb-site
image: bulletinboard:1.0
---
apiVersion: v1
kind: Service
metadata:
name: bb-entrypoint
namespace: default
spec:
type: NodePort
selector:
bb: web
ports:
- port: 8080
targetPort: 8080
nodePort: 30001
```
## Your first `docker-compose.yml` file
In this Kubernetes YAML file, we have two objects, separated by the `---`:
- A `Deployment`, describing a scalable group of identical pods. In this case, you'll get just one `replica`, or copy, of your pod, and that pod (which is described under the `template:` key) has just one container in it, based off of your `bulletinboard:1.0` image from the previous step in this tutorial.
- A `NodePort` service, which will route traffic from port 30001 on your host to port 8080 inside the pods it routes to, allowing you to reach your bulletin board from the network.
A `docker-compose.yml` file is a YAML file that defines how Docker containers
should behave in production.
Also notice that while Kubernetes YAML can appear long and complicated at first, it almost always follows the same pattern:
- The `apiVersion`, which indicates the Kubernetes API that parses this object
- The `kind`, indicating what sort of object this is
- Some `metadata`, applying things like names to your objects
- The `spec`, specifying all the parameters and configurations of your object.
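Schematically (placeholders in angle brackets), every manifest follows that skeleton:

```yaml
apiVersion: <group>/<version>  # e.g. apps/v1
kind: <ObjectKind>             # e.g. Deployment
metadata:
  name: <object-name>
spec:
  # object-specific parameters and configuration
```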
### `docker-compose.yml`
## Deploying and Checking Your Application
Save this file as `docker-compose.yml` wherever you want. Be sure you have
[pushed the image](/get-started/part2.md#share-your-image) you created in [Part
2](part2.md) to a registry, and update this `.yml` by replacing
`username/repo:tag` with your image details.
1. In a terminal, navigate to where you created `bb.yaml` and deploy your application to Kubernetes:
```yaml
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
networks:
- webnet
networks:
webnet:
```
```shell
kubectl apply -f bb.yaml
```
This `docker-compose.yml` file tells Docker to do the following:
You should see output that looks like the following, indicating that your Kubernetes objects were created successfully:
- Pull [the image we uploaded in step 2](part2.md) from the registry.
```shell
deployment.apps/bb-demo created
service/bb-entrypoint created
```
- Run 5 instances of that image as a service
called `web`, limiting each one to use, at most, 10% of a single core of
CPU time (this could also be e.g. "1.5" to mean one and a half cores for each),
and 50MB of RAM.
2. Make sure everything worked by listing your deployments:
- Immediately restart containers if one fails.
```shell
kubectl get deployments
```
- Map port 4000 on the host to `web`'s port 80.
If all is well, your deployment is listed as follows:
- Instruct `web`'s containers to share port 80 via a load-balanced network
called `webnet`. (Internally, the containers themselves publish to
`web`'s port 80 at an ephemeral port.)
```shell
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
bb-demo 1 1 1 1 48s
```
- Define the `webnet` network with the default settings (which is a
load-balanced overlay network).
This indicates that the one pod you asked for in your YAML is up and running. Do the same check for your services:
```shell
kubectl get services
## Run your new load-balanced app
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bb-entrypoint NodePort 10.106.145.116 <none> 8080:30001/TCP 53s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 138d
```
Before we can use the `docker stack deploy` command we first run:
In addition to the default `kubernetes` service, we see our `bb-entrypoint` service, accepting traffic on port 30001/TCP.
```shell
docker swarm init
```
3. Open a browser and visit your bulletin board at `localhost:30001`; you should see your bulletin board, the same as when we ran it as a stand-alone container in the previous step of this tutorial.
>**Note**: We get into the meaning of that command in [part 4](part4.md).
> If you don't run `docker swarm init` you get an error that "this node is not a swarm manager."
4. Once satisfied, tear down your application:
Now let's run it. You need to give your app a name. Here, it is set to
`getstartedlab`:
```shell
kubectl delete -f bb.yaml
```
```shell
docker stack deploy -c docker-compose.yml getstartedlab
```
## Conclusion
Our single service stack is running 5 container instances of our deployed image
on one host. Let's investigate.
At this point, we have successfully used Docker Desktop to deploy our application to a fully-featured Kubernetes environment on our development machine. We haven't done much with Kubernetes yet, but the door is now open: you can begin adding other components to your app and taking advantage of all the features and power of Kubernetes, right on your own machine.
Get the service ID for the one service in our application:
In addition to deploying to Kubernetes, we have also described our application as a Kubernetes YAML file. This simple text file contains everything we need to create our application in a running state; we can check it into version control and share it with our colleagues, allowing us to distribute our applications to other clusters (like the testing and production clusters that probably come after our development environments) easily.
```shell
docker service ls
```
[On to Part 4 >>](part4.md){: class="button outline-btn" style="margin-bottom: 30px; margin-right: 100%"}
Look for output for the `web` service, prepended with your app name. If you
named it the same as shown in this example, the name is
`getstartedlab_web`. The service ID is listed as well, along with the number of
replicas, image name, and exposed ports.
## Kubernetes References
Alternatively, you can run `docker stack services`, followed by the name of
your stack. The following example command lets you view all services associated with the
`getstartedlab` stack:
Further documentation for all new Kubernetes objects used in this article is available here:
```bash
docker stack services getstartedlab
ID NAME MODE REPLICAS IMAGE PORTS
bqpve1djnk0x getstartedlab_web replicated 5/5 username/repo:tag *:4000->80/tcp
```
- [Kubernetes Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [Kubernetes Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
- [Kubernetes Services](https://kubernetes.io/docs/concepts/services-networking/service/)
A single container running in a service is called a **task**. Tasks are given unique
IDs that numerically increment, up to the number of `replicas` you defined in
`docker-compose.yml`. List the tasks for your service:
```bash
docker service ps getstartedlab_web
```
Tasks also show up if you just list all the containers on your system, though that
is not filtered by service:
```bash
docker container ls -q
```
You can run `curl -4 http://localhost:4000` several times in a row, or go to that URL in
your browser and hit refresh a few times.
![Hello World in browser](images/app80-in-browser.png)
Either way, the container ID changes, demonstrating the
load-balancing; with each request, one of the 5 tasks is chosen, in a
round-robin fashion, to respond. The container IDs match your output from
the previous command (`docker container ls -q`).
To view all tasks of a stack, you can run `docker stack ps` followed by your app name, as shown in the following example:
```bash
docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uwiaw67sc0eh getstartedlab_web.1 username/repo:tag docker-desktop Running Running 9 minutes ago
sk50xbhmcae7 getstartedlab_web.2 username/repo:tag docker-desktop Running Running 9 minutes ago
c4uuw5i6h02j getstartedlab_web.3 username/repo:tag docker-desktop Running Running 9 minutes ago
0dyb70ixu25s getstartedlab_web.4 username/repo:tag docker-desktop Running Running 9 minutes ago
aocrb88ap8b0 getstartedlab_web.5 username/repo:tag docker-desktop Running Running 9 minutes ago
```
> Running Windows 10?
>
> Windows 10 PowerShell should already have `curl` available, but if not you can
> grab a Linux terminal emulator like
> [Git BASH](https://git-for-windows.github.io/){: target="_blank" class="_"},
> or download
> [wget for Windows](http://gnuwin32.sourceforge.net/packages/wget.htm)
> which is very similar.
> Slow response times?
>
> Depending on your environment's networking configuration, it may take up to 30
> seconds for the containers
> to respond to HTTP requests. This is not indicative of Docker or
> swarm performance, but rather an unmet Redis dependency that we
> address later in the tutorial. For now, the visitor counter isn't working
> for the same reason; we haven't yet added a service to persist data.
## Scale the app
You can scale the app by changing the `replicas` value in `docker-compose.yml`,
saving the change, and re-running the `docker stack deploy` command:
```shell
docker stack deploy -c docker-compose.yml getstartedlab
```
Docker performs an in-place update; there's no need to tear the stack down first or kill
any containers.
Now, re-run `docker container ls -q` to see the deployed instances reconfigured.
If you scaled up the replicas, more tasks, and hence, more containers, are
started.
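For example, a one-line change to the `docker-compose.yml` shown earlier on this page is all a scale-up needs (a sketch; the rest of the file stays as written):

```yaml
    deploy:
      replicas: 10   # was 5; re-run 'docker stack deploy' to apply
```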
### Take down the app and the swarm
* Take the app down with `docker stack rm`:
```shell
docker stack rm getstartedlab
```
* Take down the swarm.
```
docker swarm leave --force
```
It's as easy as that to stand up and scale your app with Docker. You've taken a
huge step towards learning how to run containers in production. Up next, you
learn how to run this app as a bona fide swarm on a cluster of Docker
machines.
> **Note**: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using [Docker
Cloud](/docker-cloud/), or on any hardware or cloud provider you choose with
[Docker Enterprise Edition](https://www.docker.com/enterprise-edition).
[On to "Part 4" >>](part4.md){: class="button outline-btn"}
## Recap and cheat sheet (optional)
Here's [a terminal recording of what was covered on this page](https://asciinema.org/a/b5gai4rnflh7r0kie01fx6lip):
<script type="text/javascript" src="https://asciinema.org/a/b5gai4rnflh7r0kie01fx6lip.js" id="asciicast-b5gai4rnflh7r0kie01fx6lip" speed="2" async></script>
To recap, while typing `docker run` is simple enough, the true implementation
of a container in production is running it as a service. Services codify a
container's behavior in a Compose file, and this file can be used to scale,
limit, and redeploy our app. Changes to the service can be applied in place, as
it runs, using the same command that launched the service:
`docker stack deploy`.
Some commands to explore at this stage:
```shell
docker stack ls # List stacks or apps
docker stack deploy -c <composefile> <appname> # Run the specified Compose file
docker service ls # List running services associated with an app
docker service ps <service> # List tasks associated with an app
docker inspect <task or container> # Inspect task or container
docker container ls -q # List container IDs
docker stack rm <appname> # Tear down an application
docker swarm leave --force # Take down a single node swarm from the manager
```
@@ -1,585 +1,98 @@
---
title: "Get Started, Part 4: Swarms"
keywords: swarm, scale, cluster, machine, vm, manager, worker, deploy, ssh, orchestration
description: Learn how to create clusters of Dockerized machines.
title: "Get Started, Part 4: Deploying to Swarm"
keywords: swarm, swarm services, stacks
description: Learn how to describe and deploy a simple application on Docker Swarm.
---
{% include_relative nav.html selected="4" %}
## Prerequisites
- [Install Docker version 1.13 or higher](/engine/installation/index.md).
- Work through containerizing an application in [Part 2](part2.md).
- Make sure that Swarm is enabled on your Docker Desktop by typing `docker system info`, and looking for a message `Swarm: active` (you might have to scroll up a little).
- Get [Docker Compose](/compose/overview.md) as described in [Part 3 prerequisites](/get-started/part3.md#prerequisites).
- Get [Docker Machine](/machine/overview.md), which is pre-installed with
[Docker Desktop for Mac](/docker-for-mac/index.md) and [Docker Desktop for
Windows](/docker-for-windows/index.md), but on Linux systems you need to
[install it directly](/machine/install-machine/#installing-machine-directly). On pre Windows 10 systems _without Hyper-V_, as well as Windows 10 Home, use
[Docker Toolbox](/toolbox/overview.md).
- Read the orientation in [Part 1](index.md).
- Learn how to create containers in [Part 2](part2.md).
- Make sure you have published the `friendlyhello` image you created by
[pushing it to a registry](/get-started/part2.md#share-your-image). We use that
shared image here.
- Be sure your image works as a deployed container. Run this command,
slotting in your info for `username`, `repo`, and `tag`: `docker run -p 80:80
username/repo:tag`, then visit `http://localhost/`.
- Have a copy of your `docker-compose.yml` from [Part 3](part3.md) handy.
If Swarm isn't running, simply type `docker swarm init` at a shell prompt to set it up.
## Introduction
In [part 3](part3.md), you took an app you wrote in [part 2](part2.md), and
defined how it should run in production by turning it into a service, scaling it
up 5x in the process.
Now that we've demonstrated that the individual components of our application run as stand-alone containers and shown how to deploy it using Kubernetes, let's look at how to arrange for them to be managed by Docker Swarm. Swarm provides many tools for scaling, networking, securing and maintaining your containerized applications, above and beyond the abilities of containers themselves.
Here in part 4, you deploy this application onto a cluster, running it on
multiple machines. Multi-container, multi-machine applications are made possible
by joining multiple machines into a "Dockerized" cluster called a **swarm**.
In order to validate that our containerized application works well on Swarm, we'll use Docker Desktop's built in Swarm environment right on our development machine to deploy our application, before handing it off to run on a full Swarm cluster in production. The Swarm environment created by Docker Desktop is _fully featured_, meaning it has all the Swarm features your app will enjoy on a real cluster, accessible from the convenience of your development machine.
## Understanding Swarm clusters
## Describing Apps Using Stack Files
A swarm is a group of machines that are running Docker and joined into
a cluster. After that has happened, you continue to run the Docker commands
you're used to, but now they are executed on a cluster by a **swarm manager**.
The machines in a swarm can be physical or virtual. After joining a swarm, they
are referred to as **nodes**.
Swarm never creates individual containers like we did in the previous step of this tutorial; instead, all Swarm workloads are scheduled as _services_, which are scalable groups of containers with added networking features maintained automatically by Swarm. Furthermore, all Swarm objects can and should be described in manifests called _stack files_; these YAML files describe all the components and configurations of your Swarm app, and can be used to easily create and destroy your app in any Swarm environment.
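For orientation only (the tutorial uses a stack file instead, and the service name `web` here is hypothetical), the same kind of service can also be created imperatively with the CLI:

```shell
# Create a scalable, named group of containers managed by Swarm,
# forwarding host port 8000 to container port 8080.
docker service create --name web --replicas 1 --publish 8000:8080 bulletinboard:1.0
```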
Swarm managers can use several strategies to run containers, such as "emptiest
node" -- which fills the least utilized machines with containers. Or "global",
which ensures that each machine gets exactly one instance of the specified
container. You instruct the swarm manager to use these strategies in the Compose
file, just like the one you have already been using.
1. Let's write a simple stack file to run and manage our bulletin board. Place the following in a file called `bb-stack.yaml`:
Swarm managers are the only machines in a swarm that can execute your commands,
or authorize other machines to join the swarm as **workers**. Workers are just
there to provide capacity and do not have the authority to tell any other
machine what it can and cannot do.
Up until now, you have been using Docker in a single-host mode on your local
machine. But Docker also can be switched into **swarm mode**, and that's what
enables the use of swarms. Enabling swarm mode instantly makes the current
machine a swarm manager. From then on, Docker runs the commands you execute
on the swarm you're managing, rather than just on the current machine.

```yaml
version: '3.7'

services:
  bb-app:
    image: bulletinboard:1.0
    ports:
      - "8000:8080"
```
## Set up your swarm
A swarm is made up of multiple nodes, which can be either physical or virtual
machines. The basic concept is simple enough: run `docker swarm init` to enable
swarm mode and make your current machine a swarm manager, then run
`docker swarm join` on other machines to have them join the swarm as workers.
Choose a tab below to see how this plays out in various contexts. We use VMs
to quickly create a two-machine cluster and turn it into a swarm.
In this Swarm YAML file, we have just one object: a `service`, describing a scalable group of identical containers. In this case, you'll get just one container (the default), and that container will be based on your `bulletinboard:1.0` image from step 2 of this tutorial. We've furthermore asked Swarm to forward all traffic arriving at port 8000 on our development machine to port 8080 inside our bulletin board container.
### Create a cluster
> **Kubernetes Services and Swarm Services are very different!** Despite the similar name, the two orchestrators mean very different things by the term 'service'. In Swarm, a service provides both scheduling _and_ networking facilities, creating containers and providing tools for routing traffic to them. In Kubernetes, scheduling and networking are handled separately: _deployments_ (or other controllers) handle the scheduling of containers as pods, while _services_ are responsible only for adding networking features to those pods.
## Deploying and Checking Your Application
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" href="#local">Local VMs (Mac, Linux, Windows 7 and 8)</a></li>
<li><a data-toggle="tab" href="#localwin">Local VMs (Windows 10/Hyper-V)</a></li>
</ul>
<div class="tab-content">
<div id="local" class="tab-pane fade in active">
{% capture local-content %}
1. Deploy your application to Swarm:
#### VMs on your local machine (Mac, Linux, Windows 7 and 8)
```shell
docker stack deploy -c bb-stack.yaml demo
```
You need a hypervisor that can create virtual machines (VMs), so
[install Oracle VirtualBox](https://www.virtualbox.org/wiki/Downloads) for your
machine's OS.
If all goes well, Swarm will report creating all your stack objects with no complaints:
> **Note**: If you are on a Windows system that has Hyper-V installed,
such as Windows 10, there is no need to install VirtualBox and you should
use Hyper-V instead. View the instructions for Hyper-V systems by clicking
the Hyper-V tab above. If you are using
[Docker Toolbox](/toolbox/overview.md), you should already have
VirtualBox installed as part of it, so you are good to go.
```shell
Creating network demo_default
Creating service demo_bb-app
```
Now, create a couple of VMs using `docker-machine`, using the VirtualBox driver:
Notice that in addition to your service, Swarm also creates a Docker network by default to isolate the containers deployed as part of your stack.
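If you prefer to declare that network explicitly rather than relying on the default, the stack file could be extended along these lines (a sketch; the network name `bb-net` is an arbitrary choice):

```yaml
version: '3.7'

services:
  bb-app:
    image: bulletinboard:1.0
    ports:
      - "8000:8080"
    networks:
      - bb-net

networks:
  bb-net:
```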
```shell
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2
```
2. Make sure everything worked by listing your service:
{% endcapture %}
{{ local-content | markdownify }}
```shell
docker service ls
```
</div>
<div id="localwin" class="tab-pane fade" markdown="1">
{% capture localwin-content %}
If all has gone well, your service will report with 1/1 of its replicas created:
#### VMs on your local machine (Windows 10)
```shell
ID NAME MODE REPLICAS IMAGE PORTS
il7elwunymbs demo_bb-app replicated 1/1 bulletinboard:1.0 *:8000->8080/tcp
```
First, quickly create a virtual switch for your virtual machines (VMs) to share,
so they can connect to each other.
This indicates that the 1/1 containers you asked for as part of your service are up and running. Also, we can see that port 8000 on your development machine is being forwarded to port 8080 in your bulletin board container.
1. Launch Hyper-V Manager
2. Click **Virtual Switch Manager** in the right-hand menu
3. Click **Create Virtual Switch** of type **External**
4. Give it the name `myswitch`, and check the box to share your host machine's
active network adapter
3. Open a browser and visit your bulletin board at `localhost:8000`; you should see your bulletin board, the same as when we ran it as a stand-alone container in Step 2 of this tutorial.
Now, create a couple of VMs using our node management tool,
`docker-machine`:
4. Once satisfied, tear down your application:
> **Note**: you need to run the following as administrator; otherwise, you don't have permission to create Hyper-V VMs.
```shell
docker stack rm demo
```
```shell
docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1
docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm2
```
## Conclusion
{% endcapture %}
{{ localwin-content | markdownify }}
</div>
<hr>
</div>
At this point, we have successfully used Docker Desktop to deploy our application to a fully-featured Swarm environment on our development machine. We haven't done much with Swarm yet, but the door is now open: you can begin adding other components to your app and taking advantage of all the features and power of Swarm, right on your own machine.
#### List the VMs and get their IP addresses
In addition to deploying to Swarm, we have also described our application as a stack file. This simple text file contains everything we need to create our application in a running state; we can check it into version control and share it with our colleagues, allowing us to distribute our applications to other clusters (like the testing and production clusters that probably come after our development environments) easily.
You now have two VMs created, named `myvm1` and `myvm2`.
[On to Part 5 >>](part5.md){: class="button outline-btn" style="margin-bottom: 30px; margin-right: 100%"}
Use this command to list the machines and get their IP addresses.
## Swarm &amp; CLI References
> **Note**: you need to run the following as administrator; otherwise, you don't get any reasonable output (only "UNKNOWN").
Further documentation for all new Swarm objects and CLI commands used in this article is available here:
```shell
docker-machine ls
```
- [Swarm Services](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/)
- [Swarm Stacks](https://docs.docker.com/engine/swarm/stack-deploy/)
- [`docker stack *`](https://docs.docker.com/engine/reference/commandline/stack/)
- [`docker service *`](https://docs.docker.com/engine/reference/commandline/service/)
Here is example output from this command.
```shell
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.100:2376 v17.06.2-ce
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v17.06.2-ce
```
#### Initialize the swarm and add nodes
The first machine acts as the manager, which executes management commands
and authenticates workers to join the swarm, and the second is a worker.
You can send commands to your VMs using `docker-machine ssh`. Instruct `myvm1`
to become a swarm manager with `docker swarm init` and look for output like
this:
```shell
$ docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
Swarm initialized: current node <node ID> is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token <token> \
<myvm ip>:<port>
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
> Ports 2377 and 2376
>
> Always run `docker swarm init` and `docker swarm join` with port 2377
> (the swarm management port), or no port at all and let it take the default.
>
> The machine IP addresses returned by `docker-machine ls` include port 2376,
> which is the Docker daemon port. Do not use this port or
> [you may experience errors](https://forums.docker.com/t/docker-swarm-join-with-virtualbox-connection-error-13-bad-certificate/31392/2){: target="_blank" class="_"}.
> Having trouble using SSH? Try the `--native-ssh` flag
>
> Docker Machine has [the option to let you use your own system's SSH](/machine/reference/ssh/#different-types-of-ssh), if
> for some reason you're having trouble sending commands to your Swarm manager. Just specify the
> `--native-ssh` flag when invoking the `ssh` command:
>
> ```
> docker-machine --native-ssh ssh myvm1 ...
> ```
As you can see, the response to `docker swarm init` contains a pre-configured
`docker swarm join` command for you to run on any nodes you want to add. Copy
this command, and send it to `myvm2` via `docker-machine ssh` to have `myvm2`
join your new swarm as a worker:
```shell
$ docker-machine ssh myvm2 "docker swarm join \
--token <token> \
<ip>:2377"
This node joined a swarm as a worker.
```
Congratulations, you have created your first swarm!
Run `docker node ls` on the manager to view the nodes in this swarm:
```shell
$ docker-machine ssh myvm1 "docker node ls"
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
brtu9urxwfd5j0zrmkubhpkbd myvm2 Ready Active
rihwohkh3ph38fhillhhb84sk * myvm1 Ready Active Leader
```
> Leaving a swarm
>
> If you want to start over, you can run `docker swarm leave` from each node.
## Deploy your app on the swarm cluster
The hard part is over. Now you just repeat the process you used in [part
3](part3.md) to deploy on your new swarm. Just remember that only swarm managers
like `myvm1` execute Docker commands; workers are just for capacity.
### Configure a `docker-machine` shell to the swarm manager
So far, you've been wrapping Docker commands in `docker-machine ssh` to talk to
the VMs. Another option is to run `docker-machine env <machine>` to get
and run a command that configures your current shell to talk to the Docker
daemon on the VM. This method works better for the next step because it allows
you to use your local `docker-compose.yml` file to deploy the app
"remotely" without having to copy it anywhere.
Type `docker-machine env myvm1`, then copy-paste and run the command provided as
the last line of the output to configure your shell to talk to `myvm1`, the
swarm manager.
The commands to configure your shell differ depending on whether you are on Mac,
Linux, or Windows, so examples of each are shown on the tabs below.
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" href="#mac-linux-machine">Mac, Linux</a></li>
<li><a data-toggle="tab" href="#win-machine">Windows</a></li>
</ul>
<div class="tab-content">
<div id="mac-linux-machine" class="tab-pane fade in active">
{% capture mac-linux-machine-content %}
#### Docker machine shell environment on Mac or Linux
Run `docker-machine env myvm1` to get the command to configure your shell to
talk to `myvm1`.
```shell
$ docker-machine env myvm1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/sam/.docker/machine/machines/myvm1"
export DOCKER_MACHINE_NAME="myvm1"
# Run this command to configure your shell:
# eval $(docker-machine env myvm1)
```
Run the given command to configure your shell to talk to `myvm1`.
```shell
eval $(docker-machine env myvm1)
```
Run `docker-machine ls` to verify that `myvm1` is now the active machine, as
indicated by the asterisk next to it.
```shell
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v17.06.2-ce
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v17.06.2-ce
```
{% endcapture %}
{{ mac-linux-machine-content | markdownify }}
</div>
<div id="win-machine" class="tab-pane fade">
{% capture win-machine-content %}
#### Docker machine shell environment on Windows
Run `docker-machine env myvm1` to get the command to configure your shell to
talk to `myvm1`.
```shell
PS C:\Users\sam\sandbox\get-started> docker-machine env myvm1
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://192.168.203.207:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\sam\.docker\machine\machines\myvm1"
$Env:DOCKER_MACHINE_NAME = "myvm1"
$Env:COMPOSE_CONVERT_WINDOWS_PATHS = "true"
# Run this command to configure your shell:
# & "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
```
Run the given command to configure your shell to talk to `myvm1`.
```shell
& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
```
Run `docker-machine ls` to verify that `myvm1` is the active machine as indicated by the asterisk next to it.
```shell
PS C:PATH> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * hyperv Running tcp://192.168.203.207:2376 v17.06.2-ce
myvm2 - hyperv Running tcp://192.168.200.181:2376 v17.06.2-ce
```
{% endcapture %}
{{ win-machine-content | markdownify }}
</div>
<hr>
</div>
### Deploy the app on the swarm manager
Now that you have `myvm1`, you can use its powers as a swarm manager to
deploy your app: run the same `docker stack deploy` command you used in part 3
on `myvm1`, using your local copy of `docker-compose.yml`. This command may take a few seconds
to complete, and the deployment takes some time to become available. Use the
`docker service ps <service_name>` command on a swarm manager to verify that
all services have been redeployed.
You are connected to `myvm1` by means of the `docker-machine` shell
configuration, and you still have access to the files on your local host. Make
sure you are in the same directory as before, which includes the
[`docker-compose.yml` file you created in part
3](/get-started/part3/#docker-composeyml).
Just like before, run the following command to deploy the app on `myvm1`.
```bash
docker stack deploy -c docker-compose.yml getstartedlab
```
And that's it, the app is deployed on a swarm cluster!
> **Note**: If your image is stored on a private registry instead of Docker Hub,
> you need to be logged in using `docker login <your-registry>` and then you
> need to add the `--with-registry-auth` flag to the above command. For example:
>
> ```bash
> docker login registry.example.com
>
> docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
> ```
>
> This passes the login token from your local client to the swarm nodes where the
> service is deployed, using the encrypted WAL logs. With this information, the
> nodes are able to log into the registry and pull the image.
>
Now you can use the same [docker commands you used in part
3](/get-started/part3.md#run-your-new-load-balanced-app). Only this time notice
that the services (and associated containers) have been distributed between
both `myvm1` and `myvm2`.
```bash
$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE
jq2g3qp8nzwx getstartedlab_web.1 gordon/get-started:part2 myvm1 Running
88wgshobzoxl getstartedlab_web.2 gordon/get-started:part2 myvm2 Running
vbb1qbkb0o2z getstartedlab_web.3 gordon/get-started:part2 myvm2 Running
ghii74p9budx getstartedlab_web.4 gordon/get-started:part2 myvm1 Running
0prmarhavs87 getstartedlab_web.5 gordon/get-started:part2 myvm2 Running
```
> Connecting to VMs with `docker-machine env` and `docker-machine ssh`
>
> * To set your shell to talk to a different machine like `myvm2`, simply re-run
`docker-machine env` in the same or a different shell, then run the given
command to point to `myvm2`. This is always specific to the current shell. If
you change to an unconfigured shell or open a new one, you need to re-run the
commands. Use `docker-machine ls` to list machines, see what state they are in,
get IP addresses, and find out which one, if any, you are connected to. To learn
more, see the [Docker Machine getting started topics](/machine/get-started.md#create-a-machine).
>
> * Alternatively, you can wrap Docker commands in the form of
`docker-machine ssh <machine> "<command>"`, which logs directly into
the VM but doesn't give you immediate access to files on your local host.
>
> * On Mac and Linux, you can use `docker-machine scp <file> <machine>:~`
to copy files across machines, but Windows users need a Linux terminal emulator
like [Git Bash](https://git-for-windows.github.io/){: target="_blank" class="_"} for this to work.
>
> This tutorial demos both `docker-machine ssh` and
`docker-machine env`, since these are available on all platforms via the `docker-machine` CLI.
### Accessing your cluster
You can access your app from the IP address of **either** `myvm1` or `myvm2`.
The network you created is shared between them and provides load balancing. Run
`docker-machine ls` to get your VMs' IP addresses and visit either of them in a
browser on port 4000, hitting refresh (or just `curl` them).
![Hello World in browser](images/app-in-browser-swarm.png)
There are five possible container IDs all cycling by randomly, demonstrating
the load-balancing.
The reason both IP addresses work is that nodes in a swarm participate in an
ingress **routing mesh**. This ensures that a service deployed at a certain port
within your swarm always has that port reserved to itself, no matter what node
is actually running the container. Here's a diagram of how a routing mesh for a
service called `my-web` published at port `8080` on a three-node swarm would
look:
![routing mesh diagram](/engine/swarm/images/ingress-routing-mesh.png)
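In a version 3.2+ Compose file, the publishing mode can be stated explicitly using the long port syntax; `mode: ingress` (the default) uses the routing mesh, while `mode: host` binds the port only on the node actually running the task. A sketch of the `my-web` example above:

```yaml
services:
  my-web:
    ports:
      - target: 80        # container port
        published: 8080   # port reserved across the swarm
        protocol: tcp
        mode: ingress     # routed through the mesh from any node
```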
> Having connectivity trouble?
>
> Keep in mind that to use the ingress network in the swarm,
> you need to have the following ports open between the swarm nodes
> before you enable swarm mode:
>
> - Port 7946 TCP/UDP for container network discovery.
> - Port 4789 UDP for the container ingress network.
>
> Double-check what you have in the ports section of your `web`
> service, and make sure the IP addresses you enter in your browser
> or `curl` command reflect that.
## Iterating and scaling your app
From here you can do everything you learned about in parts 2 and 3.
Scale the app by changing the `docker-compose.yml` file.
Change the app behavior by editing code, then rebuild, and push the new image.
(To do this, follow the same steps you took earlier to [build the
app](part2.md#build-the-app) and [publish the
image](part2.md#publish-the-image)).
In either case, simply run `docker stack deploy` again to deploy these changes.
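For example, scaling up could be as small a change as editing the `replicas` value for the `web` service in your `docker-compose.yml` (a sketch; the value 10 is an arbitrary choice):

```yaml
services:
  web:
    # replace username/repo:tag with your image details, as before
    image: username/repo:tag
    deploy:
      replicas: 10   # was 5; run `docker stack deploy` again to apply
```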
You can join any machine, physical or virtual, to this swarm, using the
same `docker swarm join` command you used on `myvm2`, and capacity is added
to your cluster. Just run `docker stack deploy` afterwards, and your app can
take advantage of the new resources.
## Cleanup and reboot
### Stacks and swarms
You can tear down the stack with `docker stack rm`. For example:
```
docker stack rm getstartedlab
```
> Keep the swarm or remove it?
>
> At some point later, you can remove this swarm if you want to with
> `docker-machine ssh myvm2 "docker swarm leave"` on the worker
> and `docker-machine ssh myvm1 "docker swarm leave --force"` on the
> manager, but _you need this swarm for part 5, so keep it
> around for now_.
### Unsetting docker-machine shell variable settings
You can unset the `docker-machine` environment variables in your current shell
with the given command.
On **Mac or Linux** the command is:
```shell
eval $(docker-machine env -u)
```
On **Windows** the command is:
```shell
& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env -u | Invoke-Expression
```
This disconnects the shell from `docker-machine` created virtual machines,
and allows you to continue working in the same shell, now using native `docker`
commands (for example, on Docker Desktop for Mac or Docker Desktop for Windows). To learn more,
see the [Machine topic on unsetting environment variables](/machine/get-started/#unset-environment-variables-in-the-current-shell).
### Restarting Docker machines
If you shut down your local host, Docker machines stop running. You can check the status of machines by running `docker-machine ls`.
```
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Stopped Unknown
myvm2 - virtualbox Stopped Unknown
```
To restart a machine that's stopped, run:
```
docker-machine start <machine-name>
```
For example:
```
$ docker-machine start myvm1
Starting "myvm1"...
(myvm1) Check network to re-create if needed...
(myvm1) Waiting for an IP...
Machine "myvm1" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
$ docker-machine start myvm2
Starting "myvm2"...
(myvm2) Check network to re-create if needed...
(myvm2) Waiting for an IP...
Machine "myvm2" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
```
[On to Part 5 >>](part5.md){: class="button outline-btn"}
## Recap and cheat sheet (optional)
Here's [a terminal recording of what was covered on this
page](https://asciinema.org/a/113837):
<script type="text/javascript" src="https://asciinema.org/a/113837.js" id="asciicast-113837" speed="2" async></script>
In part 4 you learned what a swarm is, how nodes in swarms can be managers or
workers, created a swarm, and deployed an application on it. You saw that the
core Docker commands didn't change from part 3; they just had to be targeted to
run on a swarm manager. You also saw the power of Docker's networking in action,
which kept load-balancing requests across containers, even though they were
running on different machines. Finally, you learned how to iterate and scale
your app on a cluster.
Here are some commands you might like to run to interact with your swarm and your VMs a bit:
```shell
docker-machine create --driver virtualbox myvm1 # Create a VM (Mac, Win7, Linux)
docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1 # Win10
docker-machine env myvm1 # View basic information about your node
docker-machine ssh myvm1 "docker node ls" # List the nodes in your swarm
docker-machine ssh myvm1 "docker node inspect <node ID>" # Inspect a node
docker-machine ssh myvm1 "docker swarm join-token -q worker" # View join token
docker-machine ssh myvm1 # Open an SSH session with the VM; type "exit" to end
docker node ls # View nodes in swarm (while logged on to manager)
docker-machine ssh myvm2 "docker swarm leave" # Make the worker leave the swarm
docker-machine ssh myvm1 "docker swarm leave -f" # Make master leave, kill swarm
docker-machine ls # list VMs, asterisk shows which VM this shell is talking to
docker-machine start myvm1 # Start a VM that is currently not running
docker-machine env myvm1 # show environment variables and command for myvm1
eval $(docker-machine env myvm1) # Mac command to connect shell to myvm1
& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression # Windows command to connect shell to myvm1
docker stack deploy -c <file> <app> # Deploy an app; command shell must be set to talk to manager (myvm1), uses local Compose file
docker-machine scp docker-compose.yml myvm1:~ # Copy file to node's home dir (only required if you use ssh to connect to manager and deploy the app)
docker-machine ssh myvm1 "docker stack deploy -c <file> <app>" # Deploy an app using ssh (you must have first copied the Compose file to myvm1)
eval $(docker-machine env -u) # Disconnect shell from VMs, use native docker
docker-machine stop $(docker-machine ls -q) # Stop all running VMs
docker-machine rm $(docker-machine ls -q) # Delete all VMs and their disk images
```
---
title: "Get Started, Part 5: Stacks"
keywords: stack, data, persist, dependencies, redis, storage, volume, port
description: Learn how to create a multi-container application that uses all the machines in a cluster.
title: "Get Started, Part 5: Sharing Images on Docker Hub"
keywords: docker hub, push, images
description: Learn how to share images on Docker Hub.
---
{% include_relative nav.html selected="5" %}
## Prerequisites
- [Install Docker version 1.13 or higher](/engine/installation/).
- Get [Docker Compose](/compose/overview.md) as described in [Part 3 prerequisites](/get-started/part3.md#prerequisites).
- Get [Docker Machine](/machine/overview.md) as described in [Part 4 prerequisites](/get-started/part4.md#prerequisites).
- Read the orientation in [Part 1](index.md).
- Learn how to create containers in [Part 2](part2.md).
- Make sure you have published the `friendlyhello` image you created by
[pushing it to a registry](/get-started/part2.md#share-your-image). We
use that shared image here.
- Be sure your image works as a deployed container. Run this command,
slotting in your info for `username`, `repo`, and `tag`: `docker run -p 80:80
username/repo:tag`, then visit `http://localhost/`.
- Have a copy of your `docker-compose.yml` from [Part 3](part3.md) handy.
- Make sure that the machines you set up in [part 4](part4.md) are running
and ready. Run `docker-machine ls` to verify this. If the machines are
stopped, run `docker-machine start myvm1` to boot the manager, followed
by `docker-machine start myvm2` to boot the worker.
- Have the swarm you created in [part 4](part4.md) running and ready. Run
`docker-machine ssh myvm1 "docker node ls"` to verify this. If the swarm is up,
both nodes report a `ready` status. If not, reinitialize the swarm and join
the worker as described in [Set up your
swarm](/get-started/part4.md#set-up-your-swarm).
- Work through containerizing an application in [Part 2](part2.md).
## Introduction
In [part 4](part4.md), you learned how to set up a swarm, which is a cluster of
machines running Docker, and deployed an application to it, with containers
running in concert on multiple machines.
At this point, you've built a containerized application in [Part 2](part2.md), and potentially run it on Kubernetes in [Part 3](part3.md) or Swarm in [Part 4](part4.md), all on your local development machine thanks to Docker Desktop. The final step in developing a containerized application is to share your images on a registry like [Docker Hub](https://hub.docker.com/), so they can be easily downloaded and run on any destination cluster.
Here in part 5, you reach the top of the hierarchy of distributed
applications: the **stack**. A stack is a group of interrelated services that
share dependencies, and can be orchestrated and scaled together. A single stack
is capable of defining and coordinating the functionality of an entire
application (though very complex applications may want to use multiple stacks).
## Setting Up Your Docker Hub Account
Some good news is, you have technically been working with stacks since part 3,
when you created a Compose file and used `docker stack deploy`. But that was a
single service stack running on a single host, which is not usually what takes
place in production. Here, you can take what you've learned, make
multiple services relate to each other, and run them on multiple machines.
If you don't yet have a Docker ID, follow these steps to set one up; this will allow you to share images on Docker Hub.
You're doing great; this is the home stretch!
1. Visit the Docker Hub sign up page, [https://hub.docker.com/signup](https://hub.docker.com/signup).
## Add a new service and redeploy
2. Fill out the form and submit to create your Docker ID.
It's easy to add services to our `docker-compose.yml` file. First, let's add
a free visualizer service that lets us look at how our swarm is scheduling
containers.
3. Click on the Docker icon in your toolbar or system tray, and click **Sign In / Create Docker ID**. Fill in your new Docker ID and password. If everything worked, your Docker ID will appear in the Docker Desktop dropdown in place of the 'Sign In' option you just used.
1. Open up `docker-compose.yml` in an editor and replace its contents
with the following. Be sure to replace `username/repo:tag` with your image details.
> You can do the same thing from the command line by typing `docker login`.
```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
```
## Creating and Pushing to a Docker Hub Repository
The only thing new here is the peer service to `web`, named `visualizer`.
Notice two new things here: a `volumes` key, giving the visualizer
access to the host's socket file for Docker, and a `placement` key, ensuring
that this service only ever runs on a swarm manager -- never a worker.
That's because this container, built from [an open source project created by
Docker](https://github.com/ManoMarks/docker-swarm-visualizer), displays
Docker services running on a swarm in a diagram.
At this point, you've set up your Docker Hub account and have connected it to your Docker Desktop. Now let's make our first repo and share our bulletin board app there.
We talk more about placement constraints and volumes in a moment.
1. Click on the Docker icon in your menu bar, and navigate to **Repositories -> Create...**. You'll be taken to a Docker Hub page to create a new repository.
2. Make sure your shell is configured to talk to `myvm1` (full examples are [here](part4.md#configure-a-docker-machine-shell-to-the-swarm-manager)).
2. Fill out the Repository Name as `bulletin`. Leave all the other options alone for now, and click **Create** at the bottom.
* Run `docker-machine ls` to list machines and make sure you are connected to `myvm1`, as indicated by an asterisk next to it.
![make a repo](images/newrepo.png){:width="100%"}
* If needed, re-run `docker-machine env myvm1`, then run the given command to configure the shell.
On **Mac or Linux** the command is:
```shell
eval $(docker-machine env myvm1)
```
On **Windows** the command is:
```shell
& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
```
3. Re-run the `docker stack deploy` command on the manager, and
whatever services need updating are updated:

```shell
$ docker stack deploy -c docker-compose.yml getstartedlab
Updating service getstartedlab_web (id: angi1bf5e4to03qu9f93trnxm)
Creating service getstartedlab_visualizer (id: l9mnwkeq2jiononb5ihz9u7a4)
```

3. Now we're ready to share our image on Docker Hub, but there's one thing we must do first: images must be *namespaced correctly* to share on Docker Hub. Specifically, images must be named like `<Docker Hub ID>/<Repository Name>:<tag>`. We can relabel our `bulletinboard:1.0` image like this (of course, please replace `gordon` with your Docker ID):

```shell
docker image tag bulletinboard:1.0 gordon/bulletinboard:1.0
```
4. Take a look at the visualizer.
You saw in the Compose file that `visualizer` runs on port 8080. Get the
IP address of one of your nodes by running `docker-machine ls`. Go
to either IP address at port 8080 and you can see the visualizer running:
![Visualizer screenshot](images/get-started-visualizer1.png)
The single copy of `visualizer` is running on the manager as you expect, and
the 5 instances of `web` are spread out across the swarm. You can
corroborate this visualization by running `docker stack ps <stack>`:

```shell
docker stack ps getstartedlab
```

4. Finally, push your image to Docker Hub:

```shell
docker image push gordon/bulletinboard:1.0
```
The visualizer is a standalone service that can run in any app
that includes it in the stack. It doesn't depend on anything else.
Now let's create a service that *does* have a dependency: the Redis
service that provides a visitor counter.
Visit your repository in Docker Hub, and you'll see your new image there. Remember, Docker Hub repositories are public by default.
## Persist the data
> **Having trouble pushing?** Remember, you must be signed in to Docker Hub through Docker Desktop or the command line, and you must also name your images correctly, per the above steps. If the push seemed to work but you don't see it in Docker Hub, refresh your browser after a couple of minutes and check again.
Let's go through the same workflow once more to add a Redis database for storing
app data.
## Conclusion
1. Save this new `docker-compose.yml` file, which finally adds a
Redis service. Be sure to replace `username/repo:tag` with your image details.
Now that your image is available on Docker Hub, you can run it anywhere: if you try to use it on a new cluster that doesn't have it yet, Docker automatically downloads it from Docker Hub. By moving images around in this way, we no longer need to install any dependencies other than Docker and our orchestrator on the machines that run our software. The dependencies of our containerized applications are completely encapsulated and isolated within our images, which we can share via Docker Hub as described above.
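As a sketch of what this portability means in practice, a Kubernetes cluster that has never seen the image can reference it by its Docker Hub name, and the node pulls it on first use. (The `gordon` namespace and the container port below are assumptions carried over from the steps above; adjust them to match your own image.)

```yaml
# Sketch: a Deployment on a fresh cluster. Kubernetes pulls
# gordon/bulletinboard:1.0 from Docker Hub automatically on first use.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bulletinboard-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bulletinboard
  template:
    metadata:
      labels:
        app: bulletinboard
    spec:
      containers:
      - name: bulletinboard
        image: gordon/bulletinboard:1.0   # fetched from Docker Hub, no local build needed
        ports:
        - containerPort: 8080             # assumed app port; adjust for your image
```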
```yaml
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- "/home/docker/data:/data"
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:
```
Redis has an official image in the Docker library and has been granted the
short `image` name of just `redis`, so no `username/repo` notation is needed here. The
Redis image exposes port 6379 from the container, and here in our Compose file we
publish it from the host to the world, so you can enter the IP address of any of your nodes
into Redis Desktop Manager and manage this Redis instance, if you so choose.
Most importantly, there are a couple of things in the `redis` specification
that make data persist between deployments of this stack:
- `redis` always runs on the manager, so it's always using the
same filesystem.
- `redis` accesses an arbitrary directory in the host's file system
as `/data` inside the container, which is where Redis stores data.
Together, these create a "source of truth" in your host's physical
filesystem for the Redis data. Without this, Redis would store its data in
`/data` inside the container's filesystem, which would get wiped out if that
container were ever redeployed.
This source of truth has two components:
- The placement constraint you put on the Redis service, ensuring that it
always uses the same host.
- The volume you created that lets the container access `./data` (on the host) as `/data` (inside the Redis container). While containers come and go, the files stored in `./data` on the specified host persist, enabling continuity.
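The reasoning behind that second component can be sketched without Docker at all. In the hypothetical sketch below, one temporary directory stands in for `./data` on the host and another stands in for the container's writable layer; deleting the "container" leaves the host copy untouched, which is exactly the property the bind mount gives Redis:

```shell
# Simulate bind-mount persistence (no Docker involved).
host_data=$(mktemp -d)         # stands in for ./data on the host
container_layer=$(mktemp -d)   # stands in for the container's writable filesystem

# Redis, started with --appendonly yes, journals its writes under /data,
# which the bind mount maps onto the host directory.
echo "SET visits 42" > "$host_data/appendonly.aof"

# Tearing down and redeploying the container wipes its writable layer...
rm -rf "$container_layer"

# ...but the host directory, and therefore the Redis data, survives.
cat "$host_data/appendonly.aof"   # -> SET visits 42
```

The placement constraint matters for the same reason: if the service could land on a different node, it would find a different, empty `./data` directory there.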
You are ready to deploy your new Redis-using stack.
2. Create a `./data` directory on the manager:
```shell
docker-machine ssh myvm1 "mkdir ./data"
```
3. Make sure your shell is configured to talk to `myvm1` (full examples are [here](part4.md#configure-a-docker-machine-shell-to-the-swarm-manager)).
* Run `docker-machine ls` to list machines and make sure you are connected to `myvm1`, as indicated by an asterisk next to it.
* If needed, re-run `docker-machine env myvm1`, then run the given command to configure the shell.
On **Mac or Linux** the command is:
```shell
eval $(docker-machine env myvm1)
```
On **Windows** the command is:
```shell
& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
```
4. Run `docker stack deploy` one more time.
```shell
$ docker stack deploy -c docker-compose.yml getstartedlab
```
5. Run `docker service ls` to verify that the three services are running as expected.
```shell
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
x7uij6xb4foj getstartedlab_redis replicated 1/1 redis:latest *:6379->6379/tcp
n5rvhm52ykq7 getstartedlab_visualizer replicated 1/1 dockersamples/visualizer:stable *:8080->8080/tcp
mifd433bti1d getstartedlab_web replicated 5/5 gordon/getstarted:latest *:80->80/tcp
```
6. Check the web page at one of your nodes, such as `http://192.168.99.101`, and take a look at the results of the visitor counter, which is now live and storing information in Redis.
![Hello World in browser with Redis](images/app-in-browser-redis.png)
Also, check the visualizer at port 8080 on either node's IP address, and notice the `redis` service running along with the `web` and `visualizer` services.
![Visualizer with redis screenshot](images/visualizer-with-redis.png)
Another thing to keep in mind: so far, we've only pushed your image to Docker Hub; what about your Dockerfiles, Kubernetes YAML, and stack files? A crucial best practice is to keep these in version control, perhaps alongside the source code for your application, and to add a link or note in your Docker Hub repository description indicating where they can be found. This preserves a record not only of how your image was built, but also of how it's meant to be run as a full application.
[On to Part 6 >>](part6.md){: class="button outline-btn"}
## Recap (optional)
Here's [a terminal recording of what was covered on this page](https://asciinema.org/a/113840):
<script type="text/javascript" src="https://asciinema.org/a/113840.js" speed="2" id="asciicast-113840" async></script>
You learned that stacks are inter-related services all running in concert, and
that -- surprise! -- you've been using stacks since part three of this tutorial.
You learned that to add more services to your stack, you insert them in your
Compose file. Finally, you learned that by using a combination of placement
constraints and volumes you can create a permanent home for persisting data, so
that your app's data survives when the container is torn down and redeployed.