get-started: refresh node language guide (#18033)

* refresh node language guide

Signed-off-by: Craig Osterhout <craig.osterhout@docker.com>
This commit is contained in:
Craig Osterhout 2023-09-11 13:46:43 -07:00 committed by GitHub
parent 233f98c2f7
commit 2055f855da
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
12 changed files with 783 additions and 1009 deletions


@@ -1,22 +1,19 @@
---
description: Containerize Node.js apps using Docker
keywords: Docker, getting started, node, node.js, language
title: What will you learn in this module?
description: Containerize and develop Node.js apps using Docker
keywords: getting started, node, node.js
title: Node.js language-specific guide
toc_min: 1
toc_max: 2
---
The Node.js getting started guide teaches you how to create a containerized Node.js application using Docker. In this guide, you'll learn how to:
The Node.js language-specific guide teaches you how to containerize a Node.js application using Docker. In this guide, you'll learn how to:
* Create a simple Node.js application
* Create a new Dockerfile which contains instructions required to build a Node.js image
* Run the newly built image as a container
* Set up a local development environment to connect a database to the container
* Use Docker Compose to run the Node.js application
* Configure a CI/CD pipeline for your application using GitHub Actions.
* Containerize and run a Node.js application
* Set up a local environment to develop a Node.js application using containers
* Run tests for a Node.js application using containers
* Configure a CI/CD pipeline for a containerized Node.js application using GitHub Actions
* Deploy your containerized Node.js application locally to Kubernetes to test and debug your deployment
After completing the Node.js getting started modules, you should be able to containerize your own Node.js application based on the examples and instructions provided in this guide.
Start by containerizing an existing Node.js application.
Let's get started!
{{< button text="Build your Node.js image" url="build-images.md" >}}
{{< button text="Containerize a Node.js app" url="containerize.md" >}}


@@ -1,307 +0,0 @@
---
title: Build your Node image
keywords: containers, images, node.js, node, dockerfiles, node, coding, build, push,
run
description: Learn how to build your first Docker image by writing a Dockerfile
aliases:
- /get-started/nodejs/build-images/
---
## Prerequisites
* You understand basic [Docker concepts](../../get-started/overview.md).
* You're familiar with the [Dockerfile format](../../build/building/packaging.md#dockerfile).
* You have [enabled BuildKit](../../build/buildkit/index.md#getting-started)
on your machine.
## Overview
Now that we have a good overview of containers and the Docker platform, let's take a look at building our first image. An image includes everything you need to run an application - the code or binary, runtime, dependencies, and any other file system objects required.
To complete this tutorial, you need the following:
- Node.js version 18 or later. [Download Node.js](https://nodejs.org/en/)
- Docker running locally: Follow the instructions to [download and install Docker](../../desktop/index.md).
- An IDE or a text editor to edit files. We recommend using Visual Studio Code.
## Sample application
Let's create a simple Node.js application that we can use as our example. Create a directory on your local machine named `node-docker` and follow the steps below to create a simple REST API.
```console
$ cd [path to your node-docker directory]
$ npm init -y
$ npm install ronin-server ronin-mocks
$ touch server.js
```
Now, let's add some code to handle our REST requests. We'll use a mock server so we can focus on Dockerizing the application.
Open this working directory in your IDE and add the following code into the `server.js` file.
```js
const ronin = require('ronin-server')
const mocks = require('ronin-mocks')
const server = ronin.server()
server.use('/', mocks.server(server.Router(), false, true))
server.start()
```
The mocking server is called `Ronin.js` and will listen on port 8000 by default. You can make POST requests to the root (/) endpoint and any JSON structure you send to the server will be saved in memory. You can also send GET requests to the same endpoint and receive an array of JSON objects that you have previously POSTed.
## Test the application
Let's start our application and make sure it's running properly. Open your terminal and navigate to the working directory you created.
```console
$ node server.js
```
To test that the application is working properly, make a POST request with some JSON data to the API and then make a GET request to see that the data has been saved.
Open a new terminal and run the following curl command:
```console
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{"msg": "testing" }'
```
If the POST request is successful, then the output should look similar to:
```console
{"code":"success","payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1",
"createDate":"2020-08-28T21:53:07.157Z"}]}
```
Now make a GET request:
```console
$ curl http://localhost:8000/test
```
And the output should look similar to:
```console
{"code":"success","meta":{"total":1,"count":1},"payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1",
"createDate":"2020-08-28T21:53:07.157Z"}]}
```
Switch back to the terminal where our server is running. You should now see the following requests in the server logs.
```console
2020-XX-31T16:35:08:4260 INFO: POST /test
2020-XX-31T16:35:21:3560 INFO: GET /test
```
Great! We verified that the application works. At this stage, you've completed testing the server script locally.
Press `CTRL-c` from within the terminal session where the server is running to stop it.
```console
2021-08-06T12:11:33:8930 INFO: POST /test
2021-08-06T12:11:41:5860 INFO: GET /test
^Cshutting down...
```
We will now continue to build and run the application in Docker.
## Create a Dockerfile for Node.js
Next, we need to create a Dockerfile and add a line that tells Docker what base image
we would like to use for our application.
```dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine
```
Docker images can be inherited from other images. Therefore, instead of creating our own base image, we'll use the official Node.js image that already has all the tools and packages that we need to run a Node.js application. You can think of this in the same way you would think about class inheritance in object-oriented programming. For example, if we were able to create Docker images in JavaScript, we might write something like the following.
`class MyImage extends NodeBaseImage {}`
This would create a class called `MyImage` that inherited functionality from the base class `NodeBaseImage`.
In the same way, when we use the `FROM` command, we tell Docker to include in our image all the functionality from the `node:18-alpine` image.
> **Note**
>
> If you want to learn more about creating your own base images, see [Creating base images](../../build/building/base-images.md).
The `NODE_ENV` environment variable specifies the environment in which an application is running (usually, development or production). One of the simplest things you can do to improve performance is to set `NODE_ENV` to `production`.
```dockerfile
ENV NODE_ENV=production
```
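As an aside, here's a minimal sketch of why this matters. This example is purely illustrative and not part of the guide's sample app: the function and option names below are assumptions, but the pattern of branching on `NODE_ENV` is common in Node.js applications and frameworks.

```javascript
// Illustrative only: apps and frameworks often branch on NODE_ENV to
// disable verbose logging and enable caching when running in production.
function configFor(env) {
  const isProduction = env === 'production';
  return {
    logLevel: isProduction ? 'warn' : 'debug', // quieter logs in production
    cacheViews: isProduction,                  // e.g. template caching
  };
}

console.log(configFor(process.env.NODE_ENV || 'development'));
```

With `ENV NODE_ENV=production` baked into the image, code like this takes its production path without any extra flags at `docker run` time.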
To make things easier when running the rest of our commands, let's create a working directory. This instructs Docker to use this path as the default location for all subsequent commands. This way we do not have to type out full file paths but can use relative paths based on the working directory.
```dockerfile
WORKDIR /app
```
Usually the very first thing you do once you've downloaded a project written in Node.js is to install npm packages. This ensures that your application has all its dependencies installed into the `node_modules` directory where the Node runtime will be able to find them.
Before we can run `npm install`, we need to get our `package.json` and `package-lock.json` files into our images. We use the `COPY` command to do this. The `COPY` command takes two parameters: `src` and `dest`. The first parameter `src` tells Docker what file(s) you would like to copy into the image. The second parameter `dest` tells Docker where you want that file(s) to be copied to. For example:
```dockerfile
COPY ["<src>", "<dest>"]
```
You can specify multiple `src` resources separated by a comma. For example, `COPY ["<src1>", "<src2>",..., "<dest>"]`.
We'll copy the `package.json` and the `package-lock.json` file into our working directory `/app`.
```dockerfile
COPY ["package.json", "package-lock.json*", "./"]
```
Note that, rather than copying the entire working directory, we are only copying the `package.json` and `package-lock.json` files. This allows us to take advantage of cached Docker layers.
Once we have our files inside the image, we can use the `RUN` command to execute `npm install`. This works exactly the same as if we were running `npm install` locally on our machine, but this time these Node modules will be installed into the `node_modules` directory inside our image.
```dockerfile
RUN npm install --production
```
At this point, we have an image that is based on Node.js version 18 and we have installed our dependencies. The next thing we need to do is to add our source code into the image. We'll use the `COPY` command just like we did with our `package.json` files above.
```dockerfile
COPY . .
```
The COPY command takes all the files located in the current directory and copies them into the image. Now, all we have to do is to tell Docker what command we want to run when our image is run inside of a container. We do this with the `CMD` command.
```dockerfile
CMD ["node", "server.js"]
```
Here's the complete Dockerfile.
```dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install --production
COPY . .
CMD ["node", "server.js"]
```
## Create a .dockerignore file
To use a file in the [build context](../../build/building/context.md), the
Dockerfile refers to the file specified in an instruction, for example, a
COPY instruction. A `.dockerignore` file lets you specify files and directories
to be excluded from the build context. To improve the build's performance,
create a `.dockerignore` file and add the `node_modules` directory in it:
```.dockerignore
node_modules
```
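The guide only requires `node_modules` here, but a `.dockerignore` commonly excludes other files that have no business in the image or the build context. The extra entries below are optional suggestions, not part of the original tutorial:

```.dockerignore
node_modules
npm-debug.log
.git
.gitignore
```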
## Build image
Now that we've created our Dockerfile, let's build our image. To do this, we use the `docker build` command. The `docker build` command builds Docker images from a Dockerfile and a “context”. A build's context is the set of files located in the specified PATH or URL. The Docker build process can access any of the files located in the context.
The build command optionally takes a `--tag` flag. The tag sets the name of the image and an optional tag in the format `name:tag`. We'll leave off the optional “tag” for now to help simplify things. If you do not pass a tag, Docker uses “latest” as its default tag. You'll see this in the last line of the build output.
Let's build our first Docker image.
```console
$ docker build --tag node-docker .
```
```console
[+] Building 93.8s (11/11) FINISHED
=> [internal] load build definition from dockerfile 0.1s
=> => transferring dockerfile: 617B 0.0s
=> [internal] load .dockerignore 0.0s
...
=> [2/5] WORKDIR /app 0.4s
=> [3/5] COPY [package.json, package-lock.json*, ./] 0.2s
=> [4/5] RUN npm install --production 9.8s
=> [5/5] COPY . .
```
## View local images
To see a list of images we have on our local machine, we have two options: the CLI and Docker Desktop. Since we are currently working in the terminal, let's take a look at listing images with the CLI.
To list images, run the `docker images` command.
```console
$ docker images
```
```console
REPOSITORY    TAG       IMAGE ID       CREATED              SIZE
node-docker   latest    3809733582bc   About a minute ago   945MB
```
Your exact output may vary, but you should see the image we just built `node-docker:latest` with the `latest` tag.
## Tag images
An image name is made up of slash-separated name components. Name components may contain lowercase letters, digits and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.
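The naming rule above can be captured as a regular expression. The sketch below is an illustration only, written for this guide and checking just the component grammar described here, not the full image reference format with registries, tags, and digests:

```javascript
// A name component: lowercase letters and digits, joined by separators.
// A separator is a period, one or two underscores, or one or more dashes,
// and a component may not start or end with a separator.
const component = /^[a-z0-9]+((\.|__?|-+)[a-z0-9]+)*$/;

const isValidName = (name) =>
  name.split('/').every((part) => component.test(part));

console.log(isValidName('node-docker'));   // true
console.log(isValidName('Node-Docker'));   // false: uppercase letters
console.log(isValidName('-node-docker')); // false: starts with a separator
```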
An image is made up of a manifest and a list of layers. In simple terms, a “tag” points to a combination of these artifacts. You can have multiple tags for an image. Let's create a second tag for the image we built and take a look at its layers.
To create a new tag for the image we built above, run the following command.
```console
$ docker tag node-docker:latest node-docker:v1.0.0
```
The Docker tag command creates a new tag for an image. It does not create a new image. The tag points to the same image and is just another way to reference the image.
Now run the `docker images` command to see a list of our local images.
```console
$ docker images
```
```console
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
node-docker   latest    3809733582bc   24 minutes ago   945MB
node-docker   v1.0.0    3809733582bc   24 minutes ago   945MB
```
You can see that we have two images that start with `node-docker`. We know they are the same image because if you look at the IMAGE ID column, you can see that the values are the same for the two images.
Let's remove the tag that we just created. To do this, we'll use the `rmi` command, which stands for “remove image”.
```console
$ docker rmi node-docker:v1.0.0
```
```console
Untagged: node-docker:v1.0.0
```
Notice that the response from Docker tells us that the image has not been removed but only “untagged”. Verify this by running the `docker images` command.
```console
$ docker images
```
```console
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
node-docker   latest    3809733582bc   32 minutes ago   945MB
```
Our image that was tagged with `:v1.0.0` has been removed but we still have the `node-docker:latest` tag available on our machine.
## Next steps
In this module, we took a look at setting up our example Node.js application that we will use for the rest of the tutorial. We also created a Dockerfile that we used to build our Docker image. Then, we took a look at tagging our images and removing images. In the next module, we'll take a look at how to run your image as a container.
{{< button text="Run your image as a container" url="run-containers.md" >}}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs]({{% param "repo" %}}/issues/new?title=[Node.js%20docs%20feedback]) GitHub repository. Alternatively, [create a PR]({{% param "repo" %}}/pulls) to suggest updates.


@@ -1,21 +1,139 @@
---
title: Configure CI/CD for your application
keywords: CI/CD, GitHub Actions, NodeJS, local, development
description: Learn how to develop your application locally.
title: Configure CI/CD for your Node.js application
keywords: ci/cd, github actions, node.js, node
description: Learn how to configure CI/CD using GitHub Actions for your Node.js application.
---
## Get started with GitHub Actions
## Prerequisites
{{< include "gha-tutorial.md" >}}
Complete all the previous sections of this guide, starting with [Containerize a Node.js application](containerize.md). You must have a [GitHub](https://github.com/signup) account and a [Docker](https://hub.docker.com/signup) account to complete this section.
## Overview
In this section, you'll learn how to set up and use GitHub Actions to build and test your Docker image as well as push it to Docker Hub. You will complete the following steps:
1. Create a new repository on GitHub.
2. Define the GitHub Actions workflow.
3. Run the workflow.
## Step one: Create the repository
Create a GitHub repository, configure the Docker Hub secrets, and push your source code.
1. [Create a new repository](https://github.com/new) on GitHub.
2. Open the repository **Settings**, and go to **Secrets and variables** >
**Actions**.
3. Create a new secret named `DOCKER_USERNAME`, with your Docker ID as the value.
4. Create a new [Personal Access Token
(PAT)](/docker-hub/access-tokens/#create-an-access-token) for Docker Hub. You
can name this token `node-docker`.
5. Add the PAT as a second secret in your GitHub repository, with the name
`DOCKERHUB_TOKEN`.
6. In your local repository on your machine, run the following command to change
the origin to the repository you just created. Make sure you change
`your-username` to your GitHub username and `your-repository` to the name of
the repository you created.
```console
$ git remote set-url origin https://github.com/your-username/your-repository.git
```
7. Run the following command to push your local repository to GitHub.
```console
$ git push -u origin main
```
## Step two: Set up the workflow
Set up your GitHub Actions workflow for building, testing, and pushing the image
to Docker Hub.
1. Go to your repository on GitHub and then select the **Actions** tab.
2. Select **set up a workflow yourself**.
This takes you to a page for creating a new GitHub Actions workflow file in
your repository, under `.github/workflows/main.yml` by default.
3. In the editor window, copy and paste the following YAML configuration.
```yaml
name: ci

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and test
        uses: docker/build-push-action@v4
        with:
          context: .
          target: test
          load: true
      -
        name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          target: prod
          tags: ${{ secrets.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
```
For more information about the YAML syntax used here, see [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions).
## Step three: Run the workflow
Save the workflow file and run the job.
1. Select **Commit changes...** and push the changes to the `main` branch.
After pushing the commit, the workflow starts automatically.
2. Go to the **Actions** tab. It displays the workflow.
Selecting the workflow shows you the breakdown of all the steps.
3. When the workflow is complete, go to your
[repositories on Docker Hub](https://hub.docker.com/repositories).
If you see the new repository in that list, it means GitHub Actions
successfully pushed the image to Docker Hub.
## Summary
In this section, you learned how to set up a GitHub Actions workflow for your Node.js application.
Related information:
- [Introduction to GitHub Actions](../../build/ci/github-actions/index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
In this module, you learned how to add a GitHub Actions workflow to an existing Docker project, optimize your workflow to improve build times and reduce the number of pull requests, and push only specific versions to Docker Hub. You can also set up nightly tests against the latest tag, test each PR, or use Git tags to tag your images.
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.
You can also consider deploying your application. For detailed instructions, see:
{{< button text="Deploy your app" url="./deploy.md" >}}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs]({{% param "repo" %}}/issues/new?title=[Node.js%20docs%20feedback]) GitHub repository. Alternatively, [create a PR]({{% param "repo" %}}/pulls) to suggest updates.
{{< button text="Develop using Kubernetes" url="./deploy.md" >}}


@@ -0,0 +1,156 @@
---
title: Containerize a Node.js application
keywords: node.js, node, containerize, initialize
description: Learn how to containerize a Node.js application.
redirect_from:
- /get-started/nodejs/build-images/
- /language/nodejs/build-images/
- /language/nodejs/run-containers/
---
## Prerequisites
* You have installed the latest version of [Docker
Desktop](../../get-docker.md).
* You have a [git client](https://git-scm.com/downloads). The examples in this
section use a command-line based git client, but you can use any client.
## Overview
This section walks you through containerizing and running a Node.js
application.
## Get the sample application
Clone the sample application to use with this guide. Open a terminal, change
directory to a directory that you want to work in, and run the following command
to clone the repository:
```console
$ git clone https://github.com/docker/docker-nodejs-sample
```
## Test the application without Docker (optional)
You can test the application locally without Docker before you continue building
and running the application with Docker. This section requires you to have
Node.js 18 or later installed on your machine. Download and install
[Node.js](https://nodejs.org/).
Open a terminal, change directory to the `docker-nodejs-sample` directory, and
run the following command to install the packages.
```console
$ npm install
```
When the packages have finished installing, run the following command to start
the application.
```console
$ node src/index.js
```
Open a browser and view the application at [http://localhost:3000](http://localhost:3000). You should see a simple todo application.
In the terminal, press `ctrl`+`c` to stop the application.
## Initialize Docker assets
Now that you have an application, you can use `docker init` to create the
necessary Docker assets to containerize your application. Inside the
`docker-nodejs-sample` directory, run the `docker init` command in a terminal.
Refer to the following example to answer the prompts from `docker init`.
```console
$ docker init
Welcome to the Docker Init CLI!
This utility will walk you through creating the following files with sensible defaults for your project:
- .dockerignore
- Dockerfile
- compose.yaml
Let's get started!
? What application platform does your project use? Node
? What version of Node do you want to use? 18.0.0
? Which package manager do you want to use? npm
? What command do you want to use to start the app: node src/index.js
? What port does your server listen on? 3000
```
You should now have the following contents in your `docker-nodejs-sample`
directory.
```
├── docker-nodejs-sample/
│ ├── spec/
│ ├── src/
│ ├── .dockerignore
│ ├── .gitignore
│ ├── compose.yaml
│ ├── Dockerfile
│ ├── package-lock.json
│ ├── package.json
│ └── README.md
```
To learn more about the files that `docker init` added, see the following:
- [Dockerfile](../../engine/reference/builder.md)
- [.dockerignore](../../engine/reference/builder.md#dockerignore-file)
- [compose.yaml](../../compose/compose-file/_index.md)
## Run the application
Inside the `docker-nodejs-sample` directory, run the following command in a
terminal.
```console
$ docker compose up --build
```
Open a browser and view the application at [http://localhost:3000](http://localhost:3000). You should see a simple todo application.
In the terminal, press `ctrl`+`c` to stop the application.
### Run the application in the background
You can run the application detached from the terminal by adding the `-d`
option. Inside the `docker-nodejs-sample` directory, run the following command
in a terminal.
```console
$ docker compose up --build -d
```
Open a browser and view the application at [http://localhost:3000](http://localhost:3000).
You should see a simple todo application.
In the terminal, run the following command to stop the application.
```console
$ docker compose down
```
For more information about Compose commands, see the [Compose CLI
reference](../../compose/reference/_index.md).
## Summary
In this section, you learned how you can containerize and run your Node.js
application using Docker.
Related information:
- [Dockerfile reference](../../engine/reference/builder.md)
- [Build with Docker guide](../../build/guide/index.md)
- [.dockerignore file reference](../../engine/reference/builder.md#dockerignore-file)
- [Docker Compose overview](../../compose/_index.md)
## Next steps
In the next section, you'll learn how you can develop your application using
containers.
{{< button text="Develop your application" url="develop.md" >}}


@@ -1,11 +1,130 @@
---
title: Deploy your app
keywords: deploy, cloud, ACI, ECS, NodeJS, local, development
description: Learn how to deploy your application
title: Test your deployment
keywords: deploy, kubernetes, node, node.js
description: Learn how to develop locally using Kubernetes
---
{{< include "deploy.md" >}}
## Prerequisites
## Feedback
- Complete all the previous sections of this guide, starting with [Containerize a Node.js application](containerize.md).
- [Turn on Kubernetes](/desktop/kubernetes/#turn-on-kubernetes) in Docker Desktop.
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs]({{% param "repo" %}}/issues/new?title=[Node.js%20docs%20feedback]) GitHub repository. Alternatively, [create a PR]({{% param "repo" %}}/pulls) to suggest updates.
## Overview
In this section, you'll learn how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine. This allows you to test and debug your workloads on Kubernetes locally before deploying.
## Create a Kubernetes YAML file
In the cloned repository's directory, create a file named `docker-node-kubernetes.yaml`. Open the file in an IDE or text editor and add the following contents.
Replace `DOCKER_USERNAME/REPO_NAME` with your Docker username and the name of the repository that you created in [Configure CI/CD for your Node.js application](configure-ci-cd.md).
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-nodejs-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      todo: web
  template:
    metadata:
      labels:
        todo: web
    spec:
      containers:
        - name: todo-site
          image: DOCKER_USERNAME/REPO_NAME
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: todo-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    todo: web
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy, of your pod. That pod, which is
described under the `template` key, has just one container in it, based on
the image built by GitHub Actions in [Configure CI/CD for your Node.js application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3000 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
## Deploy and check your application
1. In a terminal, navigate to where you created `docker-node-kubernetes.yaml`
and deploy your application to Kubernetes.
```console
$ kubectl apply -f docker-node-kubernetes.yaml
```
You should see output that looks like the following, indicating your Kubernetes objects were created successfully.
```shell
deployment.apps/docker-nodejs-demo created
service/todo-entrypoint created
```
2. Make sure everything worked by listing your deployments.
```console
$ kubectl get deployments
```
Your deployment should be listed as follows:
```shell
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
docker-nodejs-demo   1/1     1            1           6s
```
This indicates that the one pod you asked for in your YAML is up and running. Do the same check for your services.
```console
$ kubectl get services
```
You should get output like the following.
```shell
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP          7d22h
todo-entrypoint   NodePort    10.111.101.229   <none>        3000:30001/TCP   33s
```
In addition to the default `kubernetes` service, you can see your `todo-entrypoint` service, accepting traffic on port 30001/TCP.
3. Open a browser and visit your app at `localhost:30001`. You should see your
application.
4. Run the following command to tear down your application.
```console
$ kubectl delete -f docker-node-kubernetes.yaml
```
## Summary
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](../../desktop/kubernetes.md)
- [Swarm mode overview](../../engine/swarm/_index.md)


@@ -1,246 +1,288 @@
---
title: Use containers for development
keywords: get started, NodeJS, local, development
description: Learn how to develop your application locally.
title: Use containers for Node.js development
keywords: node, node.js, development
description: Learn how to develop your Node.js application locally using containers.
aliases:
- /get-started/nodejs/develop/
---
## Prerequisites
Work through the steps to build an image and run it as a containerized application in [Run your image as a container](run-containers.md).
Complete [Containerize a Node.js application](containerize.md).
## Introduction
## Overview
In this module, we'll walk through setting up a local development environment for the application we built in the previous modules. We'll use Docker to build our images and Docker Compose to make everything a whole lot easier.
In this section, you'll learn how to set up a development environment for your containerized application. This includes:
- Adding a local database and persisting data
- Configuring your container to run a development environment
- Debugging your containerized application
## Local database and containers
## Add a local database and persist data
First, we'll take a look at running a database in a container and how we use volumes and networking to persist our data and allow our application to talk with the database. Then we'll pull everything together into a compose file which will allow us to set up and run a local development environment with one command. Finally, we'll take a look at connecting a debugger to our application running inside a container.
You can use containers to set up local services, like a database. In this section, you'll update the `compose.yaml` file to define a database service and a volume to persist data.
Instead of downloading MongoDB, installing, configuring and then running the Mongo database as a service, we can use the Docker Official Image for MongoDB and run it in a container.
Open the `compose.yaml` file in an IDE or text editor. You'll notice it
already contains commented-out instructions for a Postgres database and volume.
Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to store our persistent data and configuration. Let's use the managed volumes feature that Docker provides instead of using bind mounts. For more information, see [Use volumes](../../storage/volumes.md).
Open `src/persistence/postgres.js` in an IDE or text editor. You'll notice that
this application uses a Postgres database and requires some environment
variables in order to connect to the database. The `compose.yaml` file doesn't
have these variables defined.
Let's create our volumes now. We'll create one for the data and one for the configuration of MongoDB.
You need to update the following items in the `compose.yaml` file:
- Uncomment all of the database instructions.
- Add the environment variables under the server service.
- Add `secrets` to the server service for the database password.
The following is the updated `compose.yaml` file.
```yaml
services:
  server:
    build:
      context: .
    environment:
      NODE_ENV: production
      POSTGRES_HOST: db
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD_FILE: /run/secrets/db-password
      POSTGRES_DB: example
    ports:
      - 3000:3000
    depends_on:
      db:
        condition: service_healthy
    secrets:
      - db-password
  db:
    image: postgres
    restart: always
    user: postgres
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=example
      - POSTGRES_PASSWORD_FILE=/run/secrets/db-password
    expose:
      - 5432
    healthcheck:
      test: [ "CMD", "pg_isready" ]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  db-data:
secrets:
  db-password:
    file: db/password.txt
```
> **Note**
>
> To learn more about the instructions in the Compose file, see [Compose file
> reference](/compose/compose-file/).
Before you run the application using Compose, notice that this Compose file uses
`secrets` and specifies a `password.txt` file to hold the database's password.
You must create this file as it's not included in the source repository.
In the cloned repository's directory, create a new directory named `db`. Inside the `db` directory, create a file named `password.txt`. Open `password.txt` in an IDE or text editor and add a password of your choice.
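For example, from the repository's root directory you could create the file like this (the password value shown is only a placeholder; choose your own):

```shell
# Create the secret file that the Compose file references.
# 'db-53cr3t' is an example value only; substitute your own password.
mkdir -p db
printf 'db-53cr3t' > db/password.txt
```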
You should now have the following contents in your `docker-nodejs-sample`
directory.
```
├── docker-nodejs-sample/
│ ├── db/
│ │ └── password.txt
│ ├── spec/
│ ├── src/
│ ├── .dockerignore
│ ├── .gitignore
│ ├── compose.yaml
│ ├── Dockerfile
│ ├── package-lock.json
│ ├── package.json
│ └── README.md
```
Run the following command to start your application.
```console
$ docker compose up --build
```
Open a browser and verify that the application is running at [http://localhost:3000](http://localhost:3000).
Add some items to the todo list to test data persistence.
After adding some items to the todo list, press `ctrl+c` in the terminal to stop your application.
In the terminal, run `docker compose rm` to remove your containers and then run `docker compose up` to run your application again.
```console
$ docker compose rm
$ docker compose up --build
```
Refresh [http://localhost:3000](http://localhost:3000) in your browser and verify that the todo items persisted, even after the containers were removed and ran again.
## Configure and run a development container
You can use a bind mount to mount your source code into the container. The container can then see the changes you make to the code immediately, as soon as you save a file. This means that you can run processes, like nodemon, in the container that watch for filesystem changes and respond to them. To learn more about bind mounts, see [Storage overview](../../storage/index.md).
In addition to adding a bind mount, you can configure your Dockerfile and `compose.yaml` file to install development dependencies and run development tools.
### Update your Dockerfile for development
Open the Dockerfile in an IDE or text editor. Note that the Dockerfile doesn't
install development dependencies and doesn't run nodemon. You'll
need to update your Dockerfile to install the development dependencies and run
nodemon.
Rather than creating one Dockerfile for production, and another Dockerfile for
development, you can use one multi-stage Dockerfile for both.
Update your Dockerfile to the following multi-stage Dockerfile.
```dockerfile
# syntax=docker/dockerfile:1

ARG NODE_VERSION=18.0.0

FROM node:${NODE_VERSION}-alpine as base
WORKDIR /usr/src/app
EXPOSE 3000

FROM base as dev
RUN --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    --mount=type=cache,target=/root/.npm \
    npm ci --include=dev
USER node
COPY . .
CMD npm run dev

FROM base as prod
ENV NODE_ENV production
RUN --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev
USER node
COPY . .
CMD node src/index.js
```
In the Dockerfile, you first add a label `as base` to the `FROM
node:${NODE_VERSION}-alpine` statement. This allows you to refer to this build
stage in other build stages. Next, you add a new build stage labeled `dev` to
install your dev dependencies and start the container using `npm run dev`.
Finally, you add a stage labeled `prod` that omits the dev dependencies and runs
your application using `node src/index.js`. To learn more about multi-stage
builds, see [Multi-stage builds](../../build/building/multi-stage.md).
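The `dev` stage's `CMD npm run dev` assumes your `package.json` defines a `dev` script that runs a file watcher such as nodemon. The sample application already provides one; if you're adapting your own project, it could look something like this (hypothetical snippet, assuming nodemon is installed as a dev dependency and your entry point is `src/index.js`):

```json
{
  "scripts": {
    "dev": "nodemon src/index.js"
  }
}
```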
Next, you'll need to update your Compose file to use the new stage.
### Update your Compose file for development
To run the `dev` stage with Compose, you need to update your `compose.yaml` file.
Open your `compose.yaml` file in an IDE or text editor, and then add the
`target: dev` instruction to target the `dev` stage from your multi-stage
Dockerfile.
Also, add a new volume to the server service for the bind mount. For this application, you'll mount `./src` from your local machine to `/usr/src/app/src` in the container.
The following is the updated Compose file.
```yaml
services:
  server:
    build:
      context: .
      target: dev
    environment:
      NODE_ENV: production
      POSTGRES_HOST: db
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD_FILE: /run/secrets/db-password
      POSTGRES_DB: example
    ports:
      - 3000:3000
    depends_on:
      db:
        condition: service_healthy
    secrets:
      - db-password
    volumes:
      - ./src:/usr/src/app/src
  db:
    image: postgres
    restart: always
    user: postgres
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=example
      - POSTGRES_PASSWORD_FILE=/run/secrets/db-password
    expose:
      - 5432
    healthcheck:
      test: [ "CMD", "pg_isready" ]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  db-data:
secrets:
  db-password:
    file: db/password.txt
```
### Run your development container and debug your application
Run the following command to run your application with the new changes to the `Dockerfile` and `compose.yaml` file.
```console
$ docker compose up --build
```
Open a browser and verify that the application is running at [http://localhost:3000](http://localhost:3000).
Any changes to the application's source files on your local machine will now be
immediately reflected in the running container.
Open `docker-nodejs-sample/src/static/js/app.js` in an IDE or text editor and update the button text on line 109 from `Add Item` to `Add`.
```diff
- {submitting ? 'Adding...' : 'Add Item'}
+ {submitting ? 'Adding...' : 'Add'}
```
Refresh [http://localhost:3000](http://localhost:3000) in your browser and verify that the updated text appears.
You can now connect an inspector client to your application for debugging. For
more details about inspector clients, see the [Node.js
documentation](https://nodejs.org/en/docs/guides/debugging-getting-started).
## Summary
In this section, you took a look at setting up your Compose file to add a local
database and persist data. You also learned how to create a multi-stage
Dockerfile and set up a bind mount for development.
Related information:
- [Volumes top-level element](/compose/compose-file/07-volumes/)
- [Services top-level element](/compose/compose-file/05-services/)
- [Multi-stage builds](../../build/building/multi-stage.md)
## Next steps
In the next section, you'll learn how to run unit tests using Docker.
{{< button text="Run your tests" url="run-tests.md" >}}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs]({{% param "repo" %}}/issues/new?title=[Node.js%20docs%20feedback]) GitHub repository. Alternatively, [create a PR]({{% param "repo" %}}/pulls) to suggest updates.

---
title: Run your image as a container
keywords: get started, Node.js, run, container
description: Learn how to run the image as a container.
aliases:
- /get-started/nodejs/run-containers/
---
## Prerequisites
Work through the steps to build a Node.js image in [Build your Node image](build-images.md).
## Overview
In the previous module, we created our sample application and a Dockerfile, then used the `docker build` command to create an image. Now that we have an image, we can run it and see if our application works correctly.
A container is a normal operating system process, except that this process is isolated: it has its own file system, its own networking, and its own isolated process tree, separate from the host.
To run an image inside of a container, we use the `docker run` command. The `docker run` command requires one parameter, the image name. Let's start our image and make sure it is running correctly. Execute the following command in your terminal.
```console
$ docker run node-docker
```
When you run this command, you'll notice that you're not returned to the command prompt. This is because our application is a REST server and runs in a loop waiting for incoming requests, without returning control to the OS until we stop the container.
Let's open a new terminal and make a POST request to the server using the curl command.
```console
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{"msg": "testing"}'
curl: (7) Failed to connect to localhost port 8000: Connection refused
```
Our curl command failed because the connection to our server was refused: we weren't able to connect to localhost on port 8000. This is expected, because our container runs in isolation, which includes networking. Let's stop the container and restart it with port 8000 published on our local network.
To stop the container, press ctrl-c. This returns you to the terminal prompt.
To publish a port for our container, we'll use the `--publish` flag (`-p` for short) on the `docker run` command. The format of the `--publish` flag is `[host port]:[container port]`. So, if we wanted to expose port 8000 inside the container on port 3000 outside the container, we would pass `3000:8000` to the `--publish` flag.
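As a concrete illustration of that format, the following hypothetical invocation would make the container's port 8000 reachable on host port 3000 (this guide itself maps 8000 to 8000):

```console
$ docker run --publish 3000:8000 node-docker
```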
Start the container and publish port 8000 in the container to port 8000 on the host.
```console
$ docker run --publish 8000:8000 node-docker
```
Now let's rerun the curl command from above. Remember to open a new terminal.
```console
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{"msg": "testing"}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}
```
Success! We were able to connect to the application running inside of our container on port 8000. Switch back to the terminal where your container is running and you should see the POST request logged to the console.
`2020-09-01T17:36:09.8770 INFO: POST /test`
Press ctrl-c to stop the container.
## Run in detached mode
This is great so far, but our sample application is a web server, and we shouldn't need to keep our terminal connected to the container. Docker can run your container in detached mode, that is, in the background. To do this, we can use the `--detach` flag, or `-d` for short. Docker starts your container as before, but this time it detaches from the container and returns you to the terminal prompt.
```console
$ docker run -d -p 8000:8000 node-docker
ce02b3179f0f10085db9edfccd731101868f58631bdf918ca490ff6fd223a93b
```
Docker started our container in the background and printed the Container ID on the terminal.
Again, let's make sure that our container is running properly. Run the same curl command from above.
```console
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{"msg": "testing"}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}
```
## List containers
Since we ran our container in the background, how do we know if it's running, or what other containers are running on our machine? To see a list of containers running on our machine, run `docker ps`. This is similar to how the ps command is used to see a list of processes on a Linux machine.
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 0.0.0.0:8000->8000/tcp wonderful_kalam
```
The `ps` command provides a lot of information about our running containers: the container ID, the image running inside the container, the command used to start the container, when it was created, the status, the ports that are exposed, and the container's name.
You're probably wondering where the name of our container came from. Since we didn't provide a name when we started it, Docker generated a random one. We'll fix this in a minute, but first we need to stop the container. To do so, run the `docker stop` command, passing either the container's name or its ID.
```console
$ docker stop wonderful_kalam
wonderful_kalam
```
Now rerun the `docker ps` command to see a list of running containers.
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
## Stop, start, and name containers
Docker containers can be started, stopped, and restarted. When we stop a container, it isn't removed; its status changes to stopped and the process inside the container is stopped. When we run the `docker ps` command, the default output only shows running containers. If we pass the `--all` flag (`-a` for short), we see all containers on our system, whether they're stopped or started.
```console
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 16 minutes ago Exited (0) 5 minutes ago wonderful_kalam
ec45285c456d node-docker "docker-entrypoint.s…" 28 minutes ago Exited (0) 20 minutes ago agitated_moser
fb7a41809e5d node-docker "docker-entrypoint.s…" 37 minutes ago Exited (0) 36 minutes ago goofy_khayyam
```
If you've been following along, you should see several containers listed. These are containers that we started and stopped but haven't removed.
Let's restart the container that we just stopped. Locate its name and substitute it in the restart command below.
```console
$ docker restart wonderful_kalam
```
Now, list all the containers again using the ps command.
```console
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 19 minutes ago Up 8 seconds 0.0.0.0:8000->8000/tcp wonderful_kalam
ec45285c456d node-docker "docker-entrypoint.s…" 31 minutes ago Exited (0) 23 minutes ago agitated_moser
fb7a41809e5d node-docker "docker-entrypoint.s…" 40 minutes ago Exited (0) 39 minutes ago goofy_khayyam
```
Notice that the container we just restarted started in detached mode and has port 8000 exposed. Also, observe that its status is "Up X seconds". When you restart a container, it starts with the same flags and commands it was originally started with.
Let's stop and remove all of our containers, and then take a look at fixing the random naming issue.
Stop the container we just started. Find the name of your running container and substitute it in the command below.
```console
$ docker stop wonderful_kalam
wonderful_kalam
```
Now that all of our containers are stopped, let's remove them. When a container is removed, it's no longer running, nor is it in a stopped state; its process has stopped and its metadata has been deleted.
To remove a container, run the `docker rm` command, passing the container name. You can pass multiple container names in a single command.
Again, make sure you replace the container names in the command below with the container names from your system.
```console
$ docker rm wonderful_kalam agitated_moser goofy_khayyam
wonderful_kalam
agitated_moser
goofy_khayyam
```
Run the `docker ps --all` command again to see that all containers are gone.
Now let's address the pesky random naming issue. Standard practice is to name your containers, for the simple reason that it makes it easier to identify what's running in the container and which application or service it's associated with. Just like good naming conventions for variables make your code easier to read, naming your containers makes them easier to manage.
To name a container, we just need to pass the `--name` flag to the `docker run` command.
```console
$ docker run -d -p 8000:8000 --name rest-server node-docker
1aa5d46418a68705c81782a58456a4ccdb56a309cb5e6bd399478d01eaa5cdda
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1aa5d46418a6 node-docker "docker-entrypoint.s…" 3 seconds ago Up 3 seconds 0.0.0.0:8000->8000/tcp rest-server
```
Now, we can easily identify our container based on the name.
## Next steps
In this module, we took a look at running containers, publishing ports, and running containers in detached mode. We also learned to manage containers by starting, stopping, and restarting them, and to name containers so they're more easily identifiable. In the next module, we'll learn how to run a database in a container and connect it to our application. See:
{{< button text="How to develop your application" url="develop.md" >}}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs]({{% param "repo" %}}/issues/new?title=[Node.js%20docs%20feedback]) GitHub repository. Alternatively, [create a PR]({{% param "repo" %}}/pulls) to suggest updates.

---
title: Run Node.js tests in a container
keywords: node.js, node, test
description: Learn how to run your Node.js tests in a container.
---
## Prerequisites
Complete all the previous sections of this guide, starting with [Containerize a Node.js application](containerize.md).
## Overview
Testing is an essential part of modern software development. Testing can mean a lot of things to different development teams. There are unit tests, integration tests, and end-to-end testing. In this guide, you take a look at running your unit tests in Docker when developing and when building.
## Run tests when developing locally
The sample application already has the Jest package for running tests and has tests inside the `spec` directory. When developing locally, you can easily use Compose to run your tests.
Run the following command to run the test script from the `package.json` file inside a container.
```console
$ docker compose run server npm run test
```
To learn more about the command, see [docker compose run](/engine/reference/commandline/compose_run/).
You should see output like the following.
```console
> docker-nodejs@1.0.0 test
> jest
PASS spec/routes/deleteItem.spec.js
PASS spec/routes/getItems.spec.js
PASS spec/routes/addItem.spec.js
PASS spec/routes/updateItem.spec.js
PASS spec/persistence/sqlite.spec.js
● Console
console.log
Using sqlite database at /tmp/todo.db
at Database.log (src/persistence/sqlite.js:18:25)
console.log
Using sqlite database at /tmp/todo.db
at Database.log (src/persistence/sqlite.js:18:25)
console.log
Using sqlite database at /tmp/todo.db
at Database.log (src/persistence/sqlite.js:18:25)
console.log
Using sqlite database at /tmp/todo.db
at Database.log (src/persistence/sqlite.js:18:25)
console.log
Using sqlite database at /tmp/todo.db
at Database.log (src/persistence/sqlite.js:18:25)
Test Suites: 5 passed, 5 total
Tests: 9 passed, 9 total
Snapshots: 0 total
Time: 2.008 s
Ran all test suites.
```
## Run tests when building
To run your tests when building, you need to update your Dockerfile to add a new test stage.
### Running locally and testing the application
Lets build our Docker image and confirm everything is running properly. Run the following command to build and run your Docker image in a container.
```console
$ docker compose -f docker-compose.dev.yml up --build
```
Now lets test our application by POSTing a JSON payload and then make an HTTP GET request to make sure our JSON was saved correctly.
```console
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{"msg": "testing"}'
```
Now, perform a GET request to the same endpoint to make sure our JSON payload was saved and retrieved correctly. The “id” and “createDate” will be different for you.
```console
$ curl http://localhost:8000/test
{"code":"success","payload":[{"msg":"testing","id":"e88acedb-203d-4a7d-8269-1df6c1377512","createDate":"2020-10-11T23:21:16.378Z"}]}
```
## Install Mocha
Run the following command to install Mocha and add it to the developer dependencies:
```console
$ npm install --save-dev mocha
```
## Update package.json and Dockerfile to run tests
Okay, now that we know our application is running properly, lets try and run our tests inside of the container. Well use the same docker run command we used above but this time, well override the CMD that is inside of our container with npm run test. This will invoke the command that is in the package.json file under the “script” section. See below.
```javascript
{
...
"scripts": {
"test": "mocha ./**/*.js",
"start": "nodemon --inspect=0.0.0.0:9229 -L server.js"
},
...
}
```
Below is the Docker command to start the container and run tests:
```console
$ docker compose -f docker-compose.dev.yml run notes npm run test
```
When you run the tests, you should get an error like the following:
```console
> mocha ./**/*.js
sh: mocha: not found
```
The current Dockerfile does not install dev dependencies in the image, so mocha cannot be found. To fix this, you can update the Dockerfile to install the dev dependencies.
The following is the updated Dockerfile.
```dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine
ENV NODE_ENV=production
ARG NODE_VERSION=18.0.0
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install --include=dev
FROM node:${NODE_VERSION}-alpine as base
WORKDIR /usr/src/app
EXPOSE 3000
FROM base as dev
RUN --mount=type=bind,source=package.json,target=package.json \
--mount=type=bind,source=package-lock.json,target=package-lock.json \
--mount=type=cache,target=/root/.npm \
npm ci --include=dev
USER node
COPY . .
CMD ["node", "server.js"]
```
Run the command again, and this time rebuild the image to use the new Dockerfile.
```console
$ docker compose -f docker-compose.dev.yml run --build notes npm run test
```
When you run the tests this time, you should get the following output:
```console
> mocha ./**/*.js
Array
#indexOf()
✔ should return -1 when the value is not present
1 passing (6ms)
```
This image with dev dependencies installed is not suitable for a production image. Rather than creating multiple Dockerfiles, we can create a multi-stage Dockerfile to create an image for testing and an image for production.
### Multi-stage Dockerfile for testing
In addition to running the tests on command, we can run them when we build our image, using a multi-stage Dockerfile. The following Dockerfile will run our tests and build our production image.
```dockerfile
# syntax=docker/dockerfile:1

FROM node:18-alpine as base
WORKDIR /code
COPY package.json package.json
COPY package-lock.json package-lock.json

FROM base as test
RUN npm ci --include=dev
COPY . .
CMD ["npm", "run", "test"]

FROM base as prod
ENV NODE_ENV=production
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```
We first add the label `as base` to the `FROM node:18-alpine` statement, which lets us refer to this build stage from other build stages. Next, we add a new build stage labeled `test`, which we'll use for running our tests.

Now let's rebuild our image and run our tests. We'll run the same `docker build` command as above, but this time we'll add the `--target test` flag so that the build stops at the test build stage.
```console
$ docker build -t node-docker --target test .
[+] Building 66.5s (12/12) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 662B 0.0s
=> [internal] load .dockerignore
...
=> [internal] load build context 4.2s
=> => transferring context: 9.00MB 4.1s
=> [base 2/4] WORKDIR /code 0.2s
=> [base 3/4] COPY package.json package.json 0.0s
=> [base 4/4] COPY package-lock.json package-lock.json 0.0s
=> [test 1/2] RUN npm ci 6.5s
=> [test 2/2] COPY . .
```
Now that our test image is built, we can run it in a container and see if our tests pass.
```console
$ docker run -it --rm -p 8000:8000 node-docker
> node-docker@1.0.0 test /code
> mocha ./**/*.js
Array
#indexOf()
✓ should return -1 when the value is not present
1 passing (12ms)
```
The output above is truncated, but you can see that the Mocha test runner completed and all our tests passed.
This works, but at the moment we have to run two Docker commands to build the image and run the tests. We can improve this by using a `RUN` statement instead of the `CMD` statement in the test stage. The `CMD` statement isn't executed while the image builds; it runs only when you start a container from the image. With a `RUN` statement, the tests run while the image builds, and the build stops if they fail.
Update your Dockerfile to use a `RUN` statement in the test stage, as follows.
```dockerfile
# syntax=docker/dockerfile:1

FROM node:18-alpine as base
WORKDIR /code
COPY package.json package.json
COPY package-lock.json package-lock.json

FROM base as test
RUN npm ci --include=dev
COPY . .
RUN npm run test

FROM base as prod
ENV NODE_ENV=production
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```
Now, to run the tests, you only need to run the `docker build` command. Include `--progress=plain` to view the build output, `--no-cache` to ensure the tests always run, and `--target test` to stop at the test stage.
```console
$ docker build -t node-docker --progress=plain --no-cache --target test .
[+] Building 8.9s (13/13) FINISHED
...
> node-docker@1.0.0 test /code
> mocha ./**/*.js

  Array
    #indexOf()
      ✓ should return -1 when the value is not present

  1 passing (9ms)
...
```
The output is truncated again for simplicity, but you can see that our tests ran and passed. Now let's break one of the tests and observe the output when the tests fail.
Open the `test/test.js` file and change line 5 as follows.
```javascript
1 var assert = require('assert');
2 describe('Array', function() {
3   describe('#indexOf()', function() {
4     it('should return -1 when the value is not present', function() {
5       assert.equal([1, 2, 3].indexOf(3), -1);
6     });
7   });
8 });
```
Now, run the same `docker build` command as above, and observe that the build fails and the failing test information is printed to the console.

You should see output containing the following.
```console
$ docker build -t node-docker --progress=plain --no-cache --target test .
...
> node-docker@1.0.0 test /code
> mocha ./**/*.js

  Array
    #indexOf()
      1) should return -1 when the value is not present

  0 passing (12ms)
  1 failing

  1) Array
       #indexOf()
         should return -1 when the value is not present:

      AssertionError [ERR_ASSERTION]: 2 == -1
      + expected - actual

      -2
      +-1

      at Context.<anonymous> (test/test.js:5:14)
      at processImmediate (internal/timers.js:461:21)

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! node-docker@1.0.0 test: `mocha ./**/*.js`
npm ERR! Exit status 1
...
```
## Summary

In this section, you learned how to run tests when developing locally using Compose, and how to run tests when building your image.

Related information:

- [docker compose run](/engine/reference/commandline/compose_run/)
- [Build with Docker guide](../../build/guide/index.md)

## Next steps

Next, you'll learn how to set up a CI/CD pipeline using GitHub Actions.

{{< button text="Configure CI/CD" url="configure-ci-cd.md" >}}

## Feedback

Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs]({{% param "repo" %}}/issues/new?title=[Node.js%20docs%20feedback]) GitHub repository. Alternatively, [create a PR]({{% param "repo" %}}/pulls) to suggest updates.
