chore: add aliases, run formatter

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
David Karlsson 2024-10-03 17:20:16 +02:00
parent 2e59bd4eb7
commit 4238c62877
80 changed files with 1463 additions and 1218 deletions

View File

@ -5,6 +5,8 @@ description: Explore the Docker guides
params:
icon: developer_guide
layout: landing
aliases:
- /learning-paths/
---
This section contains more advanced guides to help you learn how Docker can optimize your development workflows.

View File

@ -10,7 +10,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/cpp/
- /guides/language/cpp/
languages: [cpp]
levels: [beginner]
params:
@ -23,10 +24,10 @@ The C++ getting started guide teaches you how to create a containerized C++ appl
>
> Docker would like to thank [Pradumna Saraf](https://twitter.com/pradumna_saraf) for his contribution to this guide.
- Containerize and run a C++ application
- Set up a local environment to develop a C++ application using containers
- Configure a CI/CD pipeline for a containerized C++ application using GitHub Actions
- Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the C++ getting started modules, you should be able to containerize your own C++ application based on the examples and instructions provided in this guide.

View File

@ -5,7 +5,8 @@ weight: 40
keywords: ci/cd, github actions, c++, shiny
description: Learn how to configure CI/CD using GitHub Actions for your C++ application.
aliases:
- /language/cpp/configure-ci-cd/
- /guides/language/cpp/configure-ci-cd/
---
## Prerequisites
@ -69,27 +70,24 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -123,8 +121,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your C++ application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
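If you also want the workflow above to apply explicit image tags, the `Build and push` step can be extended along these lines. This is a sketch only: the image name is an assumed placeholder, not a value from this guide, and the guide's elided `with:` block may already set `push` and `tags`.

```yaml
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          # assumed placeholder image name; replace with your repository
          tags: ${{ vars.DOCKER_USERNAME }}/my-cpp-app:latest
```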
## Next steps

View File

@ -5,12 +5,13 @@ weight: 10
keywords: C++, containerize, initialize
description: Learn how to containerize a C++ application.
aliases:
- /language/cpp/containerize/
- /guides/language/cpp/containerize/
---
## Prerequisites
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Overview
@ -38,9 +39,10 @@ directory.
```
To learn more about the files in the repository, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yml](/reference/compose-file/_index.md)
## Run the application
@ -67,7 +69,6 @@ $ docker compose up --build -d
Open a browser and view the application at [http://localhost:8080](http://localhost:8080).
In the terminal, run the following command to stop the application.
```console
@ -83,7 +84,8 @@ In this section, you learned how you can containerize and run your C++
application using Docker.
Related information:
- [Docker Compose overview](/manuals/compose/_index.md)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, c++
description: Learn how to develop locally using Kubernetes
aliases:
- /language/cpp/deploy/
- /guides/language/cpp/deploy/
---
## Prerequisites
@ -42,9 +43,9 @@ spec:
service: ok-api
spec:
containers:
- name: ok-api-service
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -56,21 +57,21 @@ spec:
selector:
service: ok-api
ports:
- port: 8080
targetPort: 8080
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your C++ application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -133,9 +134,10 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
## Summary
In this section, you learned how to use Docker Desktop to deploy your C++ application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,7 +5,8 @@ weight: 20
keywords: C++, local, development
description: Learn how to develop your C++ application locally.
aliases:
- /language/cpp/develop/
- /guides/language/cpp/develop/
---
## Prerequisites
@ -66,9 +67,10 @@ Press `ctrl+c` in the terminal to stop your application.
In this section, you also learned how to use Compose Watch to automatically rebuild and run your container when you update your code.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps

View File

@ -8,6 +8,8 @@ summary: |
streamline your development and deployment processes.
levels: [beginner]
subjects: [databases]
aliases:
- /guides/use-case/databases/
params:
time: 20 minutes
---
@ -69,7 +71,7 @@ In this command:
- `mysql:latest` specifies that you want to use the latest version of the MySQL
image.
To verify that your container is running, run `docker ps` in a terminal.
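The `docker run` options described above can also be captured declaratively. As a minimal sketch (Compose is covered later in this guide, so treat this as a preview rather than a required step):

```yaml
services:
  db:
    image: mysql:latest
    container_name: my-mysql
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw
      MYSQL_DATABASE: mydb
```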
{{< /tab >}}
{{< tab name="GUI" >}}
@ -83,11 +85,12 @@ To run a container using the GUI:
The **Run a new container** modal appears.
4. Expand **Optional settings**.
5. In the optional settings, specify the following:
- **Container name**: `my-mysql`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
![The optional settings screen with the options specified.](images/databases-1.webp)
6. Select `Run`.
@ -186,7 +189,6 @@ guide. To stop and remove a container, either:
- Or, in the Docker Dashboard, select the **Delete** icon next to your
container in the **Containers** view.
Next, you can use either the Docker Desktop GUI or CLI to run the container with
the port mapped.
@ -230,9 +232,9 @@ To run a container using the GUI:
- **Container name**: `my-mysql`
- **Host port** for the **3306/tcp** port: `3307`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
![The optional settings screen with the options specified.](images/databases-2.webp)
6. Select `Run`.
@ -331,7 +333,7 @@ To run a database container with a volume attached, and then verify that the
data persists:
1. Run the container and attach the volume.
```console
$ docker run --name my-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=mydb -v my-db-volume:/var/lib/mysql -d mysql:latest
```
@ -340,11 +342,11 @@ data persists:
2. Create some data in the database. Use the `docker exec` command to run
`mysql` inside the container and create a table.
```console
$ docker exec my-mysql mysql -u root -pmy-secret-pw -e "CREATE TABLE IF NOT EXISTS mydb.mytable (column_name VARCHAR(255)); INSERT INTO mydb.mytable (column_name) VALUES ('value');"
```
This command uses the `mysql` tool in the container to create a table named
`mytable` with a column named `column_name`, and finally inserts a value of
`value`.
@ -363,6 +365,7 @@ data persists:
```console
$ docker run --name my-mysql -v my-db-volume:/var/lib/mysql -d mysql:latest
```
5. Verify that the table you created still exists. Use the `docker exec` command
again to run `mysql` inside the container.
@ -374,6 +377,7 @@ data persists:
records from the `mytable` table.
You should see output like the following.
```console
column_name
value
@ -386,32 +390,35 @@ To run a database container with a volume attached, and then verify that the
data persists:
1. Run a container with a volume attached.
1. In the Docker Dashboard, select the global search at the top of the window.
2. Specify `mysql` in the search box, and select the **Images** tab if not
already selected.
3. Hover over the **mysql** image and select **Run**.
The **Run a new container** modal appears.
4. Expand **Optional settings**.
5. In the optional settings, specify the following:
- **Container name**: `my-mysql`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
- **Volumes**:
- `my-db-volume`:`/var/lib/mysql`
![The optional settings screen with the options specified.](images/databases-3.webp)
Here, the name of the volume is `my-db-volume` and it is mounted in the
container at `/var/lib/mysql`.
6. Select `Run`.
2. Create some data in the database.
1. In the **Containers** view, next to your container select the **Show
container actions** icon, and then select **Open in terminal**.
2. Run the following command in the container's terminal to add a table.
```console
# mysql -u root -pmy-secret-pw -e "CREATE TABLE IF NOT EXISTS mydb.mytable (column_name VARCHAR(255)); INSERT INTO mydb.mytable (column_name) VALUES ('value');"
```
@ -420,35 +427,37 @@ data persists:
named `mytable` with a column named `column_name`, and finally inserts a
value of `value`.
3. In the **Containers** view, select the **Delete** icon next to your
container, and then select **Delete forever**. Without a volume, the table
you created would be lost when deleting the container.
4. Run a container with a volume attached.
1. In the Docker Dashboard, select the global search at the top of the window.
2. Specify `mysql` in the search box, and select the **Images** tab if not
already selected.
3. Hover over the **mysql** image and select **Run**.
The **Run a new container** modal appears.
4. Expand **Optional settings**.
5. In the optional settings, specify the following:
- **Container name**: `my-mysql`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
- **Volumes**:
- `my-db-volume`:`/var/lib/mysql`
![The optional settings screen with the options specified.](images/databases-3.webp)
6. Select `Run`.
5. Verify that the table you created still exists.
1. In the **Containers** view, next to your container select the **Show
container actions** icon, and then select **Open in terminal**.
2. Run the following command in the container's terminal to verify that table
you created still exists.
```console
# mysql -u root -pmy-secret-pw -e "SELECT * FROM mydb.mytable;"
```
@ -456,7 +465,6 @@ data persists:
This command uses the `mysql` tool in the container to select all the
records from the `mytable` table.
You should see output like the following.
```console
@ -489,6 +497,7 @@ guide. To stop and remove a container, either:
To build and run your custom image:
1. Create a Dockerfile.
1. Create a file named `Dockerfile` in your project directory. For this
example, you can create the `Dockerfile` in an empty directory of your
choice. This file will define how to build your custom MySQL image.
@ -521,13 +530,13 @@ To build and run your custom image:
`scripts`, and then create a file named `create_table.sql` with the
following content.
```text
CREATE TABLE IF NOT EXISTS mydb.myothertable (
  column_name VARCHAR(255)
);

INSERT INTO mydb.myothertable (column_name) VALUES ('other_value');
```
You should now have the following directory structure.
@ -539,6 +548,7 @@ To build and run your custom image:
```
2. Build your image.
1. In a terminal, change directory to the directory where your `Dockerfile`
is located.
2. Run the following command to build the image.
@ -546,6 +556,7 @@ To build and run your custom image:
```console
$ docker build -t my-custom-mysql .
```
In this command, `-t my-custom-mysql` tags (names) your new image as
`my-custom-mysql`. The period (.) at the end of the command specifies the
current directory as the context for the build, where Docker looks for the
@ -582,6 +593,7 @@ To build and run your custom image:
```
You should see output like the following.
```console
column_name
other_value
@ -600,10 +612,11 @@ you'll create a Compose file and use it to run a MySQL database container and a
To run your containers with Docker Compose:
1. Create a Docker Compose file.
1. Create a file named `compose.yaml` in your project directory. This file
will define the services, networks, and volumes.
2. Add the following content to the `compose.yaml` file.
```yaml
services:
db:
@ -643,7 +656,7 @@ To run your containers with Docker Compose:
allowing you to connect to the database from your host machine.
- `volumes` mounts `my-db-volume` to `/var/lib/mysql` inside the container
to persist database data.
In addition to the database service, there is a phpMyAdmin service. By
default, Compose sets up a single network for your app. Each container for
a service joins the default network and is both reachable by other
@ -652,13 +665,15 @@ To run your containers with Docker Compose:
service name, `db`, in order to connect to the database service. For more details about Compose, see the [Compose file reference](/reference/compose-file/).
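As a minimal sketch of the service-name networking described above, the phpMyAdmin container only needs to know the database's Compose service name. The `PMA_HOST` variable comes from the phpMyAdmin image's documented configuration; the exact service definitions here are assumptions based on this guide, not a replacement for its `compose.yaml`.

```yaml
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw
  phpmyadmin:
    image: phpmyadmin
    ports:
      - 8080:80
    environment:
      PMA_HOST: db # the Compose service name doubles as the hostname
```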
2. Run Docker Compose.
1. Open a terminal and change directory to the directory where your
`compose.yaml` file is located.
2. Run Docker Compose using the following command.
```console
$ docker compose up
```
You can now access phpMyAdmin at
[http://localhost:8080](http://localhost:8080) and connect to your
database using `root` as the username and `my-secret-pw` as the password.

View File

@ -9,6 +9,8 @@ summary: |
cache, and native multi-architecture support.
levels: [beginner]
products: [dbc]
aliases:
- /learning-paths/docker-build-cloud/
params:
featured: true
image: images/learning-paths/build-cloud.png

View File

@ -5,6 +5,8 @@ summary: Simplify the process of defining, configuring, and running multi-contai
description: Learn how to use Docker Compose to define and run multi-container Docker applications.
levels: [beginner]
products: [compose]
aliases:
- /learning-paths/docker-compose/
params:
featured: true
image: images/learning-paths/compose.png

View File

@ -10,6 +10,8 @@ description: |
your development workflow.
levels: [Beginner]
products: [scout]
aliases:
- /learning-paths/docker-scout/
params:
featured: true
image: images/learning-paths/scout.png

View File

@ -4,14 +4,15 @@ linkTitle: C# (.NET)
description: Containerize and develop .NET apps using Docker
summary: Learn how to containerize .NET applications using Docker, including building, running, and deploying .NET apps in Docker containers, with best practices and step-by-step examples.
keywords: getting started, .net
aliases:
- /language/dotnet/
- /guides/language/dotnet/
languages: [c-sharp]
levels: [beginner]
params:
time: 20 minutes
toc_min: 1
toc_max: 2
---
The .NET getting started guide teaches you how to create a containerized .NET application using Docker. In this guide, you'll learn how to:

View File

@ -5,7 +5,8 @@ weight: 40
keywords: .net, CI/CD
description: Learn how to Configure CI/CD for your .NET application
aliases:
- /language/dotnet/configure-ci-cd/
- /guides/language/dotnet/configure-ci-cd/
---
## Prerequisites
@ -77,33 +78,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and test
uses: docker/build-push-action@v6
with:
target: build
load: true
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -138,8 +135,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
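One common extension of the workflow above is caching build layers between runs. A hedged sketch using the GitHub Actions cache backend supported by `docker/build-push-action` (whether caching fits your pipeline is a judgment call, and the rest of the step's `with:` block is elided here):

```yaml
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          cache-from: type=gha
          cache-to: type=gha,mode=max
```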
## Next steps

View File

@ -8,6 +8,7 @@ aliases:
- /language/dotnet/build-images/
- /language/dotnet/run-containers/
- /language/dotnet/containerize/
- /guides/language/dotnet/containerize/
---
## Prerequisites

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, .net, local, development
description: Learn how to deploy your application
aliases:
- /language/dotnet/deploy/
- /guides/language/dotnet/deploy/
---
## Prerequisites
@ -52,7 +53,12 @@ spec:
initContainers:
- name: wait-for-db
image: busybox:1.28
command:
[
"sh",
"-c",
'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;',
]
containers:
- image: DOCKER_USERNAME/REPO_NAME
name: server
@ -138,14 +144,14 @@ status:
In this Kubernetes YAML file, there are four objects, separated by the `---`. In addition to a Service and Deployment for the database, the other two objects are:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
.NET application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
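The manifest above gates the server on the database with an init container; an alternative worth knowing is a readiness probe on the server container itself, so Kubernetes withholds traffic until the app responds. This is a sketch only: the probe path and port are assumptions, not values from this guide.

```yaml
containers:
  - name: server
    image: DOCKER_USERNAME/REPO_NAME
    readinessProbe:
      httpGet:
        path: / # assumed health endpoint
        port: 80 # assumed container port
      initialDelaySeconds: 5
      periodSeconds: 10
```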
@ -212,6 +218,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,7 +5,8 @@ weight: 20
keywords: .net, development
description: Learn how to develop your .NET application locally using containers.
aliases:
- /language/dotnet/develop/
- /guides/language/dotnet/develop/
---
## Prerequisites
@ -15,9 +16,10 @@ Complete [Containerize a .NET application](containerize.md).
## Overview
In this section, you'll learn how to set up a development environment for your containerized application. This includes:
- Adding a local database and persisting data
- Configuring Compose to automatically update your running Compose services as you edit and save your code
- Creating a development container that contains the .NET Core SDK tools and dependencies
## Update the application
@ -69,7 +71,6 @@ You should now have the following in your `docker-dotnet-sample` directory.
│ └── README.md
```
## Add a local database and persist data
You can use containers to set up local services, like a database. In this section, you'll update the `compose.yaml` file to define a database service and a volume to persist data.
@ -109,7 +110,7 @@ services:
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -233,7 +234,7 @@ Use Compose Watch to automatically update your running Compose services as you e
Open your `compose.yaml` file in an IDE or text editor and then add the Compose Watch instructions. The following is the updated `compose.yaml` file.
```yaml {hl_lines="11-14"}
services:
server:
build:
@ -262,7 +263,7 @@ services:
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -272,6 +273,7 @@ secrets:
db-password:
file: db/password.txt
```
Run the following command to run your application with Compose Watch.
```console
@ -335,7 +337,7 @@ ENTRYPOINT ["dotnet", "myWebApp.dll"]
The following is the updated `compose.yaml` file.
```yaml {hl_lines="5"}
services:
server:
build:
@ -351,8 +353,8 @@ services:
- action: rebuild
path: .
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
db:
image: postgres
restart: always
@ -367,7 +369,7 @@ services:
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -386,9 +388,10 @@ In this section, you took a look at setting up your Compose file to add a local
database and persist data. You also learned how to use Compose Watch to automatically rebuild and run your container when you update your code. And finally, you learned how to create a development container that contains the SDK tools and dependencies needed for development.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 30
keywords: .NET, test
description: Learn how to run your .NET tests in a container.
aliases:
- /language/dotnet/run-tests/
- /guides/language/dotnet/run-tests/
---
## Prerequisites
@ -46,7 +47,7 @@ To run your tests when building, you need to update your Dockerfile. You can cre
The following is the updated Dockerfile.
```dockerfile {hl_lines="9"}
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
@ -109,7 +110,8 @@ You should see output containing the following.
In this section, you learned how to run tests when developing locally using Compose and how to run tests when building your image.
Related information:
- [docker compose run](/reference/cli/docker/compose/run/)
## Next steps

View File

@ -1,23 +1,23 @@
---
title: PDF analysis and chat
description: Containerize generative AI (GenAI) apps using Docker
keywords: python, generative ai, genai, llm, neo4j, ollama, langchain
summary: |
This guide explains how to build a PDF bot using Docker and generative AI,
focusing on setting up a containerized environment for parsing PDF documents
and generating intelligent responses based on the content.
levels: [beginner]
subjects: [ai]
aliases:
- /guides/use-case/genai-pdf-bot/
params:
time: 20 minutes
---
The generative AI (GenAI) guide teaches you how to containerize an existing GenAI application using Docker. In this guide, you'll learn how to:
- Containerize and run a Python-based GenAI application
- Set up a local environment to run the complete GenAI stack locally for development
Start by containerizing an existing GenAI application.

View File

@ -4,6 +4,8 @@ linkTitle: Containerize your app
weight: 10
keywords: python, generative ai, genai, llm, neo4j, ollama, containerize, intitialize, langchain, openai
description: Learn how to containerize a generative AI (GenAI) application.
aliases:
- /guides/use-case/genai-pdf-bot/containerize/
---
## Prerequisites
@ -12,8 +14,8 @@ description: Learn how to containerize a generative AI (GenAI) application.
>
> GenAI applications can often benefit from GPU acceleration. Currently Docker Desktop supports GPU acceleration only on [Windows with the WSL2 backend](/manuals/desktop/gpu.md#using-nvidia-gpus-with-wsl2). Linux users can also access GPU acceleration using a native installation of the [Docker Engine](/manuals/engine/install/_index.md).
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md) or, if you are a Linux user and are planning to use GPU acceleration, [Docker Engine](/manuals/engine/install/_index.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
## Overview
@ -91,10 +93,10 @@ directory.
```
To learn more about the files that `docker init` added, see the following:
- [Dockerfile](../../../reference/dockerfile.md)
- [.dockerignore](../../../reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -130,7 +132,8 @@ In this section, you learned how you can containerize and run your GenAI
application using Docker.
Related information:
- [docker init CLI reference](../../../reference/cli/docker/init.md)
## Next steps

View File

@ -4,6 +4,8 @@ linkTitle: Develop your app
weight: 20
keywords: python, local, development, generative ai, genai, llm, neo4j, ollama, langchain, openai
description: Learn how to develop your generative AI (GenAI) application locally.
aliases:
- /guides/use-case/genai-pdf-bot/develop/
---
## Prerequisites
@ -31,6 +33,7 @@ To run the database service:
This file contains the environment variables that the containers will use.
2. In the cloned repository's directory, open the `compose.yaml` file in an IDE or text editor.
3. In the `compose.yaml` file, add the following:
- Add instructions to run a Neo4j database
- Specify the environment file under the server service in order to pass in the environment variables for the connection
@ -67,7 +70,7 @@ To run the database service:
> To learn more about Neo4j, see the [Neo4j Official Docker Image](https://hub.docker.com/_/neo4j).
4. Run the application. Inside the `docker-genai-sample` directory,
run the following command in a terminal.
```console
$ docker compose up --build
@ -80,12 +83,14 @@ run the following command in a terminal.
## Add a local or remote LLM service
The sample application supports both [Ollama](https://ollama.ai/) and [OpenAI](https://openai.com/). This guide provides instructions for the following scenarios:
- Run Ollama in a container
- Run Ollama outside of a container
- Use OpenAI
While all platforms can use any of the previous scenarios, the performance and
GPU support may vary. You can use the following guidelines to help you choose the appropriate option:
- Run Ollama in a container if you're on Linux with a native installation of Docker Engine, or on Windows 10/11 with Docker Desktop, you
have a CUDA-supported GPU, and your system has at least 8 GB of RAM.
- Run Ollama outside of a container if you're on an Apple silicon Mac.
@ -99,6 +104,7 @@ Choose one of the following options for your LLM service.
When running Ollama in a container, you should have a CUDA-supported GPU. While you can run Ollama in a container without a supported GPU, the performance may not be acceptable. Only Linux and Windows 11 support GPU access to containers.
To run Ollama in a container and provide GPU access:
1. Install the prerequisites.
- For Docker Engine on Linux, install the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-container-toolkit).
- For Docker Desktop on Windows 10/11, install the latest [NVIDIA driver](https://www.nvidia.com/Download/index.aspx) and make sure you are using the [WSL2 backend](/manuals/desktop/wsl/_index.md#turn-on-docker-desktop-wsl-2)
@ -125,7 +131,11 @@ To run Ollama in a container and provide GPU access:
environment:
- NEO4J_AUTH=${NEO4J_USERNAME}/${NEO4J_PASSWORD}
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
test:
[
"CMD-SHELL",
"wget --no-verbose --tries=1 --spider localhost:7474 || exit 1",
]
interval: 5s
timeout: 3s
retries: 5
@ -181,6 +191,7 @@ To run Ollama in a container and provide GPU access:
{{< tab name="Run Ollama outside of a container" >}}
To run Ollama outside of a container:
1. [Install](https://github.com/jmorganca/ollama) and run Ollama on your host
machine.
2. Update the `OLLAMA_BASE_URL` value in your `.env` file to
@ -208,6 +219,7 @@ To run Ollama outside of a container:
## Run your GenAI application
At this point, you have the following services in your Compose file:
- Server service for your main GenAI application
- Database service to store vectors in a Neo4j database
- (optional) Ollama service to run the LLM
@ -237,11 +249,12 @@ In this section, you learned how to set up a development environment to provide
access all the services that your GenAI application needs.
Related information:
- [Dockerfile reference](../../../reference/dockerfile.md)
- [Compose file reference](/reference/compose-file/_index.md)
- [Ollama Docker image](https://hub.docker.com/r/ollama/ollama)
- [Neo4j Official Docker Image](https://hub.docker.com/_/neo4j)
- [GenAI Stack demo applications](https://github.com/docker/genai-stack)
- [Dockerfile reference](../../../reference/dockerfile.md)
- [Compose file reference](/reference/compose-file/_index.md)
- [Ollama Docker image](https://hub.docker.com/r/ollama/ollama)
- [Neo4j Official Docker Image](https://hub.docker.com/_/neo4j)
- [GenAI Stack demo applications](https://github.com/docker/genai-stack)
## Next steps
View File
@ -2,13 +2,15 @@
title: GenAI video transcription and chat
linkTitle: Video transcription and chat
description: Explore a generative AI video analysis app that uses Docker, OpenAI, and Pinecone.
keywords: python, generative ai, genai, llm, whisper, pinecone, openai, whisper
keywords: python, generative ai, genai, llm, whisper, pinecone, openai
summary: |
Learn how to build and deploy a generative AI video bot using Docker, with
step-by-step instructions for setup, integration, and optimization to enhance
your AI development projects.
subjects: [ai]
levels: [beginner]
aliases:
- /guides/use-case/genai-video-bot/
params:
time: 20 minutes
---
@ -20,6 +22,7 @@ technologies related to the
[GenAI Stack](https://www.docker.com/blog/introducing-a-new-genai-stack/).
The project showcases the following technologies:
- [Docker and Docker Compose](#docker-and-docker-compose)
- [OpenAI](#openai-api)
- [Whisper](#whisper)
@ -42,7 +45,6 @@ The project showcases the following technologies:
>
> OpenAI is a third-party hosted service and [charges](https://openai.com/pricing) may apply.
- You have a [Pinecone API Key](https://app.pinecone.io/).
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
@ -56,10 +58,13 @@ addition, it provides timestamps from the video that can help you find the sourc
1. Clone the sample application's repository. In a terminal, run the following
command.
```console
$ git clone https://github.com/Davidnet/docker-genai.git
```
The project contains the following directories and files:
```text
├── docker-genai/
│ ├── docker-bot/
@ -88,9 +93,11 @@ addition, it provides timestamps from the video that can help you find the sourc
3. Build and run the application. In a terminal, change directory to your
`docker-genai` directory and run the following command.
```console
$ docker compose up --build
```
Docker Compose builds and runs the application based on the services defined
in the `docker-compose.yaml` file. When the application is running, you'll
see the logs of 2 services in the terminal.
@ -150,9 +157,9 @@ how to use the service.
The answer to that question exists in the video processed in the previous
example,
[https://www.youtube.com/watch?v=yaQZFhrW0fU](https://www.youtube.com/watch?v=yaQZFhrW0fU).
![Asking a question to the Dockerbot](images/bot.webp)
In this example, the Dockerbot answers the question and
provides links to the video with timestamps, which may contain more
information about the answer.
@ -172,6 +179,7 @@ how to use the service.
## Explore the application architecture
The following image shows the application's high-level service architecture, which includes:
- yt-whisper: A local service, run by Docker Compose, that interacts with the
remote OpenAI and Pinecone services.
- dockerbot: A local service, run by Docker Compose, that interacts with the
View File
@ -11,7 +11,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/golang/
- /language/golang/
- /guides/language/golang/
languages: [go]
levels: [beginner]
params:
@ -28,24 +29,24 @@ This guide will show you how to create, test, and deploy containerized Go applic
In this guide, you'll learn how to:
* Create a `Dockerfile` which contains the instructions for building a container image for a program written in Go.
* Run the image as a container in your local Docker instance and manage the container's lifecycle.
* Use multi-stage builds for building small images efficiently while keeping your Dockerfiles easy to read and maintain.
* Use Docker Compose to orchestrate running of multiple related containers together in a development environment.
* Configure a CI/CD pipeline for your application using [GitHub Actions](https://docs.github.com/en/actions)
* Deploy your containerized Go application.
- Create a `Dockerfile` which contains the instructions for building a container image for a program written in Go.
- Run the image as a container in your local Docker instance and manage the container's lifecycle.
- Use multi-stage builds for building small images efficiently while keeping your Dockerfiles easy to read and maintain.
- Use Docker Compose to orchestrate running of multiple related containers together in a development environment.
- Configure a CI/CD pipeline for your application using [GitHub Actions](https://docs.github.com/en/actions)
- Deploy your containerized Go application.
## Prerequisites
Some basic understanding of Go and its toolchain is assumed. This isn't a Go tutorial. If you are new to the : languages:,
the [Go website](https://golang.org/) is a great place to explore,
so *go* (pun intended) check it out!
Some basic understanding of Go and its toolchain is assumed. This isn't a Go tutorial. If you are new to the language,
the [Go website](https://golang.org/) is a great place to explore,
so _go_ (pun intended) check it out!
You also must know some basic [Docker concepts](/get-started/docker-concepts/the-basics/what-is-a-container.md) as well as to
You also must know some basic [Docker concepts](/get-started/docker-concepts/the-basics/what-is-a-container.md) as well as
be at least vaguely familiar with the [Dockerfile format](/manuals/build/concepts/dockerfile.md).
Your Docker set-up must have BuildKit enabled. BuildKit is enabled by default for all users on [Docker Desktop](/manuals/desktop/_index.md).
If you have installed Docker Desktop, you dont have to manually enable BuildKit. If you are running Docker on Linux,
Your Docker set-up must have BuildKit enabled. BuildKit is enabled by default for all users on [Docker Desktop](/manuals/desktop/_index.md).
If you have installed Docker Desktop, you don't have to manually enable BuildKit. If you are running Docker on Linux,
please check out BuildKit [getting started](/manuals/build/buildkit/_index.md#getting-started) page.
Some familiarity with the command line is also expected.
View File
@ -5,8 +5,9 @@ weight: 5
keywords: containers, images, go, golang, dockerfiles, coding, build, push, run
description: Learn how to build your first Docker image by writing a Dockerfile
aliases:
- /get-started/golang/build-images/
- /language/golang/build-images/
- /get-started/golang/build-images/
- /language/golang/build-images/
- /guides/language/golang/build-images/
---
## Overview
@ -31,8 +32,8 @@ The example application is a caricature of a microservice. It is purposefully tr
The application offers two HTTP endpoints:
* It responds with a string containing a heart symbol (`<3`) to requests to `/`.
* It responds with `{"Status" : "OK"}` JSON to a request to `/health`.
- It responds with a string containing a heart symbol (`<3`) to requests to `/`.
- It responds with `{"Status" : "OK"}` JSON to a request to `/health`.
It responds with HTTP error 404 to any other request.
@ -50,7 +51,6 @@ $ git clone https://github.com/docker/docker-gs-ping
The application's `main.go` file is straightforward, if you are familiar with Go:
```go
package main
@ -99,7 +99,7 @@ func IntMin(a, b int) int {
To build a container image with Docker, a `Dockerfile` with build instructions is required.
Begin your `Dockerfile` with the (optional) parser directive line that instructs BuildKit to
Begin your `Dockerfile` with the (optional) parser directive line that instructs BuildKit to
interpret your file according to the grammar rules for the specified version of the syntax.
You then tell Docker what base image you would like to use for your application:
@ -183,7 +183,7 @@ COPY *.go ./
This `COPY` command uses a wildcard to copy all files with `.go` extension
located in the current directory on the host (the directory where the `Dockerfile`
is located) into the current directory inside the image.
is located) into the current directory inside the image.
Now, to compile your application, use the familiar `RUN` command:
@ -274,7 +274,7 @@ Build your first Docker image.
$ docker build --tag docker-gs-ping .
```
The build process will print some diagnostic messages as it goes through the build steps.
The build process will print some diagnostic messages as it goes through the build steps.
The following is just an example of what these messages may look like.
```console
@ -406,7 +406,7 @@ gigabyte, which is a lot for a tiny compiled Go application. You may also be
wondering what happened to the full suite of Go tools, including the compiler,
after you had built your image.
The answer is that the full toolchain is still there, in the container image.
The answer is that the full toolchain is still there, in the container image.
Not only is this inconvenient because of the large file size, but it may also
present a security risk when the container is deployed.
@ -423,7 +423,6 @@ other optional components.
The `Dockerfile.multistage` in the sample application's repository has the
following content:
```dockerfile
# syntax=docker/dockerfile:1
@ -457,7 +456,6 @@ USER nonroot:nonroot
ENTRYPOINT ["/docker-gs-ping"]
```
Since you have two Dockerfiles now, you have to tell Docker what Dockerfile
you'd like to use to build the image. Tag the new image with `multistage`. This
tag (like any other, apart from `latest`) has no special meaning for Docker,
@ -477,10 +475,10 @@ docker-gs-ping multistage e3fdde09f172 About a minute ago 28.1MB
docker-gs-ping latest 336a3f164d0f About an hour ago 1.11GB
```
This is so because the ["distroless"](https://github.com/GoogleContainerTools/distroless)
This is so because the ["distroless"](https://github.com/GoogleContainerTools/distroless)
base image that you have used in the second stage of the build is very barebones and is designed for lean deployments of static binaries.
There's much more to multi-stage builds, including the possibility of multi-architecture builds,
There's much more to multi-stage builds, including the possibility of multi-architecture builds,
so feel free to check out [multi-stage builds](/manuals/build/building/multi-stage.md). This is, however, not essential for your progress here.
## Next steps
View File
@ -5,7 +5,8 @@ weight: 40
keywords: go, CI/CD, local, development
description: Learn how to Configure CI/CD for your Go application
aliases:
- /language/golang/configure-ci-cd/
- /language/golang/configure-ci-cd/
- /guides/language/golang/configure-ci-cd/
---
## Prerequisites
@ -69,27 +70,24 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -123,8 +121,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
View File
@ -5,7 +5,8 @@ weight: 50
keywords: deploy, go, local, development
description: Learn how to deploy your Go application
aliases:
- /language/golang/deploy/
- /language/golang/deploy/
- /guides/language/golang/deploy/
---
## Prerequisites
@ -52,7 +53,12 @@ spec:
initContainers:
- name: wait-for-db
image: busybox:1.28
command: ['sh', '-c', 'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;']
command:
[
"sh",
"-c",
'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;',
]
containers:
- env:
- name: PGDATABASE
@ -151,14 +157,14 @@ status:
In this Kubernetes YAML file, there are four objects, separated by the `---`. In addition to a Service and Deployment for the database, the other two objects are:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Go application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Go application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
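As a rough sketch only (the metadata name and selector labels here are illustrative, not taken from the guide's actual manifest), a NodePort service with that port mapping could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: docker-gs-ping # illustrative name
spec:
  type: NodePort
  selector:
    app: docker-gs-ping # must match the Deployment's pod labels
  ports:
    - port: 8080 # service port inside the cluster
      targetPort: 8080 # container port in the pod
      nodePort: 30001 # host port exposed on the node
```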
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -223,7 +229,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
You should get the following message back.
```json
{"value":"Hello, Oliver!"}
{ "value": "Hello, Oliver!" }
```
4. Run the following command to tear down your application.
@ -237,6 +243,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
View File
@ -5,8 +5,9 @@ weight: 20
keywords: get started, go, golang, local, development
description: Learn how to develop your application locally.
aliases:
- /get-started/golang/develop/
- /language/golang/develop/
- /get-started/golang/develop/
- /language/golang/develop/
- /guides/language/golang/develop/
---
## Prerequisites
@ -94,7 +95,7 @@ $ docker run -d \
# ... output omitted ...
```
Notice a clever use of the tag `latest-v20.1` to make sure that you're pulling the latest patch version of 20.1. The diversity of available tags depend on the image maintainer. Here, your intent was to have the latest patched version of CockroachDB while not straying too far away from the known working version as the time goes by. To see the tags available for the CockroachDB image, you can go to the [CockroachDB page on Docker Hub](https://hub.docker.com/r/cockroachdb/cockroach/tags).
Notice a clever use of the tag `latest-v20.1` to make sure that you're pulling the latest patch version of 20.1. The diversity of available tags depends on the image maintainer. Here, your intent was to have the latest patched version of CockroachDB while not straying too far away from the known working version as time goes by. To see the tags available for the CockroachDB image, you can go to the [CockroachDB page on Docker Hub](https://hub.docker.com/r/cockroachdb/cockroach/tags).
### Configure the database engine
@ -123,7 +124,7 @@ $ docker exec -it roach ./cockroach sql --insecure
```
3. Give the new user the necessary permissions:
```sql
GRANT ALL ON DATABASE mydb TO totoro;
```
@ -132,7 +133,6 @@ $ docker exec -it roach ./cockroach sql --insecure
The following is an example of interaction with the SQL shell.
```console
$ sudo docker exec -it roach ./cockroach sql --insecure
#
@ -164,15 +164,14 @@ root@:26257/defaultdb> quit
oliver@hki:~$
```
### Meet the example application
Now that you have started and configured the database engine, you can switch your attention to the application.
The example application for this module is an extended version of `docker-gs-ping` application you've used in the previous modules. You have two options:
* You can update your local copy of `docker-gs-ping` to match the new extended version presented in this chapter; or
* You can clone the [docker/docker-gs-ping-dev](https://github.com/docker/docker-gs-ping-dev) repository. This latter approach is recommended.
- You can update your local copy of `docker-gs-ping` to match the new extended version presented in this chapter; or
- You can clone the [docker/docker-gs-ping-dev](https://github.com/docker/docker-gs-ping-dev) repository. This latter approach is recommended.
To checkout the example application, run:
@ -183,17 +182,17 @@ $ git clone https://github.com/docker/docker-gs-ping-dev.git
The application's `main.go` now includes database initialization code, as well as the code to implement a new business requirement:
* An HTTP `POST` request to `/send` containing a `{ "value" : string }` JSON must save the value to the database.
- An HTTP `POST` request to `/send` containing a `{ "value" : string }` JSON must save the value to the database.
You also have an update for another business requirement. The requirement was:
* The application responds with a text message containing a heart symbol ("`<3`") on requests to `/`.
- The application responds with a text message containing a heart symbol ("`<3`") on requests to `/`.
And now it's going to be:
* The application responds with the string containing the count of messages stored in the database, enclosed in the parentheses.
- The application responds with the string containing the count of messages stored in the database, enclosed in the parentheses.
Example output: `Hello, Docker! (7)`
Example output: `Hello, Docker! (7)`
The full source code listing of `main.go` follows.
@ -375,7 +374,7 @@ $ docker run -it --rm -d \
There are a few points to note about this command.
* You map container port `8080` to host port `80` this time. Thus, for `GET` requests you can get away with literally `curl localhost`:
- You map container port `8080` to host port `80` this time. Thus, for `GET` requests you can get away with literally `curl localhost`:
```console
$ curl localhost
@ -389,11 +388,11 @@ There are a few points to note about this command.
Hello, Docker! (0)
```
* The total number of stored messages is `0` for now. This is fine, because you haven't posted anything to your application yet.
* You refer to the database container by its hostname, which is `db`. This is why you had `--hostname db` when you started the database container.
- The total number of stored messages is `0` for now. This is fine, because you haven't posted anything to your application yet.
- You refer to the database container by its hostname, which is `db`. This is why you had `--hostname db` when you started the database container.
* The actual password doesn't matter, but it must be set to something to avoid confusing the example application.
* The container you've just run is named `rest-server`. These names are useful for managing the container lifecycle:
- The actual password doesn't matter, but it must be set to something to avoid confusing the example application.
- The container you've just run is named `rest-server`. These names are useful for managing the container lifecycle:
```console
# Don't do this just yet, it's only an example:
@ -414,7 +413,7 @@ $ curl --request POST \
The application responds with the contents of the message, which means it has been saved in the database:
```json
{"value":"Hello, Docker!"}
{ "value": "Hello, Docker!" }
```
Send another message:
@ -429,7 +428,7 @@ $ curl --request POST \
And again, you get the value of the message back:
```json
{"value":"Hello, Oliver!"}
{ "value": "Hello, Oliver!" }
```
Run curl and see what the message counter says:
@ -524,9 +523,8 @@ In this section, you'll create a Docker Compose file to start your `docker-gs-pi
In your application's directory, create a new text file named `docker-compose.yml` with the following content.
```yaml
version: '3.8'
version: "3.8"
services:
docker-gs-ping-roach:
@ -570,7 +568,6 @@ networks:
driver: bridge
```
This Docker Compose configuration is super convenient as you don't have to type all the parameters to pass to the `docker run` command. You can declaratively do that in the Docker Compose file. The [Docker Compose documentation pages](/manuals/compose/_index.md) are quite extensive and include a full reference for the Docker Compose file format.
### The `.env` file
@ -587,10 +584,10 @@ The exact value doesn't really matter for this example, because you run Cockroac
The file name `docker-compose.yml` is the default file name which `docker compose` command recognizes if no `-f` flag is provided. This means you can have multiple Docker Compose files if your environment has such requirements. Furthermore, Docker Compose files are... composable (pun intended), so multiple files can be specified on the command line to merge parts of the configuration together. The following list is just a few examples of scenarios where such a feature would be very useful:
* Using a bind mount for the source code for local development but not when running the CI tests;
* Switching between using a pre-built image for the frontend for some API application vs creating a bind mount for source code;
* Adding additional services for integration testing;
* And many more...
- Using a bind mount for the source code for local development but not when running the CI tests;
- Switching between using a pre-built image for the frontend for some API application vs creating a bind mount for source code;
- Adding additional services for integration testing;
- And many more...
You aren't going to cover any of these advanced use cases here.
@ -598,8 +595,8 @@ You aren't going to cover any of these advanced use cases here.
One of the really cool features of Docker Compose is [variable substitution](/reference/compose-file/interpolation.md). You can see some examples in the Compose file, `environment` section. By means of an example:
* `PGUSER=${PGUSER:-totoro}` means that inside the container, the environment variable `PGUSER` shall be set to the same value as it has on the host machine where Docker Compose is run. If there is no environment variable with this name on the host machine, the variable inside the container gets the default value of `totoro`.
* `PGPASSWORD=${PGPASSWORD:?database password not set}` means that if the environment variable `PGPASSWORD` isn't set on the host, Docker Compose will display an error. This is OK, because you don't want to hard-code default values for the password. You set the password value in the `.env` file, which is local to your machine. It is always a good idea to add `.env` to `.gitignore` to prevent the secrets being checked into the version control.
- `PGUSER=${PGUSER:-totoro}` means that inside the container, the environment variable `PGUSER` shall be set to the same value as it has on the host machine where Docker Compose is run. If there is no environment variable with this name on the host machine, the variable inside the container gets the default value of `totoro`.
- `PGPASSWORD=${PGPASSWORD:?database password not set}` means that if the environment variable `PGPASSWORD` isn't set on the host, Docker Compose will display an error. This is OK, because you don't want to hard-code default values for the password. You set the password value in the `.env` file, which is local to your machine. It is always a good idea to add `.env` to `.gitignore` to prevent the secrets being checked into the version control.
Other ways of dealing with undefined or empty values exist, as documented in the [variable substitution](/reference/compose-file/interpolation.md) section of the Docker documentation.
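Because Compose borrows this syntax from POSIX shell parameter expansion, you can preview how a value resolves in any shell before wiring it into the Compose file. A quick sketch, reusing the `PGUSER` variable from the example above:

```shell
# ${VAR:-default} in a plain POSIX shell behaves like Compose interpolation.
unset PGUSER
default_user="${PGUSER:-totoro}"   # PGUSER is unset, so the default applies
PGUSER=oliver
explicit_user="${PGUSER:-totoro}"  # PGUSER is set, so its value wins
echo "$default_user $explicit_user"
```

The `${PGPASSWORD:?message}` form works the same way in the shell: expansion fails with the given message when the variable is unset, which is the error behavior Compose reproduces.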
@ -724,8 +721,8 @@ Such distributed set-up offers interesting possibilities, such as applying Chaos
If you are interested in experimenting with CockroachDB clusters, check out:
* [Start a CockroachDB Cluster in Docker](https://www.cockroachlabs.com/docs/v20.2/start-a-local-cluster-in-docker-mac.html) article; and
* Documentation for Docker Compose keywords [`deploy`](/reference/compose-file/legacy-versions.md) and [`replicas`](/reference/compose-file/legacy-versions.md).
- [Start a CockroachDB Cluster in Docker](https://www.cockroachlabs.com/docs/v20.2/start-a-local-cluster-in-docker-mac.html) article; and
- Documentation for Docker Compose keywords [`deploy`](/reference/compose-file/legacy-versions.md) and [`replicas`](/reference/compose-file/legacy-versions.md).
### Other databases
View File
@ -5,8 +5,9 @@ weight: 10
keywords: get started, go, golang, run, container
description: Learn how to run the image as a container.
aliases:
- /get-started/golang/run-containers/
- /language/golang/run-containers/
- /get-started/golang/run-containers/
- /language/golang/run-containers/
- /guides/language/golang/run-containers/
---
## Prerequisites
View File
@ -5,8 +5,9 @@ weight: 30
keywords: build, go, golang, test
description: How to build and run your Go tests in a container
aliases:
- /get-started/golang/run-tests/
- /language/golang/run-tests/
- /get-started/golang/run-tests/
- /language/golang/run-tests/
- /guides/language/golang/run-tests/
---
## Prerequisites
View File
@ -11,7 +11,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/java/
- /language/java/
- /guides/language/java/
languages: [java]
levels: [beginner]
params:
@ -20,11 +21,11 @@ params:
The Java getting started guide teaches you how to create a containerized Spring Boot application using Docker. In this module, you'll learn how to:
* Containerize and run a Spring Boot application with Maven
* Set up a local development environment to connect a database to the container, configure a debugger, and use Compose Watch for live reload
* Run your unit tests inside a container
* Configure a CI/CD pipeline for your application using GitHub Actions
* Deploy your containerized application locally to Kubernetes to test and debug your deployment
- Containerize and run a Spring Boot application with Maven
- Set up a local development environment to connect a database to the container, configure a debugger, and use Compose Watch for live reload
- Run your unit tests inside a container
- Configure a CI/CD pipeline for your application using GitHub Actions
- Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the Java getting started modules, you should be able to containerize your own Java application based on the examples and instructions provided in this guide.
View File
@ -5,7 +5,8 @@ weight: 40
keywords: java, CI/CD, local, development
description: Learn how to Configure CI/CD for your Java application
aliases:
- /language/java/configure-ci-cd/
- /language/java/configure-ci-cd/
- /guides/language/java/configure-ci-cd/
---
## Prerequisites
@ -72,33 +73,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and test
- name: Build and test
uses: docker/build-push-action@v6
with:
target: test
load: true
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -133,8 +130,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
View File
@ -5,9 +5,10 @@ weight: 10
keywords: java, containerize, initialize, maven, build
description: Learn how to containerize a Java application.
aliases:
- /language/java/build-images/
- /language/java/run-containers/
- /language/java/containerize/
- /language/java/build-images/
- /language/java/run-containers/
- /language/java/containerize/
- /guides/language/java/containerize/
---
## Prerequisites
@ -15,6 +16,7 @@ aliases:
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
Docker adds new features regularly and some parts of this guide may
work only with the latest version of Docker Desktop.
* You have a [Git client](https://git-scm.com/downloads). The examples in this
section use a command-line based Git client, but you can use any client.
@ -76,12 +78,11 @@ exists, so `docker init` overwrites that file rather than creating a new
directory. Both names are supported, but Compose prefers the canonical
`compose.yaml`.
{{< /tab >}}
{{< tab name="Manually create assets" >}}
If you don't have Docker Desktop installed or prefer creating the assets
manually, you can create the following files in your project directory.
manually, you can create the following files in your project directory.
Create a file named `Dockerfile` with the following contents.
@ -198,7 +199,6 @@ services:
context: .
ports:
- 8080:8080
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
@ -232,7 +232,6 @@ services:
# db-password:
# file: db/password.txt
```
Create a file named `.dockerignore` with the following contents.
@ -326,7 +325,8 @@ In this section, you learned how you can containerize and run a Java
application using Docker.
Related information:
- [docker init reference](/reference/cli/docker/init/)
- [docker init reference](/reference/cli/docker/init/)
## Next steps


@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, java
description: Learn how to develop locally using Kubernetes
aliases:
- /language/java/deploy/
- /language/java/deploy/
- /guides/language/java/deploy/
---
## Prerequisites
@ -45,9 +46,9 @@ spec:
service: server
spec:
containers:
- name: server-service
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
- name: server-service
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -59,21 +60,21 @@ spec:
selector:
service: server
ports:
- port: 8080
targetPort: 8080
nodePort: 30001
- port: 8080
targetPort: 8080
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your Java application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your Java application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -132,6 +133,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
```
You should get output like the following.
```console
{"status":"UP","groups":["liveness","readiness"]}
```
@ -147,6 +149,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)


@ -5,7 +5,8 @@ weight: 20
keywords: Java, local, development, run,
description: Learn how to develop your application locally.
aliases:
- /language/java/develop/
- /language/java/develop/
- /guides/language/java/develop/
---
## Prerequisites
@ -16,11 +17,11 @@ Work through the steps to containerize your application in [Containerize your ap
In this section, youll walk through setting up a local development environment
for the application you containerized in the previous section. This includes:
- Adding a local database and persisting data
- Creating a development container to connect a debugger
- Configuring Compose to automatically update your running Compose services as
you edit and save your code
- Adding a local database and persisting data
- Creating a development container to connect a debugger
- Configuring Compose to automatically update your running Compose services as
you edit and save your code
## Add a local database and persist data
@ -29,6 +30,7 @@ You can use containers to set up local services, like a database. In this sectio
In the cloned repository's directory, open the `docker-compose.yaml` file in an IDE or text editor. Your Compose file has an example database service, but it'll require a few changes for your unique app.
In the `docker-compose.yaml` file, you need to do the following:
- Uncomment all of the database instructions. You'll now use a database service
instead of local storage for the data.
- Remove the top-level `secrets` element as well as the element inside the `db`
@ -71,7 +73,7 @@ services:
ports:
- 5432:5432
healthcheck:
test: [ "CMD", "pg_isready", "-U", "petclinic" ]
test: ["CMD", "pg_isready", "-U", "petclinic"]
interval: 10s
timeout: 5s
retries: 5
@ -84,7 +86,6 @@ update the instruction to pass in the system property as specified in the
`spring-petclinic/src/resources/db/postgres/petclinic_db_setup_postgres.txt`
file.
```diff
- ENTRYPOINT [ "java", "org.springframework.boot.loader.launch.JarLauncher" ]
+ ENTRYPOINT [ "java", "-Dspring.profiles.active=postgres", "org.springframework.boot.loader.launch.JarLauncher" ]
@ -203,7 +204,7 @@ services:
ports:
- 5432:5432
healthcheck:
test: [ "CMD", "pg_isready", "-U", "petclinic" ]
test: ["CMD", "pg_isready", "-U", "petclinic"]
interval: 10s
timeout: 5s
retries: 5
@ -228,7 +229,61 @@ $ curl --request GET \
You should receive the following response:
```json
{"vetList":[{"id":1,"firstName":"James","lastName":"Carter","specialties":[],"nrOfSpecialties":0,"new":false},{"id":2,"firstName":"Helen","lastName":"Leary","specialties":[{"id":1,"name":"radiology","new":false}],"nrOfSpecialties":1,"new":false},{"id":3,"firstName":"Linda","lastName":"Douglas","specialties":[{"id":3,"name":"dentistry","new":false},{"id":2,"name":"surgery","new":false}],"nrOfSpecialties":2,"new":false},{"id":4,"firstName":"Rafael","lastName":"Ortega","specialties":[{"id":2,"name":"surgery","new":false}],"nrOfSpecialties":1,"new":false},{"id":5,"firstName":"Henry","lastName":"Stevens","specialties":[{"id":1,"name":"radiology","new":false}],"nrOfSpecialties":1,"new":false},{"id":6,"firstName":"Sharon","lastName":"Jenkins","specialties":[],"nrOfSpecialties":0,"new":false}]}
{
"vetList": [
{
"id": 1,
"firstName": "James",
"lastName": "Carter",
"specialties": [],
"nrOfSpecialties": 0,
"new": false
},
{
"id": 2,
"firstName": "Helen",
"lastName": "Leary",
"specialties": [{ "id": 1, "name": "radiology", "new": false }],
"nrOfSpecialties": 1,
"new": false
},
{
"id": 3,
"firstName": "Linda",
"lastName": "Douglas",
"specialties": [
{ "id": 3, "name": "dentistry", "new": false },
{ "id": 2, "name": "surgery", "new": false }
],
"nrOfSpecialties": 2,
"new": false
},
{
"id": 4,
"firstName": "Rafael",
"lastName": "Ortega",
"specialties": [{ "id": 2, "name": "surgery", "new": false }],
"nrOfSpecialties": 1,
"new": false
},
{
"id": 5,
"firstName": "Henry",
"lastName": "Stevens",
"specialties": [{ "id": 1, "name": "radiology", "new": false }],
"nrOfSpecialties": 1,
"new": false
},
{
"id": 6,
"firstName": "Sharon",
"lastName": "Jenkins",
"specialties": [],
"nrOfSpecialties": 0,
"new": false
}
]
}
```
## Connect a debugger
@ -301,7 +356,7 @@ services:
ports:
- 5432:5432
healthcheck:
test: [ "CMD", "pg_isready", "-U", "petclinic" ]
test: ["CMD", "pg_isready", "-U", "petclinic"]
interval: 10s
timeout: 5s
retries: 5
@ -339,9 +394,9 @@ In this section, you took a look at running a database locally and persisting th
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose Watch](/manuals/compose/how-tos/file-watch.md)
- [Dockerfile reference](/reference/dockerfile/)
- [Compose file reference](/reference/compose-file/)
- [Compose Watch](/manuals/compose/how-tos/file-watch.md)
- [Dockerfile reference](/reference/dockerfile/)
## Next steps


@ -5,7 +5,8 @@ weight: 30
keywords: Java, build, test
description: How to build and run your Java tests
aliases:
- /language/java/run-tests/
- /language/java/run-tests/
- /guides/language/java/run-tests/
---
## Prerequisites
@ -103,6 +104,7 @@ $ docker build -t java-docker-image-test --progress=plain --no-cache --target=te
```
You should see output containing the following
```console
...


@ -11,6 +11,8 @@ summary: |
languages: [python]
levels: [beginner]
subjects: [data-science]
aliases:
- /guides/use-case/jupyter/
params:
time: 20 minutes
---
@ -53,6 +55,7 @@ In a terminal, run the following command to run your JupyterLab container.
```console
$ docker run --rm -p 8889:8888 quay.io/jupyter/base-notebook start-notebook.py --NotebookApp.token='my-token'
```
The following are the notable parts of the command:
- `-p 8889:8888`: Maps port 8889 from the host to port 8888 on the container.
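The `HOST:CONTAINER` convention can be sketched with a toy parser (an illustration only, not Docker's actual flag handling):

```python
def parse_port_mapping(spec: str) -> tuple[int, int]:
    """Split a '-p' value like '8889:8888' into (host_port, container_port)."""
    host, container = spec.split(":")
    return int(host), int(container)

# The mapping from the command above: host port 8889 forwards to container port 8888.
print(parse_port_mapping("8889:8888"))
```

With this mapping, requests to `localhost:8889` on the host reach the notebook server listening on port 8888 inside the container.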
@ -158,6 +161,7 @@ For this example, you'll use the [Iris Dataset](https://scikit-learn.org/stable/
4. Select the play button to run the code.
5. In the notebook, specify the following code.
```python
from sklearn import datasets
@ -171,6 +175,7 @@ For this example, you'll use the [Iris Dataset](https://scikit-learn.org/stable/
scatter.legend_elements()[0], iris.target_names, loc="lower right", title="Classes"
)
```
6. Select the play button to run the code. You should see a scatter plot of the
Iris dataset.
@ -242,7 +247,7 @@ located, and then run the following command.
$ docker build -t my-jupyter-image .
```
The command builds a Docker image from your `Dockerfile` and a context. The
The command builds a Docker image from your `Dockerfile` and a context. The
`-t` option specifies the name and tag of the image, in this case
`my-jupyter-image`. The `.` indicates that the current directory is the context,
which means that the files in that directory can be used in the image creation
@ -384,6 +389,7 @@ $ docker run --rm -p 8889:8888 YOUR-USER-NAME/my-jupyter-image start-notebook.py
This example uses the Docker Desktop [Volumes Backup & Share](https://hub.docker.com/extensions/docker/volumes-backup-extension) extension. Alternatively, in the CLI you can [back up the volume](/engine/storage/volumes/#back-up-a-volume) and then [push it using the ORAS CLI](/manuals/docker-hub/oci-artifacts.md#push-a-volume).
1. Install the Volumes Backup & Share extension.
1. Open the Docker Dashboard and select **Extensions**.
2. Search for `Volumes Backup & Share`.
3. In the search results select **Install** for the extension.


@ -10,13 +10,15 @@ summary: |
subjects: [distributed-systems]
languages: [js]
levels: [intermediate]
aliases:
- /guides/use-case/kafka/
params:
time: 20 minutes
---
With the rise of microservices, event-driven architectures have become increasingly popular.
[Apache Kafka](https://kafka.apache.org/), a distributed event streaming platform, is often at the
heart of these architectures. Unfortunately, setting up and deploying your own Kafka instance for development
With the rise of microservices, event-driven architectures have become increasingly popular.
[Apache Kafka](https://kafka.apache.org/), a distributed event streaming platform, is often at the
heart of these architectures. Unfortunately, setting up and deploying your own Kafka instance for development
is often tricky. Fortunately, Docker and containers make this much easier.
In this guide, you will learn how to:
@ -34,7 +36,6 @@ The following prerequisites are required to follow along with this how-to guide:
- [Node.js](https://nodejs.org/en/download/package-manager) and [yarn](https://yarnpkg.com/)
- Basic knowledge of Kafka and Docker
## Launching Kafka
Beginning with [Kafka 3.3](https://www.confluent.io/blog/apache-kafka-3-3-0-new-features-and-updates/), the deployment of Kafka was greatly simplified by no longer requiring Zookeeper thanks to KRaft (Kafka Raft). With KRaft, setting up a Kafka instance for local development is much easier. Starting with the launch of [Kafka 3.8](https://www.confluent.io/blog/introducing-apache-kafka-3-8/), a new [kafka-native](https://hub.docker.com/r/apache/kafka-native) Docker image is now available, providing a significantly faster startup and lower memory footprint.
@ -49,60 +50,60 @@ Start a basic Kafka cluster by doing the following steps. This example will laun
1. Start a Kafka container by running the following command:
```console
$ docker run -d --name=kafka -p 9092:9092 apache/kafka
```
```console
$ docker run -d --name=kafka -p 9092:9092 apache/kafka
```
2. Once the image pulls, youll have a Kafka instance up and running within a second or two.
3. The apache/kafka image ships with several helpful scripts in the `/opt/kafka/bin` directory. Run the following command to verify the cluster is up and running and get its cluster ID:
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-cluster.sh cluster-id --bootstrap-server :9092
```
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-cluster.sh cluster-id --bootstrap-server :9092
```
Doing so will produce output similar to the following:
Doing so will produce output similar to the following:
```plaintext
Cluster ID: 5L6g3nShT-eMCtK--X86sw
```
```plaintext
Cluster ID: 5L6g3nShT-eMCtK--X86sw
```
4. Create a sample topic and produce (or publish) a few messages by running the following command:
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
```
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
```
After running, you can enter a message per line. For example, enter a few messages, one per line. A few examples might be:
After running, you can enter a message per line. For example, enter a few messages, one per line. A few examples might be:
```plaintext
First message
```
```plaintext
First message
```
And
```plaintext
Second message
```
And
Press `enter` to send the last message and then press ctrl+c when youre done. The messages will be published to Kafka.
```plaintext
Second message
```
Press `enter` to send the last message and then press ctrl+c when youre done. The messages will be published to Kafka.
5. Confirm the messages were published into the cluster by consuming the messages:
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server :9092 --topic demo --from-beginning
```
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server :9092 --topic demo --from-beginning
```
You should then see your messages in the output:
You should then see your messages in the output:
```plaintext
First message
Second message
```
```plaintext
First message
Second message
```
If you want, you can open another terminal and publish more messages and see them appear in the consumer.
If you want, you can open another terminal and publish more messages and see them appear in the consumer.
When youre done, hit ctrl+c to stop consuming messages.
When youre done, hit ctrl+c to stop consuming messages.
You have a locally running Kafka cluster and have validated you can connect to it.
@ -114,47 +115,47 @@ Since the cluster is running locally and is exposed at port 9092, the app can co
1. If you dont have the Kafka cluster running from the previous step, run the following command to start a Kafka instance:
```console
$ docker run -d --name=kafka -p 9092:9092 apache/kafka
```
```console
$ docker run -d --name=kafka -p 9092:9092 apache/kafka
```
2. Clone the [GitHub repository](https://github.com/dockersamples/kafka-development-node) locally.
```console
$ git clone https://github.com/dockersamples/kafka-development-node.git
```
```console
$ git clone https://github.com/dockersamples/kafka-development-node.git
```
3. Navigate into the project.
```console
cd kafka-development-node/app
```
```console
cd kafka-development-node/app
```
4. Install the dependencies using yarn.
```console
$ yarn install
```
```console
$ yarn install
```
5. Start the application using `yarn dev`. This will set the `NODE_ENV` environment variable to `development` and use `nodemon` to watch for file changes.
```console
$ yarn dev
```
```console
$ yarn dev
```
6. With the application now running, it will log received messages to the console. In a new terminal, publish a few messages using the following command:
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
```
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
```
And then send a message to the cluster:
And then send a message to the cluster:
```plaintext
Test message
```
```plaintext
Test message
```
Remember to press `ctrl+c` when youre done to stop producing messages.
Remember to press `ctrl+c` when youre done to stop producing messages.
## Connecting to Kafka from both containers and native apps
@ -179,7 +180,7 @@ Since there are two different methods clients need to connect, two different lis
![Diagram showing the DOCKER and HOST listeners and how they are exposed to the host and Docker networks](./images/kafka-1.webp)
In order to set this up, the `compose.yaml` for Kafka needs some additional configuration. Once you start overriding some of the defaults, you also need to specify a few other options in order for KRaft mode to work.
In order to set this up, the `compose.yaml` for Kafka needs some additional configuration. Once you start overriding some of the defaults, you also need to specify a few other options in order for KRaft mode to work.
```yaml
services:
@ -212,21 +213,21 @@ Give it a try using the steps below.
2. If you have the Kafka cluster running from the previous section, go ahead and stop that container using the following command:
```console
$ docker rm -f kafka
```
```console
$ docker rm -f kafka
```
3. Start the Compose stack by running the following command at the root of the cloned project directory:
```console
$ docker compose up
```
```console
$ docker compose up
```
After a moment, the application will be up and running.
After a moment, the application will be up and running.
4. The stack also includes another service that you can use to publish messages. Open it by going to [http://localhost:3000](http://localhost:3000). As you type in a message and submit the form, you should see a log entry for the message being received by the app.
This helps demonstrate how a containerized approach makes it easy to add additional services to help test and troubleshoot your application.
This helps demonstrate how a containerized approach makes it easy to add additional services to help test and troubleshoot your application.
## Adding cluster visualization
@ -241,7 +242,7 @@ services:
ports:
- 8080:8080
environment:
DYNAMIC_CONFIG_ENABLED: 'true'
DYNAMIC_CONFIG_ENABLED: "true"
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9093
depends_on:
@ -258,4 +259,4 @@ If youre interested in learning how you can integrate Kafka easily into your
By using Docker, you can simplify the process of developing and testing event-driven applications with Kafka. Containers simplify the process of setting up and deploying the various services you need to develop. And once theyre defined in Compose, everyone on the team can benefit from the ease of use.
In case you missed it earlier, all of the sample app code can be found at dockersamples/kafka-development-node.
In case you missed it earlier, all of the sample app code can be found at dockersamples/kafka-development-node.


@ -3,7 +3,8 @@ title: Deploy to Kubernetes
keywords: kubernetes, pods, deployments, kubernetes services
description: Learn how to describe and deploy a simple application on Kubernetes.
aliases:
- /get-started/kube-deploy/
- /get-started/kube-deploy/
- /guides/deployment-orchestration/kube-deploy/
summary: |
Learn how to deploy and orchestrate Docker containers using Kubernetes, with
step-by-step guidance on setup, configuration, and best practices to enhance
@ -19,7 +20,7 @@ params:
- Download and install Docker Desktop as described in [Get Docker](/get-started/get-docker.md).
- Work through containerizing an application in [Part 2](02_our_app.md).
- Make sure that Kubernetes is turned on in Docker Desktop:
If Kubernetes isn't running, follow the instructions in [Orchestration](orchestration.md) to finish setting it up.
If Kubernetes isn't running, follow the instructions in [Orchestration](orchestration.md) to finish setting it up.
## Introduction
@ -37,43 +38,45 @@ You already wrote a basic Kubernetes YAML file in the Orchestration overview par
apiVersion: apps/v1
kind: Deployment
metadata:
name: bb-demo
namespace: default
name: bb-demo
namespace: default
spec:
replicas: 1
selector:
matchLabels:
bb: web
template:
metadata:
labels:
bb: web
spec:
containers:
- name: bb-site
image: getting-started
imagePullPolicy: Never
replicas: 1
selector:
matchLabels:
bb: web
template:
metadata:
labels:
bb: web
spec:
containers:
- name: bb-site
image: getting-started
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: bb-entrypoint
namespace: default
name: bb-entrypoint
namespace: default
spec:
type: NodePort
selector:
bb: web
ports:
- port: 3000
targetPort: 3000
nodePort: 30001
type: NodePort
selector:
bb: web
ports:
- port: 3000
targetPort: 3000
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A `Deployment`, describing a scalable group of identical pods. In this case, you'll get just one `replica`, or copy of your pod, and that pod (which is described under the `template:` key) has just one container in it, based off of your `getting-started` image from the previous step in this tutorial.
- A `NodePort` service, which will route traffic from port 30001 on your host to port 3000 inside the pods it routes to, allowing you to reach your Todo app from the network.
Also, notice that while Kubernetes YAML can appear long and complicated at first, it almost always follows the same pattern:
Also, notice that while Kubernetes YAML can appear long and complicated at first, it almost always follows the same pattern:
- The `apiVersion`, which indicates the Kubernetes API that parses this object
- The `kind` indicating what sort of object this is
- Some `metadata` applying things like names to your objects
@ -83,49 +86,49 @@ In this Kubernetes YAML file, there are two objects, separated by the `---`:
1. In a terminal, navigate to where you created `bb.yaml` and deploy your application to Kubernetes:
```console
$ kubectl apply -f bb.yaml
```
```console
$ kubectl apply -f bb.yaml
```
You should see output that looks like the following, indicating your Kubernetes objects were created successfully:
You should see output that looks like the following, indicating your Kubernetes objects were created successfully:
```shell
deployment.apps/bb-demo created
service/bb-entrypoint created
```
```shell
deployment.apps/bb-demo created
service/bb-entrypoint created
```
2. Make sure everything worked by listing your deployments:
```console
$ kubectl get deployments
```
```console
$ kubectl get deployments
```
If all is well, your deployment should be listed as follows:
If all is well, your deployment should be listed as follows:
```shell
NAME READY UP-TO-DATE AVAILABLE AGE
bb-demo 1/1 1 1 40s
```
```shell
NAME READY UP-TO-DATE AVAILABLE AGE
bb-demo 1/1 1 1 40s
```
This indicates that the single pod you asked for in your YAML is up and running. Do the same check for your services:
This indicates that the single pod you asked for in your YAML is up and running. Do the same check for your services:
```console
$ kubectl get services
```console
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bb-entrypoint NodePort 10.106.145.116 <none> 3000:30001/TCP 53s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 138d
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bb-entrypoint NodePort 10.106.145.116 <none> 3000:30001/TCP 53s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 138d
```
In addition to the default `kubernetes` service, we see our `bb-entrypoint` service, accepting traffic on port 30001/TCP.
In addition to the default `kubernetes` service, we see our `bb-entrypoint` service, accepting traffic on port 30001/TCP.
3. Open a browser and visit your Todo app at `localhost:30001`. You should see your Todo application, the same as when you ran it as a stand-alone container in [Part 2](02_our_app.md) of the tutorial.
4. Once satisfied, tear down your application:
```console
$ kubectl delete -f bb.yaml
```
```console
$ kubectl delete -f bb.yaml
```
## Conclusion
@ -137,6 +140,6 @@ In addition to deploying to Kubernetes, you have also described your application
Further documentation for all new Kubernetes objects used in this article are available here:
- [Kubernetes Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [Kubernetes Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
- [Kubernetes Services](https://kubernetes.io/docs/concepts/services-networking/service/)
- [Kubernetes Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [Kubernetes Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
- [Kubernetes Services](https://kubernetes.io/docs/concepts/services-networking/service/)


@ -10,6 +10,8 @@ summary: |
levels: [beginner]
subjects: [ai]
languages: [python]
aliases:
- /guides/use-case/nlp/language-translation/
params:
time: 20 minutes
---
@ -28,8 +30,8 @@ methods as detect and translate.
## Prerequisites
* You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
* You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Get the sample application
@ -43,6 +45,7 @@ methods as detect and translate.
2. Verify that you cloned the repository.
You should see the following files in your `Docker-NLP` directory.
```text
01_sentiment_analysis.py
02_name_entity_recognition.py
@ -66,15 +69,17 @@ in a text or code editor to explore its contents in the following steps.
```python
from googletrans import Translator
```
This line imports the `Translator` class from `googletrans`.
Googletrans is a Python library that provides an interface to Google
Translate's AJAX API.
2. Specify the main execution block.
```python
if __name__ == "__main__":
```
This Python idiom ensures that the following code block runs only if this
script is the main program. It provides flexibility, allowing the script to
function both as a standalone program and as an imported module.
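A minimal sketch of the idiom (the `greet` function is a hypothetical example, not part of the sample app):

```python
def greet(name: str) -> str:
    """A small helper that works the same whether imported or run directly."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Runs only when this file is executed as a script,
    # not when another module imports `greet` from it.
    print(greet("Docker"))
```

Importing this file from another module defines `greet` without printing anything; running it directly executes the guarded block.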
@ -112,7 +117,7 @@ in a text or code editor to explore its contents in the following steps.
Here, the `translator.translate` method is called with the user input. The
`dest='fr'` argument specifies that the destination language for translation
is French. The `.text` attribute gets the translated string. For more details
about the available language codes, see the
about the available language codes, see the
[Googletrans docs](https://py-googletrans.readthedocs.io/en/latest/).
6. Print the original and translated text.
@ -237,10 +242,10 @@ The following steps explain each part of the `Dockerfile`. For more details, see
ENTRYPOINT ["/app/entrypoint.sh"]
```
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
You can explore the `entrypoint.sh` script by opening it in a code or text
editor. As the sample contains several applications, the script lets you
specify which application to run when the container starts.
@ -293,12 +298,12 @@ To run the application using Docker:
- `docker run`: This is the primary command used to run a new container from
a Docker image.
- `-it`: This is a combination of two options:
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `basic-nlp`: This specifies the name of the Docker image to use for
creating the container. In this case, it's the image named `basic-nlp` that
you created with the `docker build` command.
@ -339,10 +344,10 @@ Docker.
Related information:
* [Docker CLI reference](/reference/cli/docker/)
* [Dockerfile reference](/reference/dockerfile/)
* [Googletrans](https://github.com/ssut/py-googletrans)
* [Python documentation](https://docs.python.org/3/)
- [Docker CLI reference](/reference/cli/docker/)
- [Dockerfile reference](/reference/dockerfile/)
- [Googletrans](https://github.com/ssut/py-googletrans)
- [Python documentation](https://docs.python.org/3/)
## Next steps


@ -11,6 +11,8 @@ summary: |
subjects: [ai]
languages: [python]
levels: [beginner]
aliases:
- /guides/use-case/nlp/named-entity-recognition/
params:
time: 20 minutes
---
@ -25,8 +27,8 @@ The application processes input text to identify and print named entities, like
## Prerequisites
* You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
* You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Get the sample application
@ -62,7 +64,7 @@ The source code for the name recognition application is in the `Docker-NLP/02_na
```python
import spacy
```
This line imports the `spaCy` library. `spaCy` is a popular library in Python
used for natural language processing (NLP).
@ -71,7 +73,7 @@ The source code for the name recognition application is in the `Docker-NLP/02_na
```python
nlp = spacy.load("en_core_web_sm")
```
Here, the `spacy.load` function loads a language model. The `en_core_web_sm`
model is a small English language model. You can use this model for various
NLP tasks, including tokenization, part-of-speech tagging, and named entity
@ -130,7 +132,6 @@ The source code for the name recognition application is in the `Docker-NLP/02_na
- `for ent in doc.ents:`: This loop iterates over the entities found in the text.
- `print(f"Entity: {ent.text}, Type: {ent.label_}")`: For each entity, it prints the entity text and its type (like PERSON, ORG, or GPE).
8. Create `requirements.txt`.
The sample application already contains the `requirements.txt` file to specify the necessary packages that the application imports. Open `requirements.txt` in a code or text editor to explore its contents.
@ -245,10 +246,10 @@ The following steps explain each part of the `Dockerfile`. For more details, see
ENTRYPOINT ["/app/entrypoint.sh"]
```
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
You can explore the `entrypoint.sh` script by opening it in a code or text
editor. As the sample contains several applications, the script lets you
specify which application to run when the container starts.
@ -301,21 +302,20 @@ To run the application using Docker:
- `docker run`: This is the primary command used to run a new container from
a Docker image.
- `-it`: This is a combination of two options:
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `basic-nlp`: This specifies the name of the Docker image to use for
creating the container. In this case, it's the image named `basic-nlp` that
you created with the `docker build` command.
- `02_name_entity_recognition.py`: This is the script you want to run inside
the Docker container. It gets passed to the `entrypoint.sh` script, which
runs it when the container starts.
For more details, see the [docker run CLI reference](/reference/cli/docker/container/run/).
> [!NOTE]
>
@ -335,7 +335,7 @@ To run the application using Docker:
```console
Enter the text for entity recognition (type 'exit' to end): Apple Inc. is planning to open a new store in San Francisco. Tim Cook is the CEO of Apple.
Entity: Apple Inc., Type: ORG
Entity: San Francisco, Type: GPE
Entity: Tim Cook, Type: PERSON
@ -350,10 +350,10 @@ and then set up the environment and run the application using Docker.
Related information:
- [Docker CLI reference](/reference/cli/docker/)
- [Dockerfile reference](/reference/dockerfile/)
- [spaCy](https://spacy.io/)
- [Python documentation](https://docs.python.org/3/)
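The entity loop from step 7 prints one line per entity in a fixed `Entity: ..., Type: ...` format. The toy sketch below makes that format concrete without spaCy or the `en_core_web_sm` model — the `Entity` tuple and the `KNOWN` lookup table are invented stand-ins for illustration only, not part of the sample application.

```python
from collections import namedtuple

# Toy stand-in for a spaCy entity span; real spans expose .text and .label_.
Entity = namedtuple("Entity", ["text", "label_"])

# Hypothetical pre-labeled entities, mimicking what nlp(text).ents might yield.
KNOWN = {"Apple Inc.": "ORG", "San Francisco": "GPE", "Tim Cook": "PERSON"}

def fake_ents(text):
    """Return the known entities that occur in the text, in table order."""
    return [Entity(name, label) for name, label in KNOWN.items() if name in text]

text = "Apple Inc. is planning to open a new store in San Francisco."
for ent in fake_ents(text):
    print(f"Entity: {ent.text}, Type: {ent.label_}")
```

Running this prints lines in the same format as the containerized app's output shown above, without needing any NLP model installed.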
## Next steps
View File
@ -11,7 +11,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/nodejs/
- /guides/language/nodejs/
languages: [js]
levels: [beginner]
params:
@ -20,10 +21,10 @@ params:
The Node.js language-specific guide teaches you how to containerize a Node.js application using Docker. In this guide, you'll learn how to:
- Containerize and run a Node.js application
- Set up a local environment to develop a Node.js application using containers
- Run tests for a Node.js application using containers
- Configure a CI/CD pipeline for a containerized Node.js application using GitHub Actions
- Deploy your containerized Node.js application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing Node.js application.
View File
@ -5,7 +5,8 @@ weight: 40
keywords: ci/cd, github actions, node.js, node
description: Learn how to configure CI/CD using GitHub Actions for your Node.js application.
aliases:
- /language/nodejs/configure-ci-cd/
- /guides/language/nodejs/configure-ci-cd/
---
## Prerequisites
@ -69,33 +70,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and test
uses: docker/build-push-action@v6
with:
target: test
load: true
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64/v8
@ -103,7 +100,7 @@ to Docker Hub.
target: prod
tags: ${{ vars.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
```
refer to the [GitHub Action README](https://github.com/docker/build-push-action/blob/master/README.md).
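As an aside, the `tags` expression in the workflow above resolves to a plain `username/repository:latest` string once GitHub substitutes the variables. A quick sketch of that substitution, using hypothetical stand-in values rather than anything read from a real repository:

```python
# Hypothetical stand-ins for ${{ vars.DOCKER_USERNAME }} and
# ${{ github.event.repository.name }} in the workflow above.
docker_username = "exampleuser"
repo_name = "docker-nodejs-sample"

# Mirrors the workflow's tags expression:
# ${{ vars.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
tag = f"{docker_username}/{repo_name}:latest"
print(tag)  # exampleuser/docker-nodejs-sample:latest
```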
@ -130,8 +127,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your Node.js application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
View File
@ -9,13 +9,14 @@ aliases:
- /language/nodejs/build-images/
- /language/nodejs/run-containers/
- /language/nodejs/containerize/
- /guides/language/nodejs/containerize/
---
## Prerequisites
- You have installed the latest version of [Docker
Desktop](/get-started/get-docker.md).
- You have a [git client](https://git-scm.com/downloads). The examples in this
section use a command-line based git client, but you can use any client.
## Overview
@ -135,7 +136,6 @@ services:
NODE_ENV: production
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
@ -212,7 +212,6 @@ README.md
{{< /tab >}}
{{< /tabs >}}
You should now have at least the following contents in your
`docker-nodejs-sample` directory.
@ -230,10 +229,10 @@ You should now have at least the following contents in your
```
To learn more about the files, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -277,9 +276,10 @@ In this section, you learned how you can containerize and run your Node.js
application using Docker.
Related information:
- [Dockerfile reference](/reference/dockerfile.md)
- [.dockerignore file reference](/reference/dockerfile.md#dockerignore-file)
- [Docker Compose overview](/manuals/compose/_index.md)
## Next steps
View File
@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, node, node.js
description: Learn how to deploy locally to test and debug your Kubernetes deployment
aliases:
- /language/nodejs/deploy/
- /guides/language/nodejs/deploy/
---
## Prerequisites
@ -45,9 +46,9 @@ spec:
todo: web
spec:
containers:
- name: todo-site
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -59,21 +60,21 @@ spec:
selector:
todo: web
ports:
- port: 3000
targetPort: 3000
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Node.js application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3000 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -136,6 +137,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
View File
@ -5,8 +5,9 @@ weight: 20
keywords: node, node.js, development
description: Learn how to develop your Node.js application locally using containers.
aliases:
- /get-started/nodejs/develop/
- /language/nodejs/develop/
- /guides/language/nodejs/develop/
---
## Prerequisites
@ -16,9 +17,10 @@ Complete [Containerize a Node.js application](containerize.md).
## Overview
In this section, you'll learn how to set up a development environment for your containerized application. This includes:
- Adding a local database and persisting data
- Configuring your container to run a development environment
- Debugging your containerized application
## Add a local database and persist data
@ -50,14 +52,14 @@ You can use containers to set up local services, like a database. In this sectio
NODE_ENV: production
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker-compose up`.
depends_on:
db:
condition: service_healthy
@ -75,7 +77,7 @@ You can use containers to set up local services, like a database. In this sectio
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -91,11 +93,10 @@ You can use containers to set up local services, like a database. In this sectio
> To learn more about the instructions in the Compose file, see [Compose file
> reference](/reference/compose-file/).
3. Open `src/persistence/postgres.js` in an IDE or text editor. You'll notice
that this application uses a Postgres database and requires some environment
variables in order to connect to the database. The `compose.yaml` file doesn't
have these variables defined yet.
4. Add the environment variables that specify the database configuration. The
following is the updated `compose.yaml` file.
@ -121,14 +122,14 @@ have these variables defined yet.
POSTGRES_DB: example
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker-compose up`.
depends_on:
db:
condition: service_healthy
@ -146,7 +147,7 @@ have these variables defined yet.
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -181,14 +182,14 @@ have these variables defined yet.
POSTGRES_DB: example
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker-compose up`.
depends_on:
db:
condition: service_healthy
@ -208,7 +209,7 @@ have these variables defined yet.
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -222,7 +223,7 @@ have these variables defined yet.
6. In the `docker-nodejs-sample` directory, create a directory named `db`.
7. In the `db` directory, create a file named `password.txt`. This file will
contain your database password.
You should now have at least the following contents in your
`docker-nodejs-sample` directory.
@ -376,7 +377,7 @@ services:
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -420,9 +421,10 @@ database and persist data. You also learned how to create a multi-stage
Dockerfile and set up a bind mount for development.
Related information:
- [Volumes top-level element](/reference/compose-file/volumes/)
- [Services top-level element](/reference/compose-file/services/)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps
View File
@ -5,7 +5,8 @@ weight: 30
keywords: node.js, node, test
description: Learn how to run your Node.js tests in a container.
aliases:
- /language/nodejs/run-tests/
- /guides/language/nodejs/run-tests/
---
## Prerequisites
@ -165,7 +166,8 @@ You should see output containing the following.
In this section, you learned how to run tests when developing locally using Compose and how to run tests when building your image.
Related information:
- [docker compose run](/reference/cli/docker/compose/run/)
## Next steps
View File
@ -3,7 +3,8 @@ title: Deployment and orchestration
keywords: orchestration, deploy, kubernetes, swarm
description: Get oriented on some basics of Docker and install Docker Desktop.
aliases:
- /get-started/orchestration/
- /guides/deployment-orchestration/orchestration/
summary: |
Explore the essentials of container orchestration with Docker, including key
concepts, tools like Kubernetes and Docker Swarm, and practical guides to
@ -45,7 +46,7 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
2. Select the checkbox labeled **Enable Kubernetes**, and select **Apply & Restart**. Docker Desktop automatically sets up Kubernetes for you. You'll know that Kubernetes has been successfully enabled when you see a green light beside 'Kubernetes _running_' in **Settings**.
3. To confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
```yaml
apiVersion: v1
@ -54,20 +55,20 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
name: demo
spec:
containers:
- name: testpod
image: alpine:latest
command: ["ping", "8.8.8.8"]
```
This describes a pod with a single container, isolating a simple ping to 8.8.8.8.
4. In a terminal, navigate to where you created `pod.yaml` and create your pod:
```console
$ kubectl apply -f pod.yaml
```
5. Check that your pod is up and running:
```console
$ kubectl get pods
@ -80,7 +81,7 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
demo 1/1 Running 0 4s
```
6. Check that you get the logs you'd expect for a ping process:
```console
$ kubectl logs demo
@ -96,7 +97,7 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
...
```
7. Finally, tear down your test pod:
```console
$ kubectl delete -f pod.yaml
@ -113,60 +114,60 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
3. To confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: demo
spec:
containers:
- name: testpod
image: alpine:latest
command: ["ping", "8.8.8.8"]
```
This describes a pod with a single container, isolating a simple ping to 8.8.8.8.
4. In PowerShell, navigate to where you created `pod.yaml` and create your pod:
```console
$ kubectl apply -f pod.yaml
```
5. Check that your pod is up and running:
```console
$ kubectl get pods
```
You should see something like:
```shell
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 0 4s
```
6. Check that you get the logs you'd expect for a ping process:
```console
$ kubectl logs demo
```
You should see the output of a healthy ping process:
```shell
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=37 time=21.393 ms
64 bytes from 8.8.8.8: seq=1 ttl=37 time=15.320 ms
64 bytes from 8.8.8.8: seq=2 ttl=37 time=11.111 ms
...
```
7. Finally, tear down your test pod:
```console
$ kubectl delete -f pod.yaml
```
{{< /tab >}}
{{< /tabs >}}
@ -182,62 +183,62 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
1. Open a terminal, and initialize Docker Swarm mode:
```console
$ docker swarm init
```
If all goes well, you should see a message similar to the following:

```shell
Swarm initialized: current node (tjjggogqpnpj2phbfbz8jd5oq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3e0hh0jd5t4yjg209f4g5qpowbsczfahv2dea9a1ay2l8787cf-2h4ly330d0j917ocvzw30j5x9 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:
```console
$ docker service create --name demo alpine:latest ping 8.8.8.8
```
3. Check that your service created one running container:
```console
$ docker service ps demo
```
You should see something like:
```shell
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
463j2s3y4b5o demo.1 alpine:latest docker-desktop Running Running 8 seconds ago
```
4. Check that you get the logs you'd expect for a ping process:
```console
$ docker service logs demo
```
You should see the output of a healthy ping process:
```shell
demo.1.463j2s3y4b5o@docker-desktop | PING 8.8.8.8 (8.8.8.8): 56 data bytes
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=0 ttl=37 time=13.005 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=1 ttl=37 time=13.847 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=2 ttl=37 time=41.296 ms
...
```
5. Finally, tear down your test service:
```console
$ docker service rm demo
```
{{< /tab >}}
{{< tab name="Windows" >}}
@ -246,62 +247,62 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
1. Open a PowerShell, and initialize Docker Swarm mode:
```console
$ docker swarm init
```
If all goes well, you should see a message similar to the following:

```shell
Swarm initialized: current node (tjjggogqpnpj2phbfbz8jd5oq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3e0hh0jd5t4yjg209f4g5qpowbsczfahv2dea9a1ay2l8787cf-2h4ly330d0j917ocvzw30j5x9 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:
```console
$ docker service create --name demo alpine:latest ping 8.8.8.8
```
3. Check that your service created one running container:
```console
$ docker service ps demo
```
You should see something like:
```shell
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
463j2s3y4b5o demo.1 alpine:latest docker-desktop Running Running 8 seconds ago
```
4. Check that you get the logs you'd expect for a ping process:
```console
$ docker service logs demo
```
You should see the output of a healthy ping process:
```shell
demo.1.463j2s3y4b5o@docker-desktop | PING 8.8.8.8 (8.8.8.8): 56 data bytes
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=0 ttl=37 time=13.005 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=1 ttl=37 time=13.847 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=2 ttl=37 time=41.296 ms
...
```
5. Finally, tear down your test service:
```console
$ docker service rm demo
```
{{< /tab >}}
{{< /tabs >}}
View File
@ -10,7 +10,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/php/
- /guides/language/php/
languages: [php]
levels: [beginner]
params:
@ -19,11 +20,11 @@ params:
The PHP language-specific guide teaches you how to create a containerized PHP application using Docker. In this guide, you'll learn how to:
- Containerize and run a PHP application
- Set up a local environment to develop a PHP application using containers
- Run tests for a PHP application within containers
- Configure a CI/CD pipeline for a containerized PHP application using GitHub Actions
- Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the PHP language-specific guide, you should be able to containerize your own PHP application based on the examples and instructions provided in this guide.
View File
@ -5,7 +5,8 @@ weight: 40
keywords: php, CI/CD
description: Learn how to Configure CI/CD for your PHP application
aliases:
- /language/php/configure-ci-cd/
- /guides/language/php/configure-ci-cd/
---
## Prerequisites
@ -77,33 +78,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and test
uses: docker/build-push-action@v6
with:
target: test
load: true
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -138,8 +135,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
View File
@ -5,14 +5,15 @@ weight: 10
keywords: php, containerize, initialize, apache, composer
description: Learn how to containerize a PHP application.
aliases:
- /language/php/containerize/
- /language/php/containerize/
- /guides/language/php/containerize/
---
## Prerequisites
* You have installed the latest version of [Docker
- You have installed the latest version of [Docker
Desktop](/get-started/get-docker.md).
* You have a [git client](https://git-scm.com/downloads). The examples in this
- You have a [git client](https://git-scm.com/downloads). The examples in this
section use a command-line based git client, but you can use any client.
## Overview
@ -80,9 +81,10 @@ directory.
```
To learn more about the files that `docker init` added, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -124,7 +126,8 @@ In this section, you learned how you can containerize and run a simple PHP
application using Docker.
Related information:
- [docker init reference](/reference/cli/docker/init.md)
- [docker init reference](/reference/cli/docker/init.md)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, php, local, development
description: Learn how to deploy your application
aliases:
- /language/php/deploy/
- /language/php/deploy/
- /guides/language/php/deploy/
---
## Prerequisites
@ -47,9 +48,9 @@ spec:
hello-php: web
spec:
containers:
- name: hello-site
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
- name: hello-site
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -61,21 +62,21 @@ spec:
selector:
hello-php: web
ports:
- port: 80
targetPort: 80
nodePort: 30001
- port: 80
targetPort: 80
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
PHP application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 80 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
PHP application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 80 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
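
To try the manifest against the Kubernetes cluster in Docker Desktop, you can apply it and then hit the NodePort. A sketch, assuming the manifest is saved as `docker-php-kubernetes.yaml` (the filename is an assumption; use whatever name your manifest has):

```console
$ kubectl apply -f docker-php-kubernetes.yaml
$ kubectl get deployments
$ curl http://localhost:30001
```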
@ -139,6 +140,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,7 +5,8 @@ weight: 20
keywords: php, development
description: Learn how to develop your PHP application locally using containers.
aliases:
- /language/php/develop/
- /language/php/develop/
- /guides/language/php/develop/
---
## Prerequisites
@ -15,16 +16,18 @@ Complete [Containerize a PHP application](containerize.md).
## Overview
In this section, you'll learn how to set up a development environment for your containerized application. This includes:
- Adding a local database and persisting data
- Adding phpMyAdmin to interact with the database
- Configuring Compose to automatically update your running Compose services as
you edit and save your code
- Creating a development container that contains the dev dependencies
- Adding a local database and persisting data
- Adding phpMyAdmin to interact with the database
- Configuring Compose to automatically update your running Compose services as
you edit and save your code
- Creating a development container that contains the dev dependencies
## Add a local database and persist data
You can use containers to set up local services, like a database.
To do this for the sample application, you'll need to do the following:
- Update the `Dockerfile` to install extensions to connect to the database
- Update the `compose.yaml` file to add a database service and volume to persist data
@ -63,6 +66,7 @@ already contains commented-out instructions for a PostgreSQL database and volume
Open the `src/database.php` file in an IDE or text editor. You'll notice that it reads environment variables in order to connect to the database.
In the `compose.yaml` file, you'll need to update the following:
1. Uncomment and update the database instructions for MariaDB.
2. Add a secret to the server service to pass in the database password.
3. Add the database connection environment variables to the server service.
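
Concretely, the server service ends up with a secret mount plus the connection variables. A minimal sketch of the relevant `compose.yaml` keys (the exact variable names here are assumptions; match them to what `src/database.php` reads):

```yaml
services:
  server:
    secrets:
      - db-password
    environment:
      - PASSWORD_FILE_PATH=/run/secrets/db-password
      - DB_HOST=db
      - DB_NAME=example
      - DB_USER=root
secrets:
  db-password:
    file: db/password.txt
```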
@ -101,7 +105,14 @@ services:
expose:
- 3306
healthcheck:
test: ["CMD", "/usr/local/bin/healthcheck.sh", "--su-mysql", "--connect", "--innodb_initialized"]
test:
[
"CMD",
"/usr/local/bin/healthcheck.sh",
"--su-mysql",
"--connect",
"--innodb_initialized",
]
interval: 10s
timeout: 5s
retries: 5
@ -209,7 +220,14 @@ services:
expose:
- 3306
healthcheck:
test: ["CMD", "/usr/local/bin/healthcheck.sh", "--su-mysql", "--connect", "--innodb_initialized"]
test:
[
"CMD",
"/usr/local/bin/healthcheck.sh",
"--su-mysql",
"--connect",
"--innodb_initialized",
]
interval: 10s
timeout: 5s
retries: 5
@ -280,7 +298,14 @@ services:
expose:
- 3306
healthcheck:
test: ["CMD", "/usr/local/bin/healthcheck.sh", "--su-mysql", "--connect", "--innodb_initialized"]
test:
[
"CMD",
"/usr/local/bin/healthcheck.sh",
"--su-mysql",
"--connect",
"--innodb_initialized",
]
interval: 10s
timeout: 5s
retries: 5
@ -298,6 +323,7 @@ secrets:
db-password:
file: db/password.txt
```
Run the following command to run your application with Compose Watch.
```console
@ -320,6 +346,7 @@ Press `ctrl+c` in the terminal to stop Compose Watch. Run `docker compose down`
At this point, when you run your containerized application, Composer isn't installing the dev dependencies. While this small image is good for production, it lacks the tools and dependencies you may need when developing and it doesn't include the `tests` directory. You can use multi-stage builds to build stages for both development and production in the same Dockerfile. For more details, see [Multi-stage builds](/manuals/build/building/multi-stage.md).
In the `Dockerfile`, you'll need to update the following:
1. Split the `deps` stage into two stages: one stage for production
   (`prod-deps`) and one stage (`dev-deps`) to install development dependencies.
2. Create a common `base` stage.
@ -348,6 +375,7 @@ COPY --from=deps app/vendor/ /var/www/html/vendor
COPY ./src /var/www/html
USER www-data
```
{{< /tab >}}
{{< tab name="After" >}}
@ -386,7 +414,6 @@ USER www-data
{{< /tab >}}
{{< /tabs >}}
Update your `compose.yaml` file by adding an instruction to target the
development stage.
@ -421,10 +448,11 @@ In this section, you took a look at setting up your Compose file to add a local
database and persist data. You also learned how to use Compose Watch to automatically sync your application when you update your code. And finally, you learned how to create a development container that contains the dependencies needed for development.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Dockerfile reference](/reference/dockerfile.md)
- [Official Docker Image for PHP](https://hub.docker.com/_/php)
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Dockerfile reference](/reference/dockerfile.md)
- [Official Docker Image for PHP](https://hub.docker.com/_/php)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 30
keywords: php, test
description: Learn how to run your PHP tests in a container.
aliases:
- /language/php/run-tests/
- /language/php/run-tests/
- /guides/language/php/run-tests/
---
## Prerequisites
@ -109,7 +110,8 @@ You should see output containing the following.
In this section, you learned how to run tests when developing locally using Compose and how to run tests when building your image.
Related information:
- [docker compose run](/reference/cli/docker/compose/run/)
- [docker compose run](/reference/cli/docker/compose/run/)
## Next steps

View File

@ -10,7 +10,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/python/
- /language/python/
- /guides/language/python/
languages: [python]
levels: [beginner]
params:
@ -19,9 +20,9 @@ params:
The Python language-specific guide teaches you how to containerize a Python application using Docker. In this guide, you'll learn how to:
* Containerize and run a Python application
* Set up a local environment to develop a Python application using containers
* Configure a CI/CD pipeline for a containerized Python application using GitHub Actions
* Deploy your containerized Python application locally to Kubernetes to test and debug your deployment
- Containerize and run a Python application
- Set up a local environment to develop a Python application using containers
- Configure a CI/CD pipeline for a containerized Python application using GitHub Actions
- Deploy your containerized Python application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing Python application.

View File

@ -5,7 +5,8 @@ weight: 40
keywords: ci/cd, github actions, python, flask
description: Learn how to configure CI/CD using GitHub Actions for your Python application.
aliases:
- /language/python/configure-ci-cd/
- /language/python/configure-ci-cd/
- /guides/language/python/configure-ci-cd/
---
## Prerequisites
@ -69,34 +70,31 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ vars.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
```
For more information about the YAML syntax for `docker/build-push-action`,
refer to the [GitHub Action README](https://github.com/docker/build-push-action/blob/master/README.md).
@ -123,8 +121,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your Python application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps

View File

@ -8,12 +8,13 @@ aliases:
- /language/python/build-images/
- /language/python/run-containers/
- /language/python/containerize/
- /guides/language/python/containerize/
---
## Prerequisites
* You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
* You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
- You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
## Overview
@ -241,6 +242,7 @@ Create a file named `.dockerignore` with the following contents.
LICENSE
README.md
```
Create a file named `.gitignore` with the following contents.
```text {collapse=true,title=".gitignore"}
@ -318,10 +320,11 @@ directory.
```
To learn more about the files, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [.gitignore](https://git-scm.com/docs/gitignore)
- [compose.yaml](/reference/compose-file/_index.md)
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [.gitignore](https://git-scm.com/docs/gitignore)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -367,7 +370,8 @@ In this section, you learned how you can containerize and run your Python
application using Docker.
Related information:
- [Docker Compose overview](/manuals/compose/_index.md)
- [Docker Compose overview](/manuals/compose/_index.md)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, python
description: Learn how to develop locally using Kubernetes
aliases:
- /language/python/deploy/
- /language/python/deploy/
- /guides/language/python/deploy/
---
## Prerequisites
@ -39,27 +40,27 @@ spec:
app: postgres
spec:
containers:
- name: postgres
image: postgres
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: example
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_PASSWORD
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
- name: postgres
image: postgres
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: example
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_PASSWORD
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pvc
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pvc
---
apiVersion: v1
kind: Service
@ -68,7 +69,7 @@ metadata:
namespace: default
spec:
ports:
- port: 5432
- port: 5432
selector:
app: postgres
---
@ -79,7 +80,7 @@ metadata:
namespace: default
spec:
accessModes:
- ReadWriteOnce
- ReadWriteOnce
resources:
requests:
storage: 1Gi
@ -113,25 +114,25 @@ spec:
service: fastapi
spec:
containers:
- name: fastapi-service
image: technox64/python-docker-dev-example-test:latest
imagePullPolicy: Always
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_PASSWORD
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_DB
value: example
- name: POSTGRES_SERVER
value: postgres
- name: POSTGRES_PORT
value: "5432"
ports:
- containerPort: 8001
- name: fastapi-service
image: technox64/python-docker-dev-example-test:latest
imagePullPolicy: Always
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_PASSWORD
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_DB
value: example
- name: POSTGRES_SERVER
value: postgres
- name: POSTGRES_PORT
value: "5432"
ports:
- containerPort: 8001
---
apiVersion: v1
kind: Service
@ -143,30 +144,30 @@ spec:
selector:
service: fastapi
ports:
- port: 8001
targetPort: 8001
nodePort: 30001
- port: 8001
targetPort: 8001
nodePort: 30001
```
In this Kubernetes YAML file, there are several objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your Python application](configure-ci-cd.md).
- A Service, which defines how the ports are mapped in the containers.
- A PersistentVolumeClaim, to define storage for the database that persists across restarts.
- A Secret, keeping the database password as an example of using the Kubernetes Secret resource.
- A NodePort service, which will route traffic from port 30001 on your host to
port 8001 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your Python application](configure-ci-cd.md).
- A Service, which defines how the ports are mapped in the containers.
- A PersistentVolumeClaim, to define storage for the database that persists across restarts.
- A Secret, keeping the database password as an example of using the Kubernetes Secret resource.
- A NodePort service, which will route traffic from port 30001 on your host to
port 8001 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
> [!NOTE]
>
> * The `NodePort` service is good for development/testing purposes. For production you should implement an [ingress-controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
> - The `NodePort` service is good for development/testing purposes. For production you should implement an [ingress-controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
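
The Deployment above injects the database settings into the container as environment variables, so the application can assemble its connection URL from them at startup. A minimal sketch in Python (the variable names match the manifest; the DSN format is an assumption based on a typical PostgreSQL connection string):

```python
import os


def database_url() -> str:
    # Read the settings injected by the Kubernetes Deployment;
    # only the password is required, the rest fall back to the
    # defaults used in the manifest above.
    user = os.environ.get("POSTGRES_USER", "postgres")
    password = os.environ["POSTGRES_PASSWORD"]  # from the postgres-secret Secret
    server = os.environ.get("POSTGRES_SERVER", "postgres")
    port = os.environ.get("POSTGRES_PORT", "5432")
    db = os.environ.get("POSTGRES_DB", "example")
    return f"postgresql://{user}:{password}@{server}:{port}/{db}"
```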
## Deploy and check your application
@ -250,6 +251,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,7 +5,8 @@ weight: 20
keywords: python, local, development
description: Learn how to develop your Python application locally.
aliases:
- /language/python/develop/
- /language/python/develop/
- /guides/language/python/develop/
---
## Prerequisites
@ -40,15 +41,15 @@ You'll need to clone a new repository to get a sample application that includes
```console
$ docker init
Welcome to the Docker Init CLI!
This utility will walk you through creating the following files with sensible defaults for your project:
- .dockerignore
- Dockerfile
- compose.yaml
- README.Docker.md
Let's get started!
? What application platform does your project use? Python
? What version of Python do you want to use? 3.11.4
? What port do you want your app to listen on? 8001
@ -56,16 +57,16 @@ You'll need to clone a new repository to get a sample application that includes
```
Create a file named `.gitignore` with the following contents.
```text {collapse=true,title=".gitignore"}
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
@ -85,7 +86,7 @@ You'll need to clone a new repository to get a sample application that includes
.installed.cfg
*.egg
MANIFEST
# Unit test / coverage reports
htmlcov/
.tox/
@ -100,10 +101,10 @@ You'll need to clone a new repository to get a sample application that includes
.hypothesis/
.pytest_cache/
cover/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Environments
.env
.venv
@ -113,36 +114,36 @@ You'll need to clone a new repository to get a sample application that includes
env.bak/
venv.bak/
```
{{< /tab >}}
{{< tab name="Manually create assets" >}}
If you don't have Docker Desktop installed or prefer creating the assets
manually, you can create the following files in your project directory.
Create a file named `Dockerfile` with the following contents.
```dockerfile {collapse=true,title=Dockerfile}
# syntax=docker/dockerfile:1
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/go/dockerfile-reference/
# Want to help us make this template better? Share your feedback here: https://forms.gle/ybq9Krt8jtBL3iCk7
ARG PYTHON_VERSION=3.11.4
FROM python:${PYTHON_VERSION}-slim as base
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
@ -154,7 +155,7 @@ You'll need to clone a new repository to get a sample application that includes
--no-create-home \
--uid "${UID}" \
appuser
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
@ -162,27 +163,27 @@ You'll need to clone a new repository to get a sample application that includes
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,source=requirements.txt,target=requirements.txt \
python -m pip install -r requirements.txt
# Switch to the non-privileged user to run the application.
USER appuser
# Copy the source code into the container.
COPY . .
# Expose the port that the application listens on.
EXPOSE 8001
# Run the application.
CMD python3 -m uvicorn app:app --host=0.0.0.0 --port=8001
```
Create a file named `compose.yaml` with the following contents.
```yaml {collapse=true,title=compose.yaml}
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Docker Compose reference guide at
# https://docs.docker.com/go/compose-spec-reference/
# Here the instructions define your application as a service called "server".
# This service is built from the Dockerfile in the current directory.
# You can add other services your application may depend on here, such as a
@ -194,7 +195,6 @@ You'll need to clone a new repository to get a sample application that includes
context: .
ports:
- 8001:8001
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
@ -228,16 +228,16 @@ You'll need to clone a new repository to get a sample application that includes
# db-password:
# file: db/password.txt
```
Create a file named `.dockerignore` with the following contents.
```text {collapse=true,title=".dockerignore"}
# Include any files or directories that you don't want to be copied to your
# container here (e.g., local build artifacts, temporary files, etc.).
#
# For more help, visit the .dockerignore file reference guide at
# https://docs.docker.com/go/build-context-dockerignore/
**/.DS_Store
**/__pycache__
**/.venv
@ -267,17 +267,18 @@ You'll need to clone a new repository to get a sample application that includes
LICENSE
README.md
```
Create a file named `.gitignore` with the following contents.
```text {collapse=true,title=".gitignore"}
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
@ -297,7 +298,7 @@ You'll need to clone a new repository to get a sample application that includes
.installed.cfg
*.egg
MANIFEST
# Unit test / coverage reports
htmlcov/
.tox/
@ -312,10 +313,10 @@ You'll need to clone a new repository to get a sample application that includes
.hypothesis/
.pytest_cache/
cover/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Environments
.env
.venv
@ -325,7 +326,7 @@ You'll need to clone a new repository to get a sample application that includes
env.bak/
venv.bak/
```
{{< /tab >}}
{{< /tabs >}}
@ -370,7 +371,7 @@ services:
expose:
- 5432
healthcheck:
test: [ "CMD", "pg_isready" ]
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -513,7 +514,7 @@ services:
expose:
- 5432
healthcheck:
test: [ "CMD", "pg_isready" ]
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -561,9 +562,10 @@ In this section, you took a look at setting up your Compose file to add a local
database and persist data. You also learned how to use Compose Watch to automatically rebuild and run your container when you update your code.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps

View File

@ -3,24 +3,26 @@ title: R language-specific guide
linkTitle: R
description: Containerize R apps using Docker
keywords: Docker, getting started, R, language
summary: |
This guide details how to containerize R applications using Docker, covering
image building, dependency management, optimizing image size, and best
practices for deploying R applications efficiently in containers.
toc_min: 1
toc_max: 2
aliases:
- /languages/r/
- /languages/r/
- /guides/languages/r/
languages: [r]
levels: [beginner]
params:
time: 10 minutes
---
The R language-specific guide teaches you how to containerize an R application using Docker. In this guide, you'll learn how to:
* Containerize and run an R application
* Set up a local environment to develop an R application using containers
* Configure a CI/CD pipeline for a containerized R application using GitHub Actions
* Deploy your containerized R application locally to Kubernetes to test and debug your deployment
- Containerize and run an R application
- Set up a local environment to develop an R application using containers
- Configure a CI/CD pipeline for a containerized R application using GitHub Actions
- Deploy your containerized R application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing R application.

View File

@ -5,7 +5,8 @@ weight: 40
keywords: ci/cd, github actions, R, shiny
description: Learn how to configure CI/CD using GitHub Actions for your R application.
aliases:
- /language/r/configure-ci-cd/
- /language/r/configure-ci-cd/
- /guides/language/r/configure-ci-cd/
---
## Prerequisites
@ -69,27 +70,24 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -123,8 +121,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your R application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps

View File

@ -8,11 +8,12 @@ aliases:
- /language/R/build-images/
- /language/R/run-containers/
- /language/r/containerize/
- /guides/language/r/containerize/
---
## Prerequisites
* You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
- You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
## Overview
@ -43,9 +44,10 @@ directory.
```
To learn more about the files in the repository, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -89,7 +91,8 @@ In this section, you learned how you can containerize and run your R
application using Docker.
Related information:
- [Docker Compose overview](/manuals/compose/_index.md)
- [Docker Compose overview](/manuals/compose/_index.md)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, R
description: Learn how to develop locally using Kubernetes
aliases:
- /language/r/deploy/
- /language/r/deploy/
- /guides/language/r/deploy/
---
## Prerequisites
@ -42,12 +43,12 @@ spec:
service: shiny
spec:
containers:
- name: shiny-service
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
env:
- name: POSTGRES_PASSWORD
value: mysecretpassword
- name: shiny-service
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
env:
- name: POSTGRES_PASSWORD
value: mysecretpassword
---
apiVersion: v1
kind: Service
@ -59,21 +60,21 @@ spec:
selector:
service: shiny
ports:
- port: 3838
targetPort: 3838
nodePort: 30001
- port: 3838
targetPort: 3838
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your R application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3838 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your R application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3838 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -140,6 +141,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,7 +5,8 @@ weight: 20
keywords: R, local, development
description: Learn how to develop your R application locally.
aliases:
- /language/r/develop/
- /language/r/develop/
- /guides/language/r/develop/
---
## Prerequisites
@ -42,7 +43,7 @@ To try the connection between the Shiny application and the local database you h
You can use containers to set up local services, like a database. In this section, you'll update the `compose.yaml` file to define a database service and a volume to persist data.
In the cloned repository's directory, open the `compose.yaml` file in an IDE or text editor.
In the cloned repository's directory, open the `compose.yaml` file in an IDE or text editor.
In the `compose.yaml` file, you need to un-comment the properties for configuring the database. You must also mount the database password file and set an environment variable on the `shiny-app` service pointing to the location of the file in the container.
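As a sketch of the application side of this wiring — written in Python for illustration, with an assumed environment variable name (`POSTGRES_PASSWORD_FILE`) that may differ in your setup — the app reads the mounted secret like this:

```python
import os

# Hypothetical sketch: Compose mounts the password file into the container,
# and an environment variable (assumed name: POSTGRES_PASSWORD_FILE) points
# at that location. The app reads the secret from the file at startup.
def read_db_password(env_var: str = "POSTGRES_PASSWORD_FILE") -> str:
    path = os.environ[env_var]
    with open(path) as f:
        return f.read().strip()
```

Pointing an environment variable at a mounted file, rather than putting the password itself in the environment, keeps the secret value out of the container's environment listing.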
@ -77,7 +78,7 @@ services:
expose:
- 5432
healthcheck:
test: [ "CMD", "pg_isready" ]
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -140,7 +141,6 @@ You should see a pop-up message:
DB CONNECTED
```
Press `ctrl+c` in the terminal to stop your application.
## Automatically update services
@ -185,7 +185,7 @@ services:
expose:
- 5432
healthcheck:
test: [ "CMD", "pg_isready" ]
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -212,9 +212,10 @@ In this section, you took a look at setting up your Compose file to add a local
database and persist data. You also learned how to use Compose Watch to automatically rebuild and run your container when you update your code.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps

View File

@ -1,6 +1,6 @@
---
description: Containerize RAG application using Ollama and Docker
keywords: python, generative ai, genai, llm, ollama, rag, qdrant
keywords: python, generative ai, genai, llm, ollama, rag, qdrant
title: Build a RAG application using Ollama and Docker
linkTitle: RAG Ollama application
summary: |
@ -10,13 +10,15 @@ summary: |
workflow for scalable, containerized deployments.
subjects: [ai]
levels: [beginner]
aliases:
- /guides/use-case/rag-ollama/
params:
time: 20 minutes
---
The Retrieval Augmented Generation (RAG) guide teaches you how to containerize an existing RAG application using Docker. The example application is a RAG that acts like a sommelier, giving you the best pairings between wines and food. In this guide, you'll learn how to:
* Containerize and run a RAG application
* Set up a local environment to run the complete RAG stack locally for development
- Containerize and run a RAG application
- Set up a local environment to run the complete RAG stack locally for development
Start by containerizing an existing RAG application.

View File

@ -4,6 +4,8 @@ linkTitle: Containerize your app
weight: 10
keywords: python, generative ai, genai, llm, ollama, containerize, initialize, qdrant
description: Learn how to containerize a RAG application.
aliases:
- /guides/use-case/rag-ollama/containerize/
---
## Overview
@ -70,24 +72,24 @@ server-1 | URL: http://0.0.0.0:8501
server-1 |
```
Open a browser and view the application at [http://localhost:8501](http://localhost:8501). You should see a simple Streamlit application.
Open a browser and view the application at [http://localhost:8501](http://localhost:8501). You should see a simple Streamlit application.
The application requires a Qdrant database service and an LLM service to work properly. If you have access to services that you ran outside of Docker, specify the connection information in the `docker-compose.yaml`.
```yaml
winy:
build:
context: ./app
context: ./app
dockerfile: Dockerfile
environment:
- QDRANT_CLIENT=http://qdrant:6333 # Specifies the url for the qdrant database
- OLLAMA=http://ollama:11434 # Specifies the url for the ollama service
container_name: winy
container_name: winy
ports:
- "8501:8501"
- "8501:8501"
depends_on:
- qdrant
- ollama
- qdrant
- ollama
```
If you don't have the services running, continue with this guide to learn how you can run some or all of these services with Docker.

View File

@ -4,6 +4,8 @@ linkTitle: Develop your app
weight: 10
keywords: python, local, development, generative ai, genai, llm, rag, ollama
description: Learn how to develop your generative RAG application locally.
aliases:
- /guides/use-case/rag-ollama/develop/
---
## Prerequisites
@ -57,11 +59,13 @@ To run the database service:
## Add a local or remote LLM service
The sample application supports [Ollama](https://ollama.ai/). This guide provides instructions for the following scenarios:
- Run Ollama in a container
- Run Ollama outside of a container
While all platforms can use any of the previous scenarios, the performance and
GPU support may vary. You can use the following guidelines to help you choose the appropriate option:
- Run Ollama in a container if you're on Linux with a native installation of Docker Engine, or on Windows 10/11 with Docker Desktop, you
  have a CUDA-supported GPU, and your system has at least 8 GB of RAM.
- Run Ollama outside of a container if you're running Docker Desktop on a Linux machine.
@ -74,6 +78,7 @@ Choose one of the following options for your LLM service.
When running Ollama in a container, you should have a CUDA-supported GPU. While you can run Ollama in a container without a supported GPU, the performance may not be acceptable. Only Linux and Windows 11 support GPU access to containers.
To run Ollama in a container and provide GPU access:
1. Install the prerequisites.
- For Docker Engine on Linux, install the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-container-toolkit).
- For Docker Desktop on Windows 10/11, install the latest [NVIDIA driver](https://www.nvidia.com/Download/index.aspx) and make sure you are using the [WSL2 backend](/manuals/desktop/wsl/_index.md#turn-on-docker-desktop-wsl-2)
@ -122,7 +127,7 @@ To run Ollama outside of a container:
3. Remove the `ollama` service from the `docker-compose.yaml` and properly update the connection variables in the `winy` service:
```diff
- OLLAMA=http://ollama:11434
- OLLAMA=http://ollama:11434
+ OLLAMA=<your-url>
```
@ -132,6 +137,7 @@ To run Ollama outside of a container:
## Run your RAG application
At this point, you have the following services in your Compose file:
- Server service for your main RAG application
- Database service to store vectors in a Qdrant database
- (optional) Ollama service to run the LLM
@ -148,10 +154,11 @@ In this section, you learned how to set up a development environment to provide
access all the services that your GenAI application needs.
Related information:
- [Dockerfile reference](/reference/dockerfile.md)
- [Compose file reference](/reference/compose-file/_index.md)
- [Ollama Docker image](https://hub.docker.com/r/ollama/ollama)
- [GenAI Stack demo applications](https://github.com/docker/genai-stack)
- [Dockerfile reference](/reference/dockerfile.md)
- [Compose file reference](/reference/compose-file/_index.md)
- [Ollama Docker image](https://hub.docker.com/r/ollama/ollama)
- [GenAI Stack demo applications](https://github.com/docker/genai-stack)
## Next steps

View File

@ -10,7 +10,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/ruby/
- /language/ruby/
- /guides/language/ruby/
languages: [ruby]
levels: [beginner]
params:
@ -19,9 +20,9 @@ params:
The Ruby language-specific guide teaches you how to containerize a Ruby on Rails application using Docker. In this guide, you'll learn how to:
* Containerize and run a Ruby on Rails application
* Set up a local environment to develop a Ruby on Rails application using containers
* Configure a CI/CD pipeline for a containerized Ruby on Rails application using GitHub Actions
* Deploy your containerized Ruby on Rails application locally to Kubernetes to test and debug your deployment
- Containerize and run a Ruby on Rails application
- Set up a local environment to develop a Ruby on Rails application using containers
- Configure a CI/CD pipeline for a containerized Ruby on Rails application using GitHub Actions
- Deploy your containerized Ruby on Rails application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing Ruby on Rails application.

View File

@ -5,7 +5,8 @@ weight: 40
keywords: ci/cd, github actions, ruby, flask
description: Learn how to configure CI/CD using GitHub Actions for your Ruby on Rails application.
aliases:
- /language/ruby/configure-ci-cd/
- /language/ruby/configure-ci-cd/
- /guides/language/ruby/configure-ci-cd/
---
## Prerequisites
@ -69,34 +70,31 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64
push: true
tags: ${{ vars.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
```
For more information about the YAML syntax for `docker/build-push-action`,
refer to the [GitHub Action README](https://github.com/docker/build-push-action/blob/master/README.md).
@ -123,8 +121,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your Ruby on Rails application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps

View File

@ -8,12 +8,13 @@ aliases:
- /language/ruby/build-images/
- /language/ruby/run-containers/
- /language/ruby/containerize/
- /guides/language/ruby/containerize/
---
## Prerequisites
* You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
* You have a [Git client](https://git-scm.com/downloads). The examples in this section show the Git CLI, but you can use any client.
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
- You have a [Git client](https://git-scm.com/downloads). The examples in this section show the Git CLI, but you can use any client.
## Overview
@ -335,16 +336,15 @@ build-iPhoneSimulator/
You should now have the following three files in your `docker-ruby-on-rails`
directory.
- .dockerignore
- compose.yaml
- Dockerfile
To learn more about the files, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -388,7 +388,8 @@ In this section, you learned how you can containerize and run your Ruby
application using Docker.
Related information:
- [Docker Compose overview](/manuals/compose/_index.md)
- [Docker Compose overview](/manuals/compose/_index.md)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, ruby
description: Learn how to develop locally using Kubernetes
aliases:
- /language/ruby/deploy/
- /language/ruby/deploy/
- /guides/language/ruby/deploy/
---
## Prerequisites
@ -42,9 +43,9 @@ spec:
service: ruby-on-rails
spec:
containers:
- name: ruby-on-rails-container
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
- name: ruby-on-rails-container
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -56,21 +57,21 @@ spec:
selector:
service: ruby-on-rails
ports:
- port: 3000
targetPort: 3000
nodePort: 30001
- port: 3000
targetPort: 3000
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your Ruby on Rails application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3000 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your Ruby on Rails application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3000 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -118,7 +119,6 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
```
In addition to the default `kubernetes` service, you can see your `docker-ruby-on-rails-demo` service, accepting traffic on port 30001/TCP.
3. To create and migrate the database in a Ruby on Rails application running on Kubernetes, you need to follow these steps.
@ -161,6 +161,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,7 +5,8 @@ weight: 20
keywords: ruby, local, development
description: Learn how to develop your Ruby on Rails application locally.
aliases:
- /language/ruby/develop/
- /language/ruby/develop/
- /guides/language/ruby/develop/
---
## Prerequisites
@ -106,7 +107,7 @@ Now, run the following `docker compose up` command to start your application.
$ docker compose up --build
```
In Ruby on Rails, `db:migrate` is a Rake task that is used to run migrations on the database. Migrations are a way to alter the structure of your database schema over time in a consistent and easy way.
In Ruby on Rails, `db:migrate` is a Rake task that is used to run migrations on the database. Migrations are a way to alter the structure of your database schema over time in a consistent and easy way.
```console
$ docker exec -it docker-ruby-on-rails-web-1 rake db:migrate RAILS_ENV=test
@ -114,14 +115,14 @@ $ docker exec -it docker-ruby-on-rails-web-1 rake db:migrate RAILS_ENV=test
You will see a message similar to this:
``console
```console
== 20240710193146 CreateWhales: migrating =====================================
-- create_table(:whales)
-> 0.0126s
== 20240710193146 CreateWhales: migrated (0.0127s) ============================
``
```
Refresh <http://localhost:3000> in your browser and add the whales.
Refresh <http://localhost:3000> in your browser and add the whales.
Press `ctrl+c` in the terminal to stop your application and run `docker compose up` again; the whales are persisted.
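Conceptually, `db:migrate` keeps a record of which schema versions have already run and applies only the pending ones. The following sketch shows that bookkeeping using Python's built-in `sqlite3` module — the details differ from Rails' Active Record, but the idea is the same:

```python
import sqlite3

# Illustrative sketch of a migration runner: track applied versions and run
# each pending schema change exactly once. Rails does equivalent bookkeeping
# in its schema_migrations table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")

migrations = {
    "20240710193146": "CREATE TABLE whales (id INTEGER PRIMARY KEY, name TEXT)",
}

def migrate(conn, migrations):
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for version in sorted(migrations):
        if version not in applied:
            conn.execute(migrations[version])
            conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))
    conn.commit()

migrate(conn, migrations)
migrate(conn, migrations)  # safe to rerun: already-applied versions are skipped
```

Because applied versions are recorded, running the migration command again (as you might after restarting the containers) is a no-op rather than an error.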
@ -192,9 +193,10 @@ In this section, you took a look at setting up your Compose file to add a local
database and persist data. You also learned how to use Compose Watch to automatically rebuild and run your container when you update your code.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps

View File

@ -10,7 +10,8 @@ summary: |
toc_min: 1
toc_max: 2
aliases:
- /language/rust/
- /language/rust/
- /guides/language/rust/
languages: [rust]
levels: [beginner]
params:
@ -19,13 +20,13 @@ params:
The Rust language-specific guide teaches you how to create a containerized Rust application using Docker. In this guide, you'll learn how to:
* Containerize a Rust application
* Build an image and run the newly built image as a container
* Set up volumes and networking
* Orchestrate containers using Compose
* Use containers for development
* Configure a CI/CD pipeline for your application using GitHub Actions
* Deploy your containerized Rust application locally to Kubernetes to test and debug your deployment
- Containerize a Rust application
- Build an image and run the newly built image as a container
- Set up volumes and networking
- Orchestrate containers using Compose
- Use containers for development
- Configure a CI/CD pipeline for your application using GitHub Actions
- Deploy your containerized Rust application locally to Kubernetes to test and debug your deployment
After completing the Rust modules, you should be able to containerize your own Rust application based on the examples and instructions provided in this guide.

View File

@ -5,13 +5,14 @@ weight: 5
keywords: rust, build, images, dockerfile
description: Learn how to build your first Rust Docker image
aliases:
- /language/rust/build-images/
- /language/rust/build-images/
- /guides/language/rust/build-images/
---
## Prerequisites
* You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
* You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
- You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
## Overview
@ -55,10 +56,11 @@ Let's get started!
You should now have the following new files in your `docker-rust-hello`
directory:
- Dockerfile
- .dockerignore
- compose.yaml
- README.Docker.md
- Dockerfile
- .dockerignore
- compose.yaml
- README.Docker.md
For building an image, only the Dockerfile is necessary. Open the Dockerfile
in your favorite IDE or text editor and see what it contains. To learn more
@ -91,19 +93,19 @@ You should see output like the following.
```console
[+] Building 62.6s (14/14) FINISHED
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 2.70kB 0.0s
=> => transferring dockerfile: 2.70kB 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 2.3s
=> CACHED docker-image://docker.io/docker/dockerfile:1@sha256:39b85bbfa7536a5feceb7372a0817649ecb2724562a38360f4d6a7782a409b14 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 1.9s
=> [internal] load metadata for docker.io/library/rust:1.70.0-slim-bullseye 1.7s
=> [internal] load metadata for docker.io/library/rust:1.70.0-slim-bullseye 1.7s
=> [build 1/3] FROM docker.io/library/rust:1.70.0-slim-bullseye@sha256:585eeddab1ec712dade54381e115f676bba239b1c79198832ddda397c1f 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 35.29kB 0.0s
=> [final 1/3] FROM docker.io/library/debian:bullseye-slim@sha256:7606bef5684b393434f06a50a3d1a09808fee5a0240d37da5d181b1b121e7637 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 35.29kB 0.0s
=> [final 1/3] FROM docker.io/library/debian:bullseye-slim@sha256:7606bef5684b393434f06a50a3d1a09808fee5a0240d37da5d181b1b121e7637 0.0s
=> CACHED [build 2/3] WORKDIR /app 0.0s
=> [build 3/3] RUN --mount=type=bind,source=src,target=src --mount=type=bind,source=Cargo.toml,target=Cargo.toml --mount= 57.7s
=> [build 3/3] RUN --mount=type=bind,source=src,target=src --mount=type=bind,source=Cargo.toml,target=Cargo.toml --mount= 57.7s
=> CACHED [final 2/3] RUN adduser --disabled-password --gecos "" --home "/nonexistent" --shell "/sbin/nologin" 0.0s
=> CACHED [final 3/3] COPY --from=build /bin/server /bin/ 0.0s
=> exporting to image 0.0s
@ -175,11 +177,11 @@ Docker removed the image tagged with `:v1.0.0`, but the `docker-rust-image:lates
This section showed how you can use `docker init` to create a Dockerfile and .dockerignore file for a Rust application. It then showed you how to build an image. And finally, it showed you how to tag an image and list all images.
Related information:
- [Dockerfile reference](/reference/dockerfile.md)
- [.dockerignore file](/reference/dockerfile.md#dockerignore-file)
- [docker init CLI reference](/reference/cli/docker/init.md)
- [docker build CLI reference](/reference/cli/docker/buildx/build.md)
- [Dockerfile reference](/reference/dockerfile.md)
- [.dockerignore file](/reference/dockerfile.md#dockerignore-file)
- [docker init CLI reference](/reference/cli/docker/init.md)
- [docker build CLI reference](/reference/cli/docker/buildx/build.md)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 40
keywords: rust, CI/CD, local, development
description: Learn how to Configure CI/CD for your application
aliases:
- /language/rust/configure-ci-cd/
- /language/rust/configure-ci-cd/
- /guides/language/rust/configure-ci-cd/
---
## Prerequisites
@ -69,33 +70,30 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
push: true
tags: ${{ vars.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
```
For more information about the YAML syntax for `docker/build-push-action`,
refer to the [GitHub Action README](https://github.com/docker/build-push-action/blob/master/README.md).
@ -122,8 +120,9 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your Rust application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, rust
description: Learn how to test your Rust deployment locally using Kubernetes
aliases:
- /language/rust/deploy/
- /language/rust/deploy/
- /guides/language/rust/deploy/
---
## Prerequisites
@ -47,7 +48,12 @@ spec:
initContainers:
- name: wait-for-db
image: busybox:1.28
command: ['sh', '-c', 'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;']
command:
[
"sh",
"-c",
'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;',
]
containers:
- image: DOCKER_USERNAME/REPO_NAME
name: server
@ -148,14 +154,14 @@ status:
In this Kubernetes YAML file, there are four objects, separated by the `---`. In addition to a Service and Deployment for the database, the other two objects are:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Rust application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 5000 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Rust application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 5000 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -226,6 +232,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
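The `wait-for-db` init container above simply polls the database port until it accepts connections. For illustration, the same wait loop looks like this in Python (host, port, and timings are placeholder values):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0, interval: float = 2.0) -> bool:
    """Poll until a TCP port accepts connections, like the busybox
    `until nc -zv db 5432` loop in the init container above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Kubernetes only starts the main `server` container once every init container exits successfully, so a loop like this guards against the app booting before the database is reachable.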
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,14 +5,15 @@ weight: 20
keywords: rust, local, development, run,
description: Learn how to develop your Rust application locally.
aliases:
- /language/rust/develop/
- /language/rust/develop/
- /guides/language/rust/develop/
---
## Prerequisites
* You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
* You have completed the walkthroughs in the Docker Desktop [Learning Center](/manuals/desktop/get-started.md) to learn about Docker concepts.
* You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
- You have completed the walkthroughs in the Docker Desktop [Learning Center](/manuals/desktop/get-started.md) to learn about Docker concepts.
- You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
## Overview
@ -69,7 +70,6 @@ postgres=#
In the previous command, you logged in to the PostgreSQL database by passing the `psql` command to the `db` container. Press ctrl-d to exit the PostgreSQL interactive terminal.
## Get and run the sample application
For the sample application, you'll use a variation of the backend from the react-rust-postgres application from [Awesome Compose](https://github.com/docker/awesome-compose/tree/master/react-rust-postgres).
@ -109,19 +109,19 @@ For the sample application, you'll use a variation of the backend from the react
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/reference/dockerfile/
################################################################################
# Create a stage for building the application.
ARG RUST_VERSION=1.70.0
ARG APP_NAME=react-rust-postgres
FROM rust:${RUST_VERSION}-slim-bullseye AS build
ARG APP_NAME
WORKDIR /app
# Build the application.
# Leverage a cache mount to /usr/local/cargo/registry/
# for downloaded dependencies and a cache mount to /app/target/ for
# for downloaded dependencies and a cache mount to /app/target/ for
# compiled dependencies which will speed up subsequent builds.
# Leverage a bind mount to the src directory to avoid having to copy the
# source code into the container. Once built, copy the executable to an
@ -137,7 +137,7 @@ For the sample application, you'll use a variation of the backend from the react
cargo build --locked --release
cp ./target/release/$APP_NAME /bin/server
EOF
################################################################################
# Create a new stage for running the application that contains the minimal
# runtime dependencies for the application. This often uses a different base
@ -150,7 +150,7 @@ For the sample application, you'll use a variation of the backend from the react
# reproducibility is important, consider using a digest
# (e.g., debian@sha256:ac707220fbd7b67fc19b112cee8170b41a9e97f703f588b2cdbbcdcecdd8af57).
FROM debian:bullseye-slim AS final
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ #user
ARG UID=10001
@ -163,13 +163,13 @@ For the sample application, you'll use a variation of the backend from the react
--uid "${UID}" \
appuser
USER appuser
# Copy the executable from the "build" stage.
COPY --from=build /bin/server /bin/
# Expose the port that the application listens on.
EXPOSE 8000
# What the container should run when it is started.
CMD ["/bin/server"]
```
@ -206,7 +206,7 @@ For the sample application, you'll use a variation of the backend from the react
You should get a response like the following.
```json
[{"id":1,"login":"root"}]
[{ "id": 1, "login": "root" }]
```
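To check this response programmatically instead of by eye, you can parse and validate the documented shape — a JSON array of objects, each with an integer `id` and a string `login`. A minimal Python sketch, using the response body shown above:

```python
import json

# Validate the response body without assuming anything beyond the
# documented shape of the /users endpoint's output.
body = '[{"id":1,"login":"root"}]'

users = json.loads(body)
assert isinstance(users, list)
for user in users:
    assert isinstance(user["id"], int)
    assert isinstance(user["login"], str)

print(users[0]["login"])  # -> root
```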
## Use Compose to develop locally
@ -218,8 +218,9 @@ This Compose file is super convenient as you don't have to type all the paramete
In the cloned repository's directory, open the `compose.yaml` file in an IDE or text editor. `docker init` handled creating most of the instructions, but you'll need to update it for your unique application.
You need to update the following items in the `compose.yaml` file:
- Uncomment all of the database instructions.
- Add the environment variables under the server service.
- Uncomment all of the database instructions.
- Add the environment variables under the server service.
The following is the updated `compose.yaml` file.
@ -247,12 +248,12 @@ services:
- PG_PASSWORD=mysecretpassword
- ADDRESS=0.0.0.0:8000
- RUST_LOG=debug
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker compose up`.
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker compose up`.
depends_on:
db:
condition: service_healthy
@ -270,7 +271,7 @@ services:
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -310,7 +311,7 @@ $ curl http://localhost:8000/users
You should receive the following response:
```json
[{ "id": 1, "login": "root" }]
```
## Summary
@ -318,8 +319,9 @@ You should receive the following response:
In this section, you took a look at setting up your Compose file to run your Rust application and database with a single command.
Related information:
- [Docker volumes](/manuals/engine/storage/volumes.md)
- [Compose overview](/manuals/compose/_index.md)
## Next steps


@ -5,12 +5,13 @@ weight: 10
keywords: rust, run, image, container,
description: Learn how to run your Rust image as a container.
aliases:
- /language/rust/run-containers/
- /guides/language/rust/run-containers/
---
## Prerequisite
You have completed [Build your Rust image](build-images.md) and you have built an image.
## Overview
@ -124,13 +125,13 @@ You can start, stop, and restart Docker containers. When you stop a container, i
```console
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
3074745e412c docker-rust-image "/bin/server" 3 minutes ago Exited (0) 6 seconds ago
wonderful_kalam
6cfa26e2e3c9 docker-rust-image "/bin/server" 14 minutes ago Exited (0) 5 minutes ago
friendly_montalcini
4cbe94b2ea0e docker-rust-image "/bin/server" 15 minutes ago Exited (0) 14 minutes ago
tender_bose
```
@ -146,12 +147,12 @@ Now list all the containers again using the `docker ps` command.
```console
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
3074745e412c docker-rust-image "/bin/server" 6 minutes ago Up 4 seconds 0.0.0.0:3001->8000/tcp wonderful_kalam
6cfa26e2e3c9 docker-rust-image "/bin/server" 16 minutes ago Exited (0) 7 minutes ago
friendly_montalcini
4cbe94b2ea0e docker-rust-image "/bin/server" 18 minutes ago Exited (0) 17 minutes ago
tender_bose
```
@ -196,7 +197,8 @@ That's better! You can now easily identify your container based on the name.
In this section, you took a look at running containers. You also took a look at managing containers by starting, stopping, and restarting them. And finally, you looked at naming your containers so they are more easily identifiable.
Related information:
- [docker run CLI reference](/reference/cli/docker/container/run.md)
## Next steps


@ -11,6 +11,8 @@ summary: |
subjects: [ai]
languages: [python]
levels: [beginner]
aliases:
- /guides/use-case/nlp/sentiment-analysis/
params:
time: 20 minutes
---
@ -27,8 +29,8 @@ negative, or neutral.
## Prerequisites
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Get the sample application
@ -66,7 +68,7 @@ The source code for the sentiment analysis application is in the `Docker-NLP/01_
from nltk.sentiment import SentimentIntensityAnalyzer
import ssl
```
- `nltk`: This is the Natural Language Toolkit library used for working with
human language data in Python.
- `SentimentIntensityAnalyzer`: This is a specific tool from NLTK used for
@ -84,7 +86,7 @@ The source code for the sentiment analysis application is in the `Docker-NLP/01_
else:
ssl._create_default_https_context = _create_unverified_https_context
```
This block is a workaround for certain environments where downloading data through NLTK might fail due to SSL certificate verification issues. It's telling Python to ignore SSL certificate verification for HTTPS requests.
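Because only part of the snippet is visible here, the full workaround pattern is worth seeing as a self-contained, stdlib-only sketch (the `_unverified` name is illustrative):

```python
import ssl

# Workaround for environments where NLTK's downloads fail on SSL
# certificate verification: make unverified HTTPS contexts the default.
try:
    _unverified = ssl._create_unverified_context
except AttributeError:
    # Older interpreters without this attribute verify by default.
    pass
else:
    ssl._create_default_https_context = _unverified

print(ssl._create_default_https_context is ssl._create_unverified_context)  # True
```

Keep in mind this disables certificate checks process-wide, so it should stay scoped to environments where the download would otherwise fail.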
3. Download NLTK resources.
@ -93,7 +95,7 @@ The source code for the sentiment analysis application is in the `Docker-NLP/01_
nltk.download('vader_lexicon')
nltk.download('punkt')
```
- `vader_lexicon`: This is a lexicon used by the `SentimentIntensityAnalyzer`
for sentiment analysis.
- `punkt`: This is used by NLTK for tokenizing sentences. It's necessary for
@ -257,10 +259,10 @@ The following steps explain each part of the `Dockerfile`. For more details, see
ENTRYPOINT ["/app/entrypoint.sh"]
```
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
You can explore the `entrypoint.sh` script by opening it in a code or text
editor. As the sample contains several applications, the script lets you
specify which application to run when the container starts.
@ -313,12 +315,12 @@ To run the application using Docker:
- `docker run`: This is the primary command used to run a new container from
a Docker image.
- `-it`: This is a combination of two options:
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `basic-nlp`: This specifies the name of the Docker image to use for
creating the container. In this case, it's the image named `basic-nlp` that
you created with the `docker build` command.
@ -328,7 +330,6 @@ To run the application using Docker:
For more details, see the [docker run CLI reference](/reference/cli/docker/container/run/).
> [!NOTE]
>
> For Windows users, you may get an error when running the container. Verify
@ -344,7 +345,7 @@ To run the application using Docker:
3. Test the application.
Enter a comment to get the sentiment analysis.
```console
Enter the text for semantic analysis (type 'exit' to end): I love containers!
Sentiment: Positive
@ -360,10 +361,10 @@ and then set up the environment and run the application using Docker.
Related information:
- [Docker CLI reference](/reference/cli/docker/)
- [Dockerfile reference](/reference/dockerfile/)
- [Natural Language Toolkit](https://www.nltk.org/)
- [Python documentation](https://docs.python.org/3/)
## Next steps


@ -3,8 +3,9 @@ title: Deploy to Swarm
keywords: swarm, swarm services, stacks
description: Learn how to describe and deploy a simple application on Docker Swarm.
aliases:
- /get-started/part4/
- /get-started/swarm-deploy/
- /guides/deployment-orchestration/swarm-deploy/
summary: |
Discover how to deploy and manage Docker containers using Docker Swarm, with
step-by-step guides on setup, scaling, networking, and best practices for
@ -40,7 +41,7 @@ Now you can write a simple stack file to run and manage your Todo app, the conta
{{< include "swarm-compose-compat.md" >}}
```yaml
version: "3.7"
services:
bb-app:
@ -51,7 +52,7 @@ services:
In this Swarm YAML file, there is one object, a `service`, describing a scalable group of identical containers. In this case, you'll get just one container (the default), and that container will be based on your `getting-started` image created in [Part 2](02_our_app.md) of the tutorial. In addition, you've asked Swarm to forward all traffic arriving at port 8000 on your development machine to port 3000 inside your getting-started container.
> **Kubernetes Services and Swarm Services are very different**
>
> Despite the similar name, the two orchestrators mean very different things by
> the term 'service'. In Swarm, a service provides both scheduling and
@ -65,41 +66,41 @@ In this Swarm YAML file, there is one object, a `service`, describing a scalable
1. Deploy your application to Swarm:
```console
$ docker stack deploy -c bb-stack.yaml demo
```
If all goes well, Swarm will report creating all your stack objects with no complaints:
```shell
Creating network demo_default
Creating service demo_bb-app
```
Notice that in addition to your service, Swarm also creates a Docker network by default to isolate the containers deployed as part of your stack.
2. Make sure everything worked by listing your service:
```console
$ docker service ls
```
If all has gone well, your service will report with 1/1 of its replicas created:
```shell
ID NAME MODE REPLICAS IMAGE PORTS
il7elwunymbs demo_bb-app replicated 1/1 getting-started:latest *:8000->3000/tcp
```
This indicates 1/1 containers you asked for as part of your services are up and running. Also, you see that port 8000 on your development machine is getting forwarded to port 3000 in your getting-started container.
3. Open a browser and visit your Todo app at `localhost:8000`; you should see your Todo application, the same as when you ran it as a stand-alone container in [Part 2](02_our_app.md) of the tutorial.
4. Once satisfied, tear down your application:
```console
$ docker stack rm demo
```
## Conclusion
@ -111,8 +112,8 @@ In addition to deploying to Swarm, you've also described your application as a s
Further documentation for all new Swarm objects and CLI commands used in this article are available here:
- [Swarm Mode](/manuals/engine/swarm/_index.md)
- [Swarm Mode Services](/manuals/engine/swarm/how-swarm-mode-works/services.md)
- [Swarm Stacks](/manuals/engine/swarm/stack-deploy.md)
- [`docker stack *`](/reference/cli/docker/stack/)
- [`docker service *`](/reference/cli/docker/service/)


@ -9,6 +9,8 @@ summary: |
subjects: [ai]
languages: [js]
levels: [beginner]
aliases:
- /guides/use-case/tensorflowjs/
params:
time: 20 minutes
---
@ -30,9 +32,9 @@ perform face detection. In this guide, you'll explore how to:
## Prerequisites
- You have installed the latest version of
[Docker Desktop](/get-started/get-docker.md).
- You have a [Git client](https://git-scm.com/downloads). The examples in this
guide use a command-line based Git client, but you can use any client.
## What is TensorFlow.js?
@ -56,7 +58,6 @@ ML tasks accessible to web developers without deep ML expertise.
secure environments, minimizing conflicts and security vulnerabilities while
running applications with limited permissions.
## Get and run the sample application
In a terminal, clone the sample application using the following command.
@ -95,6 +96,7 @@ at [http://localhost:80](http://localhost:80). You may need to grant access to
your webcam for the application.
In the web application, you can change the backend to use one of the following:
- WASM
- WebGL
- CPU
@ -208,22 +210,28 @@ It also uses the following additional libraries:
<body>
<div id="main">
<video
id="video"
playsinline
style="
-webkit-transform: scaleX(-1);
transform: scaleX(-1);
width: auto;
height: auto;
"
></video>
<canvas id="output"></canvas>
<video
id="video"
playsinline
style="
-webkit-transform: scaleX(-1);
transform: scaleX(-1);
visibility: hidden;
width: auto;
height: auto;
"
></video>
</div>
</body>
<script src="https://unpkg.com/@tensorflow/tfjs-core@2.1.0/dist/tf-core.js"></script>
@ -270,7 +278,6 @@ breakdown of some of its key components and functionalities:
feed. For each detected face, it draws a red rectangle around the face and
blue dots for facial landmarks on a canvas overlaying the video.
{{< accordion title="index.js" >}}
```javascript
@ -281,41 +288,45 @@ document.body.prepend(stats.domElement);
let model, ctx, videoWidth, videoHeight, video, canvas;
const state = {
backend: "wasm",
};
const gui = new dat.GUI();
gui
.add(state, "backend", ["wasm", "webgl", "cpu"])
.onChange(async (backend) => {
await tf.setBackend(backend);
addFlagLables();
});
async function addFlagLables() {
if (!document.querySelector("#simd_supported")) {
const simdSupportLabel = document.createElement("div");
simdSupportLabel.id = "simd_supported";
simdSupportLabel.style = "font-weight: bold";
const simdSupported = await tf.env().getAsync("WASM_HAS_SIMD_SUPPORT");
simdSupportLabel.innerHTML = `SIMD supported: <span class=${simdSupported}>${simdSupported}<span>`;
document.querySelector("#description").appendChild(simdSupportLabel);
}
if (!document.querySelector("#threads_supported")) {
const threadSupportLabel = document.createElement("div");
threadSupportLabel.id = "threads_supported";
threadSupportLabel.style = "font-weight: bold";
const threadsSupported = await tf
.env()
.getAsync("WASM_HAS_MULTITHREAD_SUPPORT");
threadSupportLabel.innerHTML = `Threads supported: <span class=${threadsSupported}>${threadsSupported}</span>`;
document.querySelector("#description").appendChild(threadSupportLabel);
}
}
async function setupCamera() {
video = document.getElementById("video");
const stream = await navigator.mediaDevices.getUserMedia({
audio: false,
video: { facingMode: "user" },
});
video.srcObject = stream;
@ -333,7 +344,11 @@ const renderPrediction = async () => {
const flipHorizontal = true;
const annotateBoxes = true;
const predictions = await model.estimateFaces(
video,
returnTensors,
flipHorizontal,
annotateBoxes,
);
if (predictions.length > 0) {
ctx.clearRect(0, 0, canvas.width, canvas.height);
@ -382,10 +397,10 @@ const setupPage = async () => {
video.width = videoWidth;
video.height = videoHeight;
canvas = document.getElementById("output");
canvas.width = videoWidth;
canvas.height = videoHeight;
ctx = canvas.getContext("2d");
ctx.fillStyle = "rgba(255, 0, 0, 0.5)";
model = await blazeface.load();


@ -11,6 +11,8 @@ summary: |
subjects: [ai]
languages: [python]
levels: [beginner]
aliases:
- /guides/use-case/nlp/text-classification/
params:
time: 20 minutes
---
@ -30,8 +32,8 @@ analysis model based on a predefined dataset.
## Prerequisites
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Get the sample application
@ -71,7 +73,7 @@ The source code for the text classification application is in the `Docker-NLP/03
from sklearn.model_selection import train_test_split
import ssl
```
- `nltk`: A popular Python library for natural language processing (NLP).
- `SentimentIntensityAnalyzer`: A component of `nltk` for sentiment analysis.
- `accuracy_score`, `classification_report`: Functions from scikit-learn for
@ -91,7 +93,7 @@ The source code for the text classification application is in the `Docker-NLP/03
else:
ssl._create_default_https_context = _create_unverified_https_context
```
This block is a workaround for certain environments where downloading data
through NLTK might fail due to SSL certificate verification issues. It's
telling Python to ignore SSL certificate verification for HTTPS requests.
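To make the later evaluation step concrete, `accuracy_score` from scikit-learn can be approximated in a few lines of stdlib Python — an illustrative stand-in, not the sample's code:

```python
def accuracy_score(y_true, y_pred):
    # Fraction of predictions matching the ground-truth labels,
    # mirroring sklearn.metrics.accuracy_score for flat label lists.
    matches = sum(t == p for t, p in zip(y_true, y_pred))
    return matches / len(y_true)

# Labels follow the sample's convention: 0 = positive, 1 = negative.
print(accuracy_score([0, 1, 0, 1], [0, 1, 1, 1]))  # 0.75
```

The real helper additionally handles NumPy arrays and sample weights, which is why the sample imports it rather than re-implementing it.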
@ -101,10 +103,9 @@ The source code for the text classification application is in the `Docker-NLP/03
```python
nltk.download('vader_lexicon')
```
The `vader_lexicon` is a lexicon used by the `SentimentIntensityAnalyzer` for
sentiment analysis.
4. Define text for testing and corresponding labels.
@ -165,55 +166,56 @@ The source code for the text classification application is in the `Docker-NLP/03
10. Create an infinite loop for continuous input.
```python
while True:
input_text = input("Enter the text for classification (type 'exit' to end): ")
if input_text.lower() == 'exit':
print("Exiting...")
break
```
This while loop runs indefinitely until it's explicitly broken. It lets the
    user continuously enter text for classification until they decide to
exit.
11. Analyze the text.
```python
input_text_score = sia.polarity_scores(input_text)["compound"]
input_text_classification = 0 if input_text_score > threshold else 1
```
12. Print the VADER Classification Report and the sentiment analysis.
```python
print(f"Accuracy: {accuracy:.2f}")
print("\nVADER Classification Report:")
print(report_vader)
print(f"\nTest Text (Positive): '{input_text}'")
print(f"Predicted Sentiment: {'Positive' if input_text_classification == 0 else 'Negative'}")
```
13. Create `requirements.txt`. The sample application already contains the
`requirements.txt` file to specify the necessary packages that the
application imports. Open `requirements.txt` in a code or text editor to
explore its contents.
```text
# 01 sentiment_analysis
nltk==3.6.5
...
# 03 text_classification
scikit-learn==1.3.2
...
```
Both the `nltk` and `scikit-learn` modules are required for the text
classification application.
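The score-thresholding in step 11 is easy to exercise in isolation. A minimal sketch, assuming a VADER-style compound score in [-1, 1]; the `0.2` threshold here is illustrative, not necessarily the value the sample uses:

```python
def classify(compound, threshold=0.2):
    # Map a compound sentiment score to the sample's label convention:
    # 0 = positive, 1 = negative.
    return 0 if compound > threshold else 1

print(classify(0.85))  # 0 (positive)
print(classify(-0.6))  # 1 (negative)
```

Scores exactly at the threshold fall on the negative side, because the comparison is strictly greater-than.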
## Explore the application environment
@ -316,15 +318,14 @@ The following steps explain each part of the `Dockerfile`. For more details, see
ENTRYPOINT ["/app/entrypoint.sh"]
```
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
You can explore the `entrypoint.sh` script by opening it in a code or text
editor. As the sample contains several applications, the script lets you
specify which application to run when the container starts.
## Run the application
To run the application using Docker:
@ -373,12 +374,12 @@ To run the application using Docker:
- `docker run`: This is the primary command used to run a new container from
a Docker image.
- `-it`: This is a combination of two options:
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `basic-nlp`: This specifies the name of the Docker image to use for
creating the container. In this case, it's the image named `basic-nlp` that
you created with the `docker build` command.
@ -431,11 +432,11 @@ the application using Docker.
Related information:
- [Docker CLI reference](/reference/cli/docker/)
- [Dockerfile reference](/reference/dockerfile/)
- [Natural Language Toolkit](https://www.nltk.org/)
- [Python documentation](https://docs.python.org/3/)
- [scikit-learn](https://scikit-learn.org/)
## Next steps


@ -11,6 +11,8 @@ summary: |
subjects: [ai]
languages: [python]
levels: [beginner]
aliases:
- /guides/use-case/nlp/text-summarization/
params:
time: 20 minutes
---
@ -30,8 +32,8 @@ cluster's centroids.
## Prerequisites
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Get the sample application
@ -67,7 +69,7 @@ The source code for the text summarization application is in the `Docker-NLP/04_
```python
from summarizer import Summarizer
```
This line of code imports the `Summarizer` class from the `summarizer`
package, essential for your text summarization application. The summarizer
module implements the Bert Extractive Summarizer, leveraging the HuggingFace
@ -108,7 +110,6 @@ The source code for the text summarization application is in the `Docker-NLP/04_
input, ensuring interactivity. The loop breaks when you type `exit`, allowing
you to control the application flow effectively.
4. Create an instance of Summarizer.
```python
@ -252,10 +253,10 @@ The following steps explain each part of the `Dockerfile`. For more details, see
ENTRYPOINT ["/app/entrypoint.sh"]
```
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
You can explore the `entrypoint.sh` script by opening it in a code or text
editor. As the sample contains several applications, the script lets you
specify which application to run when the container starts.
@ -308,12 +309,12 @@ To run the application using Docker:
- `docker run`: This is the primary command used to run a new container from
a Docker image.
- `-it`: This is a combination of two options:
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `basic-nlp`: This specifies the name of the Docker image to use for
creating the container. In this case, it's the image named `basic-nlp` that
you created with the `docker build` command.
@ -338,7 +339,7 @@ To run the application using Docker:
3. Test the application.
Enter some text to get the text summarization.
```console
Enter the text for summarization (type 'exit' to end): Artificial intelligence (AI) is a branch of computer science that aims to create machines capable of intelligent behavior. These machines are designed to mimic human cognitive functions such as learning, problem-solving, and decision-making. AI technologies can be classified into two main types: narrow or weak AI, which is designed for a particular task, and general or strong AI, which possesses the ability to understand, learn, and apply knowledge across various domains. One of the most popular approaches in AI is machine learning, where algorithms are trained on large datasets to recognize patterns and make predictions.
@ -354,11 +355,11 @@ using Docker.
Related information:
- [Docker CLI reference](/reference/cli/docker/)
- [Dockerfile reference](/reference/dockerfile/)
- [Bert Extractive Summarizer](https://github.com/dmmiller612/bert-extractive-summarizer)
- [PyTorch](https://pytorch.org/)
- [Python documentation](https://docs.python.org/3/)
## Next steps