Merge pull request #20999 from dvdksn/guides-filters

site: use filter-based nav for guides
David Karlsson 2024-10-04 08:22:30 +02:00 committed by GitHub
commit 52a41bf3b6
179 changed files with 2784 additions and 2325 deletions


@ -51,7 +51,3 @@ Docker recommends watching the video workshop from DockerCon 2022. Watch the ent
If you'd like to see how containers are built from scratch, Liz Rice from Aqua Security has a fantastic talk in which she creates a container from scratch in Go. While the talk does not go into networking, using images for the filesystem, and other advanced topics, it gives a deep dive into how things are working.
<iframe src="https://www.youtube-nocookie.com/embed/8fi7uSYlOdc" style="max-width: 100%; aspect-ratio: 16 / 9;" width="560" height="auto" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Language-specific guides
If you are looking for information on how to containerize an application using your favorite language, see the [Language-specific guides](/guides/language/_index.md).


@ -4,42 +4,9 @@ keywords: Docker guides
description: Explore the Docker guides
params:
icon: developer_guide
notoc: true
dive-deeper:
- title: Language-specific guides
description: Learn how to containerize, develop, and test language-specific apps using Docker.
link: /language/
icon: code
- title: Use-case guides
description: Walk through practical Docker applications for specific scenarios.
link: /guides/use-case/
icon: task
- title: Deployment and Orchestration
description: Deploy and manage Docker containers at scale.
link: /guides/deployment-orchestration/orchestration/
icon: workspaces
resources:
- title: Educational resources
description: Explore diverse Docker training and hands-on experiences.
link: /guides/resources/
icon: book
- title: Contribute to Docker's docs
description: Learn how to help contribute to Docker docs.
link: /contribute/
icon: edit
layout: landing
aliases:
- /learning-paths/
---
This section contains more advanced guides to help you learn how Docker can optimize your development workflows.
## Advancing with Docker
Explore more advanced concepts and scenarios in Docker.
{{< grid items="dive-deeper" >}}
## Educational resources and contributions
Discover community-driven resources and learn how to contribute to Docker docs.
{{< grid items="resources" >}}


@ -1,12 +1,21 @@
---
title: C++ language-specific guide
linkTitle: C++
description: Containerize and develop C++ applications using Docker.
keywords: getting started, c++
summary: |
This guide explains how to containerize C++ applications using Docker,
covering how to build Docker images, manage dependencies, and deploy C++ apps
efficiently in containers.
toc_min: 1
toc_max: 2
aliases:
- /language/cpp/
- /guides/language/cpp/
languages: [cpp]
levels: [beginner]
params:
time: 10 minutes
---
The C++ getting started guide teaches you how to create a containerized C++ application using Docker. In this guide, you'll learn how to:
@ -15,13 +24,11 @@ The C++ getting started guide teaches you how to create a containerized C++ appl
>
> Docker would like to thank [Pradumna Saraf](https://twitter.com/pradumna_saraf) for his contribution to this guide.
- Containerize and run a C++ application
- Set up a local environment to develop a C++ application using containers
- Configure a CI/CD pipeline for a containerized C++ application using GitHub Actions
- Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the C++ getting started modules, you should be able to containerize your own C++ application based on the examples and instructions provided in this guide.
Start by containerizing an existing C++ application.
{{< button text="Containerize a C++ app" url="containerize.md" >}}


@ -5,7 +5,8 @@ weight: 40
keywords: ci/cd, github actions, c++, shiny
description: Learn how to configure CI/CD using GitHub Actions for your C++ application.
aliases:
- /language/cpp/configure-ci-cd/
- /guides/language/cpp/configure-ci-cd/
---
## Prerequisites
@ -69,27 +70,24 @@ to Docker Hub.
```yaml
name: ci

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
@ -123,11 +121,10 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your C++ application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.
{{< button text="Test your deployment" url="./deploy.md" >}}


@ -5,12 +5,13 @@ weight: 10
keywords: C++, containerize, initialize
description: Learn how to containerize a C++ application.
aliases:
- /language/cpp/containerize/
- /guides/language/cpp/containerize/
---
## Prerequisites
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Overview
@ -38,9 +39,10 @@ directory.
```
To learn more about the files in the repository, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yml](/reference/compose-file/_index.md)
## Run the application
@ -67,7 +69,6 @@ $ docker compose up --build -d
Open a browser and view the application at [http://localhost:8080](http://localhost:8080).
In the terminal, run the following command to stop the application.
```console
@ -83,11 +84,10 @@ In this section, you learned how you can containerize and run your C++
application using Docker.
Related information:
- [Docker Compose overview](/manuals/compose/_index.md)
## Next steps
In the next section, you'll learn how you can develop your application using
containers.
{{< button text="Develop your application" url="develop.md" >}}


@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, c++
description: Learn how to develop locally using Kubernetes
aliases:
- /language/cpp/deploy/
- /guides/language/cpp/deploy/
---
## Prerequisites
@ -42,9 +43,9 @@ spec:
service: ok-api
spec:
containers:
- name: ok-api-service
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -56,21 +57,21 @@ spec:
selector:
service: ok-api
ports:
- port: 8080
targetPort: 8080
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The
container is created from the image built by GitHub Actions in [Configure CI/CD for
your C++ application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -133,9 +134,10 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
## Summary
In this section, you learned how to use Docker Desktop to deploy your C++ application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)


@ -5,7 +5,8 @@ weight: 20
keywords: C++, local, development
description: Learn how to develop your C++ application locally.
aliases:
- /language/cpp/develop/
- /guides/language/cpp/develop/
---
## Prerequisites
@ -66,12 +67,11 @@ Press `ctrl+c` in the terminal to stop your application.
In this section, you also learned how to use Compose Watch to automatically rebuild and run your container when you update your code.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps
In the next section, you'll take a look at how to set up a CI/CD pipeline using GitHub Actions.
{{< button text="Configure CI/CD" url="configure-ci-cd.md" >}}


@ -2,6 +2,16 @@
description: Learn how to run, connect to, and persist data in a local containerized database.
keywords: database, mysql
title: Use containerized databases
summary: |
Learn how to effectively run and manage databases using Docker containers,
with guides on setup, data persistence, networking, and best practices to
streamline your development and deployment processes.
levels: [beginner]
subjects: [databases]
aliases:
- /guides/use-case/databases/
params:
time: 20 minutes
---
Using a local containerized database offers flexibility and ease of setup,
@ -61,7 +71,7 @@ In this command:
- `mysql:latest` specifies that you want to use the latest version of the MySQL
image.
To verify that your container is running, run `docker ps` in a terminal.
{{< /tab >}}
{{< tab name="GUI" >}}
@ -75,11 +85,12 @@ To run a container using the GUI:
The **Run a new container** modal appears.
4. Expand **Optional settings**.
5. In the optional settings, specify the following:
- **Container name**: `my-mysql`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
![The optional settings screen with the options specified.](images/databases-1.webp)
6. Select `Run`.
@ -178,7 +189,6 @@ guide. To stop and remove a container, either:
- Or, in the Docker Dashboard, select the **Delete** icon next to your
container in the **Containers** view.
Next, you can use either the Docker Desktop GUI or CLI to run the container with
the port mapped.
@ -222,9 +232,9 @@ To run a container using the GUI:
- **Container name**: `my-mysql`
- **Host port** for the **3306/tcp** port: `3307`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
![The optional settings screen with the options specified.](images/databases-2.webp)
6. Select `Run`.
@ -323,7 +333,7 @@ To run a database container with a volume attached, and then verify that the
data persists:
1. Run the container and attach the volume.
```console
$ docker run --name my-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=mydb -v my-db-volume:/var/lib/mysql -d mysql:latest
```
@ -332,11 +342,11 @@ data persists:
2. Create some data in the database. Use the `docker exec` command to run
`mysql` inside the container and create a table.
```console
$ docker exec my-mysql mysql -u root -pmy-secret-pw -e "CREATE TABLE IF NOT EXISTS mydb.mytable (column_name VARCHAR(255)); INSERT INTO mydb.mytable (column_name) VALUES ('value');"
```
This command uses the `mysql` tool in the container to create a table named
`mytable` with a column named `column_name`, and finally inserts a value of
`value`.
@ -355,6 +365,7 @@ data persists:
```console
$ docker run --name my-mysql -v my-db-volume:/var/lib/mysql -d mysql:latest
```
5. Verify that the table you created still exists. Use the `docker exec` command
again to run `mysql` inside the container.
@ -366,6 +377,7 @@ data persists:
records from the `mytable` table.
You should see output like the following.
```console
column_name
value
@ -378,32 +390,35 @@ To run a database container with a volume attached, and then verify that the
data persists:
1. Run a container with a volume attached.
1. In the Docker Dashboard, select the global search at the top of the window.
2. Specify `mysql` in the search box, and select the **Images** tab if not
already selected.
3. Hover over the **mysql** image and select **Run**.
The **Run a new container** modal appears.
4. Expand **Optional settings**.
5. In the optional settings, specify the following:
- **Container name**: `my-mysql`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
- **Volumes**:
- `my-db-volume`:`/var/lib/mysql`
![The optional settings screen with the options specified.](images/databases-3.webp)
Here, the name of the volume is `my-db-volume` and it is mounted in the
container at `/var/lib/mysql`.
6. Select `Run`.
2. Create some data in the database.
1. In the **Containers** view, next to your container select the **Show
container actions** icon, and then select **Open in terminal**.
2. Run the following command in the container's terminal to add a table.
```console
# mysql -u root -pmy-secret-pw -e "CREATE TABLE IF NOT EXISTS mydb.mytable (column_name VARCHAR(255)); INSERT INTO mydb.mytable (column_name) VALUES ('value');"
```
@ -412,35 +427,37 @@ data persists:
named `mytable` with a column named `column_name`, and finally inserts a
value of `value`.
3. In the **Containers** view, select the **Delete** icon next to your
container, and then select **Delete forever**. Without a volume, the table
you created would be lost when deleting the container.
4. Run a container with a volume attached.
1. In the Docker Dashboard, select the global search at the top of the window.
2. Specify `mysql` in the search box, and select the **Images** tab if not
already selected.
3. Hover over the **mysql** image and select **Run**.
The **Run a new container** modal appears.
4. Expand **Optional settings**.
5. In the optional settings, specify the following:
- **Container name**: `my-mysql`
- **Environment variables**:
- `MYSQL_ROOT_PASSWORD`:`my-secret-pw`
- `MYSQL_DATABASE`:`mydb`
- **Volumes**:
- `my-db-volume`:`/var/lib/mysql`
![The optional settings screen with the options specified.](images/databases-3.webp)
6. Select `Run`.
5. Verify that the table you created still exists.
1. In the **Containers** view, next to your container select the **Show
container actions** icon, and then select **Open in terminal**.
2. Run the following command in the container's terminal to verify that table
you created still exists.
```console
# mysql -u root -pmy-secret-pw -e "SELECT * FROM mydb.mytable;"
```
@ -448,7 +465,6 @@ data persists:
This command uses the `mysql` tool in the container to select all the
records from the `mytable` table.
You should see output like the following.
```console
@ -481,6 +497,7 @@ guide. To stop and remove a container, either:
To build and run your custom image:
1. Create a Dockerfile.
1. Create a file named `Dockerfile` in your project directory. For this
example, you can create the `Dockerfile` in an empty directory of your
choice. This file will define how to build your custom MySQL image.
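The diff elides the Dockerfile contents here. A minimal sketch consistent with the steps that follow (copying the `scripts` directory into MySQL's init directory) could be:

```dockerfile
# Start from the official MySQL image (assumed base for this example)
FROM mysql:latest

# Any .sql files in this directory are executed when the container
# initializes a fresh database
COPY ./scripts/ /docker-entrypoint-initdb.d/
```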
@ -513,13 +530,13 @@ To build and run your custom image:
`scripts`, and then create a file named `create_table.sql` with the
following content.
```text
CREATE TABLE IF NOT EXISTS mydb.myothertable (
    column_name VARCHAR(255)
);

INSERT INTO mydb.myothertable (column_name) VALUES ('other_value');
```
You should now have the following directory structure.
@ -531,6 +548,7 @@ To build and run your custom image:
```
2. Build your image.
1. In a terminal, change directory to the directory where your `Dockerfile`
is located.
2. Run the following command to build the image.
@ -538,6 +556,7 @@ To build and run your custom image:
```console
$ docker build -t my-custom-mysql .
```
In this command, `-t my-custom-mysql` tags (names) your new image as
`my-custom-mysql`. The period (.) at the end of the command specifies the
current directory as the context for the build, where Docker looks for the
@ -574,6 +593,7 @@ To build and run your custom image:
```
You should see output like the following.
```console
column_name
other_value
@ -592,10 +612,11 @@ you'll create a Compose file and use it to run a MySQL database container and a
To run your containers with Docker Compose:
1. Create a Docker Compose file.
1. Create a file named `compose.yaml` in your project directory. This file
will define the services, networks, and volumes.
2. Add the following content to the `compose.yaml` file.
```yaml
services:
db:
@ -635,7 +656,7 @@ To run your containers with Docker Compose:
allowing you to connect to the database from your host machine.
- `volumes` mounts `my-db-volume` to `/var/lib/mysql` inside the container
to persist database data.
In addition to the database service, there is a phpMyAdmin service. By
default Compose sets up a single network for your app. Each container for
a service joins the default network and is both reachable by other
@ -644,13 +665,15 @@ To run your containers with Docker Compose:
service name, `db`, in order to connect to the database service. For more details about Compose, see the [Compose file reference](/reference/compose-file/).
2. Run Docker Compose.
1. Open a terminal and change directory to the directory where your
`compose.yaml` file is located.
2. Run Docker Compose using the following command.
```console
$ docker compose up
```
You can now access phpMyAdmin at
[http://localhost:8080](http://localhost:8080) and connect to your
database using `root` as the username and `my-secret-pw` as the password.
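The hunks above show the file only in fragments. A sketch of a complete `compose.yaml` matching the description (the phpMyAdmin image name and its `PMA_HOST` variable are assumptions, since those lines are elided):

```yaml
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw
      MYSQL_DATABASE: mydb
    ports:
      - 3307:3306              # host port 3307 maps to MySQL's 3306
    volumes:
      - my-db-volume:/var/lib/mysql

  phpmyadmin:
    image: phpmyadmin:latest   # assumed image name
    ports:
      - 8080:80
    environment:
      PMA_HOST: db             # reach the database by its service name
    depends_on:
      - db

volumes:
  my-db-volume:
```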


@ -1,7 +0,0 @@
---
title: Deployment and orchestration
weight: 30
build:
render: never
---


@ -0,0 +1,61 @@
---
title: "Docker Build Cloud: Reclaim your time with fast, multi-architecture builds"
linkTitle: Docker Build Cloud
description: |
Learn how to build and deploy Docker images to the cloud with Docker Build
Cloud.
summary: |
Create applications up to 39x faster using cloud-based resources, shared team
cache, and native multi-architecture support.
levels: [beginner]
products: [dbc]
aliases:
- /learning-paths/docker-build-cloud/
params:
featured: true
image: images/learning-paths/build-cloud.png
time: 10 minutes
resource_links:
- title: Product page
url: https://www.docker.com/products/build-cloud/
- title: Docker Build Cloud overview
url: /build-cloud/
- title: Subscriptions and features
url: /subscription/build-cloud/build-details/
- title: Using Docker Build Cloud
url: /build-cloud/usage/
---
<!-- vale Vale.Spelling = NO -->
98% of developers spend up to an hour every day waiting for builds to finish
([Incredibuild: 2022 Big Dev Build Times](https://www.incredibuild.com/survey-report-2022)).
Heavy, complex builds can become a major roadblock for development teams,
slowing down both local development and CI/CD pipelines.
<!-- vale Vale.Spelling = YES -->
Docker Build Cloud speeds up image build times to improve developer
productivity, reduce frustrations, and help you shorten the release cycle.
## Who's this for?
- Anyone who wants to tackle common causes of slow image builds: limited local
resources, slow emulation, and lack of build collaboration across a team.
- Developers working on older machines who want to build faster.
- Development teams working on the same repository who want to cut wait times
with a shared cache.
- Developers performing multi-architecture builds who don't want to spend hours
configuring and rebuilding for emulators.
## What you'll learn
- Building container images faster locally and in CI
- Accelerating builds for multi-platform images
- Reusing pre-built images to expedite workflows
## Tools integration
Works well with Docker Compose, GitHub Actions, and other CI solutions
<div id="dbc-lp-survey-anchor"></div>


@ -0,0 +1,22 @@
---
title: "Demo: Using Docker Build Cloud in CI"
description: Learn how to use Docker Build Cloud to build your app faster in CI.
weight: 30
---
Docker Build Cloud can significantly decrease the time your CI builds take to
run, saving you time and money.
Since the builds run remotely, your CI runner can still use the Docker tooling CLI
without needing elevated permissions, making your builds more secure by default.
In this demo, you will see:
- How to integrate Docker Build Cloud into a variety of CI platforms
- How to use Docker Build Cloud in GitHub Actions to build multi-architecture images
- Speed differences between a workflow using Docker Build Cloud and a workflow running natively
- How to use Docker Build Cloud in a GitLab Pipeline
{{< youtube-embed "wvLdInoVBGg" >}}
<div id="dbc-lp-survey-anchor"></div>


@ -0,0 +1,69 @@
---
title: Common challenges and questions
description: Explore common challenges and questions related to Docker Build Cloud.
weight: 40
---
### Is Docker Build Cloud a standalone product or a part of Docker Desktop?
Docker Build Cloud is a service that can be used both with Docker Desktop and
standalone. It lets you build your container images faster, both locally and in
CI, with builds running on cloud infrastructure. The service uses a remote
build cache, ensuring fast builds anywhere and for all team members.
When used with Docker Desktop, the [Builds view](/desktop/use-desktop/builds/)
works with Docker Build Cloud out-of-the-box. It shows information about your
builds and those initiated by your team members using the same builder,
enabling collaborative troubleshooting.
To use Docker Build Cloud without Docker Desktop, you must
[download and install](/build-cloud/setup/#use-docker-build-cloud-without-docker-desktop)
a version of Buildx with support for Docker Build Cloud (the `cloud` driver).
If you plan on building with Docker Build Cloud using the `docker compose
build` command, you also need a version of Docker Compose that supports Docker
Build Cloud.
### How does Docker Build Cloud work with Docker Compose?
Docker Compose works out of the box with Docker Build Cloud. Install the Docker
Build Cloud-compatible client (Buildx), and both `docker build` and `docker
compose build` will use your cloud builder.
### How many minutes are included in Docker Build Cloud Team plans?
You receive 200 minutes per month per purchased seat. If you are also a Docker
subscriber (Personal, Pro, Team, Business), you will also receive your included
build minutes from that plan.
For example, if a Docker Team customer purchases 5 Build Cloud Team seats, they
will have 400 minutes from their Docker Team plan plus 1000 minutes (200 min/mo * 5 seats)
for a total of 1400 minutes per month.
### I'm a Docker personal user. Can I try Docker Build Cloud?
Docker subscribers (Pro, Team, Business) receive a set number of minutes each
month, shared across the account, to use Build Cloud.
If you do not have a Docker subscription, you may sign up for a free Personal
account and get 50 minutes per month. Personal accounts are limited to a single
user.
For teams to receive the shared cache benefit, they must be on a Docker Team,
Docker Business, or paid Build Cloud Team plan. You can buy a month of Build
Cloud Team for the number of seats you want to test.
### Does Docker Build Cloud support CI platforms? Does it work with GitHub Actions?
Yes, Docker Build Cloud can be used with various CI platforms including GitHub
Actions, CircleCI, Jenkins, and others. It can speed up your build pipelines,
which means less time spent waiting and context switching.
Docker Build Cloud can be used with GitHub Actions to automate your build,
test, and deployment pipeline. Docker provides a set of official GitHub Actions
that you can use in your workflows.
Using GitHub Actions with Docker Build Cloud is straightforward. With a
one-line change in your GitHub Actions configuration, everything else stays the
same. You don't need to create new pipelines. Learn more in the [CI
documentation](/build-cloud/ci/) for Docker Build Cloud.
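As a sketch, the one-line change typically means pointing the Buildx setup step at your cloud builder; the organization and builder name below are placeholders:

```yaml
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
  with:
    driver: cloud                    # run builds on Docker Build Cloud
    endpoint: "<ORG>/<BUILDER_NAME>" # placeholder for your cloud builder
```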
<div id="dbc-lp-survey-anchor"></div>


@ -0,0 +1,18 @@
---
title: "Demo: set up and use Docker Build Cloud in development"
description: Learn how to use Docker Build Cloud for local builds.
weight: 20
---
With Docker Build Cloud, you can easily shift the build workload from local machines
to the cloud, helping you achieve faster build times, especially for multi-platform builds.
In this demo, you'll see:
- How to set up the builder locally
- How to use Docker Build Cloud with Docker Compose
- How the image cache speeds up builds for others on your team
{{< youtube-embed "oPGq2AP5OtQ" >}}
<div id="dbc-lp-survey-anchor"></div>


@ -0,0 +1,27 @@
---
title: Why Docker Build Cloud?
description: Learn how Docker Build Cloud makes your builds faster.
weight: 10
---
Docker Build Cloud is a service that lets you build container images faster,
both locally and in CI. Builds run on cloud infrastructure optimally
dimensioned for your workloads, with no configuration required. The service
uses a remote build cache, ensuring fast builds anywhere and for all team
members.
Docker Build Cloud provides several benefits over local builds:
- Improved build speed
- Shared build cache
- Native multi-platform builds
There's no need to worry about managing builders or infrastructure — simply
connect to your builders and start building. Each cloud builder provisioned to
an organization is completely isolated to a single Amazon EC2 instance, with a
dedicated EBS volume for build cache and encryption in transit. That means
there are no shared processes or data between cloud builders.
{{< youtube-embed "8AqKhEO2PQA" >}}
<div id="dbc-lp-survey-anchor"></div>


@ -0,0 +1,62 @@
---
title: Defining and running multi-container applications with Docker Compose
linkTitle: Docker Compose
summary: Simplify the process of defining, configuring, and running multi-container Docker applications to enable efficient development, testing, and deployment.
description: Learn how to use Docker Compose to define and run multi-container Docker applications.
levels: [beginner]
products: [compose]
aliases:
- /learning-paths/docker-compose/
params:
featured: true
image: images/learning-paths/compose.png
time: 10 minutes
resource_links:
- title: Overview of Docker Compose CLI
url: /compose/reference/
- title: Overview of Docker Compose
url: /compose/
- title: How Compose works
url: /compose/intro/compose-application-model/
- title: Using profiles with Compose
url: /compose/how-tos/profiles/
- title: Control startup and shutdown order with Compose
url: /compose/how-tos/startup-order/
- title: Compose Build Specification
url: /compose/compose-file/build/
---
Developers face challenges with multi-container Docker applications, including
complex configuration, dependency management, and maintaining consistent
environments. Networking, resource allocation, data persistence, logging, and
monitoring add to the difficulty. Security concerns and troubleshooting issues
further complicate the process, requiring effective tools and practices for
efficient management.
Docker Compose solves the problem of managing multi-container Docker
applications by providing a simple way to define, configure, and run all the
containers needed for an application using a single YAML file. This approach
helps developers to easily set up, share, and maintain consistent development,
testing, and production environments, ensuring that complex applications can be
deployed with all their dependencies and services properly configured and
orchestrated.
## What you'll learn
- What Docker Compose is and what it does
- How to define services
- Use cases for Docker Compose
- How things would be different without Docker Compose
## Who's this for?
- Developers and DevOps engineers who need to define, manage, and orchestrate
multi-container Docker applications efficiently across multiple environments.
- Development teams that want to increase productivity by streamlining
development workflows and reducing setup time.
## Tools integration
Works well with Docker CLI, CI/CD tools, and container orchestration tools.
<div id="compose-lp-survey-anchor"></div>


@ -0,0 +1,77 @@
---
title: Common challenges and questions
description: Explore common challenges and questions related to Docker Compose.
weight: 30
---
<!-- vale Docker.HeadingLength = NO -->
### Do I need to maintain a separate Compose file for my development, testing, and staging environments?
You don't necessarily need to maintain entirely separate Compose files for your
development, testing, and staging environments. You can define all your
services in a single Compose file (`compose.yaml`). You can use profiles to
group service configurations specific to each environment (`dev`, `test`,
`staging`).
When you need to spin up an environment, you can activate the corresponding
profiles. For example, to set up the development environment:
```console
$ docker compose --profile dev up
```
This command starts only the services associated with the `dev` profile,
leaving the rest inactive.
For more information on using profiles, see [Using profiles with
Compose](/compose/how-tos/profiles/).
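As a sketch of this setup, a single Compose file might assign profiles to services like the following (the service and image names here are illustrative, not from the guide):

```yaml
services:
  app:
    image: myorg/app:latest # hypothetical application image
    profiles: [dev, test] # started for the dev and test environments

  db:
    image: postgres:16
    # No profiles key: this service always starts, whichever profile is active.

  seed-test-data:
    image: myorg/seed:latest # hypothetical helper used only in testing
    profiles: [test]
```

With this file, `docker compose --profile dev up` starts `db` and `app`, while `seed-test-data` stays inactive.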
### How can I enforce the database service to start up before the frontend service?
Docker Compose ensures services start in a specific order by using the
`depends_on` property. This tells Compose to start the database service before
even attempting to launch the frontend service. This is crucial since
applications often rely on databases being ready for connections.
However, `depends_on` only guarantees the order, not that the database is fully
initialized. For a more robust approach, especially if your application relies
on a prepared database (e.g., after migrations), consider [health
checks](/reference/compose-file/services.md#healthcheck). Here, you can
configure the frontend to wait until the database passes its health check
before starting. This ensures the database is not only up but also ready to
handle requests.
For more information on setting the startup order of your services, see
[Control startup and shutdown order in Compose](/compose/how-tos/startup-order/).
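A minimal sketch of the health-check approach, assuming a Postgres database and a `frontend` service (names are illustrative):

```yaml
services:
  frontend:
    build: .
    depends_on:
      db:
        condition: service_healthy # wait until the db healthcheck passes

  db:
    image: postgres:16
    healthcheck:
      test: ["CMD", "pg_isready"] # succeeds once Postgres accepts connections
      interval: 10s
      timeout: 5s
      retries: 5
```

Here `condition: service_healthy` makes Compose hold back the frontend until the database reports healthy, not merely started.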
### Can I use Compose to build a Docker image?
Yes, you can use Docker Compose to build Docker images. Docker Compose is a
tool for defining and running multi-container applications. Even if your
application isn't a multi-container application, Docker Compose can make it
easier to run by defining all the `docker run` options in a file.
To use Compose, you need a `compose.yaml` file. In this file, you can specify
the build context and Dockerfile for each service. When you run the command
`docker compose up --build`, Docker Compose will build the images for each
service and then start the containers.
For more information on building Docker images using Compose, see the [Compose
Build Specification](/compose/compose-file/build/).
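For example, a service can point at a build context and Dockerfile like this (paths and the image tag are illustrative):

```yaml
services:
  server:
    build:
      context: . # directory containing the source and Dockerfile
      dockerfile: Dockerfile
    image: myorg/server:local # hypothetical tag for the built image
    ports:
      - "8080:8080"
```

Running `docker compose up --build` builds this image from the context and then starts the container from it.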
### What is the difference between Docker Compose and Dockerfile?
A Dockerfile provides instructions to build a container image while a Compose
file defines your running containers. Quite often, a Compose file references a
Dockerfile to build an image to use for a particular service.
### What is the difference between the `docker compose up` and `docker compose run` commands?
The `docker compose up` command creates and starts all your services. It's
perfect for launching your development environment or running the entire
application. The `docker compose run` command focuses on individual services.
It starts a specified service along with its dependencies, allowing you to run
tests or perform one-off tasks within that container.
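As an illustration, assuming a Compose file with `server` and `db` services and a hypothetical test command:

```console
$ docker compose up -d
$ docker compose run --rm server dotnet test
```

The first command creates and starts every service in the background; the second starts a fresh, throwaway `server` container (plus its dependencies) just to run the one-off task, and `--rm` removes it afterward.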
<div id="compose-lp-survey-anchor"></div>

View File

@ -0,0 +1,16 @@
---
title: "Demo: set up and use Docker Compose"
description: Learn how to get started with Docker Compose.
weight: 20
---
This Docker Compose demo shows how to orchestrate a multi-container application
environment, streamlining development and deployment processes.
- Compare Docker Compose to the `docker run` command
- Configure a multi-container web app using a Compose file
- Run a multi-container web app using one command
{{< youtube-embed P5RBKmOLPH4 >}}
<div id="compose-lp-survey-anchor"></div>

View File

@ -0,0 +1,22 @@
---
title: Why Docker Compose?
description: Learn how Docker Compose can help you simplify app development.
weight: 10
---
Docker Compose is an essential tool for defining and running multi-container
Docker applications. Docker Compose simplifies the Docker experience, making it
easier for developers to create, manage, and deploy applications by using YAML
files to configure application services.
Docker Compose provides several benefits:
- Lets you define multi-container applications in a single YAML file.
- Ensures consistent environments across development, testing, and production.
- Manages the startup and linking of multiple containers effortlessly.
- Streamlines development workflows and reduces setup time.
- Ensures that each service runs in its own container, avoiding conflicts.
{{< youtube-embed 2EqarOM2V4U >}}
<div id="compose-lp-survey-anchor"></div>

View File

@ -0,0 +1,69 @@
---
title: Securing your software supply chain with Docker Scout
linkTitle: Docker Scout
summary: |
Enhance container security by automating vulnerability detection and
remediation, ensuring compliance, and protecting your development workflow.
description: |
Learn how to use Docker Scout to enhance container security by automating
vulnerability detection and remediation, ensuring compliance, and protecting
your development workflow.
levels: [Beginner]
products: [scout]
aliases:
- /learning-paths/docker-scout/
params:
featured: true
image: images/learning-paths/scout.png
time: 10 minutes
resource_links:
- title: Docker Scout overview
url: /scout/
- title: Docker Scout quickstart
url: /scout/quickstart/
- title: Install Docker Scout
url: /scout/install/
- title: Software Bill of Materials
url: /scout/concepts/sbom/
---
When container images are insecure, significant risks can arise. Around 60% of
organizations have reported experiencing at least one security breach or
vulnerability incident within a year, resulting in operational
disruption.[^CSA] These incidents often result in considerable downtime, with
44% of affected companies experiencing over an hour of downtime per event. The
financial impact is substantial, with the average data breach cost reaching
$4.45 million.[^IBM] This highlights the critical importance of maintaining
robust container security measures.
Docker Scout enhances container security by providing automated vulnerability
detection and remediation, addressing insecure container images, and ensuring
compliance with security standards.
[^CSA]: https://cloudsecurityalliance.org/blog/2023/09/21/2023-global-cloud-threat-report-cloud-attacks-are-lightning-fast
[^IBM]: https://www.ibm.com/reports/data-breach
## What you'll learn
- Define secure software supply chain (SSSC)
- Review SBOMs and how to use them
- Detect and monitor vulnerabilities
## Tools integration
Works well with Docker Desktop, GitHub Actions, Jenkins, Kubernetes, and
other CI solutions.
## Who's this for?
- DevOps engineers who need to integrate automated security checks into CI/CD
pipelines to enhance the security and efficiency of their workflows.
- Developers who want to use Docker Scout to identify and remediate
vulnerabilities early in the development process, ensuring the production of
secure container images.
- Security professionals who must enforce security compliance, conduct
vulnerability assessments, and ensure the overall security of containerized
applications.
<div id="scout-lp-survey-anchor"></div>

View File

@ -0,0 +1,61 @@
---
title: Common challenges and questions
description: Explore common challenges and questions related to Docker Scout.
weight: 30
---
<!-- vale Docker.HeadingLength = NO -->
### How is Docker Scout different from other security tools?
Docker Scout takes a broader approach to container security than third-party
security tools. Third-party tools typically cover only a narrow slice of
application security posture within the software supply chain, and when they
offer remediation guidance at all, it tends to be limited. Their runtime
protection is similarly constrained: some offer no runtime monitoring, and
those that do often enforce key policies only partially. They also provide a
limited scope of policy evaluation for Docker-specific builds. By focusing on
the entire software supply chain, providing actionable guidance, and offering
comprehensive runtime protection with strong policy enforcement, Docker Scout
goes beyond identifying vulnerabilities in your containers. It helps you build
secure applications from the ground up.
### Can I use Docker Scout with external registries other than Docker Hub?
You can use Scout with registries other than Docker Hub. Integrating Docker Scout
with third-party container registries enables Docker Scout to run image
analysis on those repositories so that you can get insights into the
composition of those images even if they aren't hosted on Docker Hub.
The following container registry integrations are available:
- Artifactory
- Amazon Elastic Container Registry
- Azure Container Registry
Learn more about configuring Scout with your registries in [Integrating Docker Scout with third-party registries](/scout/integrations/#container-registries).
### Does Docker Scout CLI come by default with Docker Desktop?
Yes, the Docker Scout CLI plugin comes pre-installed with Docker Desktop.
### Is it possible to run `docker scout` commands on a Linux system without Docker Desktop?
If you run Docker Engine without Docker Desktop, Docker Scout doesn't come
pre-installed, but you can [install it as a standalone binary](/scout/install/).
### How is Docker Scout using an SBOM?
An SBOM, or software bill of materials, is a list of ingredients that make up
software components. [Docker Scout uses SBOMs](/scout/concepts/sbom/) to
determine the components that are used in a Docker image. When you analyze an
image, Docker Scout will either use the SBOM that is attached to the image (as
an attestation), or generate an SBOM on the fly by analyzing the contents of
the image.
The SBOM is cross-referenced with the advisory database to determine if any of
the components in the image have known vulnerabilities.
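For example, you can inspect this process from the Docker Scout CLI (the image name below is illustrative):

```console
$ docker scout sbom myorg/app:latest
$ docker scout cves myorg/app:latest
```

The first command prints the image's SBOM; the second cross-references those components with the advisory database and reports any known CVEs.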
<div id="scout-lp-survey-anchor"></div>

View File

@ -0,0 +1,20 @@
---
title: Docker Scout demo
description: Learn about Docker Scout's powerful features for enhanced supply chain security.
weight: 20
---
Docker Scout has powerful features for enhancing containerized application
security and ensuring a robust software supply chain.
- Define vulnerability remediation
- Discuss why remediation is essential to maintain the security and integrity
of containerized applications
- Discuss common vulnerabilities
- Implement remediation techniques: updating base images, applying patches,
removing unnecessary packages
- Verify and validate remediation efforts using Docker Scout
{{< youtube-embed "TkLwJ0p46W8" >}}
<div id="scout-lp-survey-anchor"></div>

View File

@ -0,0 +1,27 @@
---
title: Why Docker Scout?
description: Learn how Docker Scout can help you secure your supply chain.
weight: 10
---
Organizations face significant challenges from data breaches,
including financial losses, operational disruptions, and long-term damage to
brand reputation and customer trust. Docker Scout addresses critical problems
such as identifying insecure container images, preventing security breaches,
and reducing the risk of operational downtime due to vulnerabilities.
Docker Scout provides several benefits:
- Secure and trusted content
- A system of record for your Software Development Lifecycle (SDLC)
- Continuous security posture improvement
Docker Scout offers automated vulnerability detection and remediation, helping
organizations identify and fix security issues in container images early in the
development process. It also integrates with popular development tools like
Docker Desktop and GitHub Actions, providing seamless security management and
compliance checks within existing workflows.
{{< youtube-embed "-omsQ7Uqyc4" >}}
<div id="scout-lp-survey-anchor"></div>

View File

@ -0,0 +1,28 @@
---
title: .NET language-specific guide
linkTitle: C# (.NET)
description: Containerize and develop .NET apps using Docker
summary: Learn how to containerize .NET applications using Docker, including building, running, and deploying .NET apps in Docker containers, with best practices and step-by-step examples.
keywords: getting started, .net
aliases:
- /language/dotnet/
- /guides/language/dotnet/
languages: [c-sharp]
levels: [beginner]
params:
time: 20 minutes
toc_min: 1
toc_max: 2
---
The .NET getting started guide teaches you how to create a containerized .NET application using Docker. In this guide, you'll learn how to:
- Containerize and run a .NET application
- Set up a local environment to develop a .NET application using containers
- Run tests for a .NET application using containers
- Configure a CI/CD pipeline for a containerized .NET application using GitHub Actions
- Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the .NET getting started modules, you should be able to containerize your own .NET application based on the examples and instructions provided in this guide.
Start by containerizing an existing .NET application.

View File

@ -5,7 +5,8 @@ weight: 40
keywords: .net, CI/CD
description: Learn how to Configure CI/CD for your .NET application
aliases:
- /language/dotnet/configure-ci-cd/
- /language/dotnet/configure-ci-cd/
- /guides/language/dotnet/configure-ci-cd/
---
## Prerequisites
@ -77,33 +78,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and test
- name: Build and test
uses: docker/build-push-action@v6
with:
target: build
load: true
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -138,11 +135,10 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.
{{< button text="Test your deployment" url="./deploy.md" >}}

View File

@ -8,6 +8,7 @@ aliases:
- /language/dotnet/build-images/
- /language/dotnet/run-containers/
- /language/dotnet/containerize/
- /guides/language/dotnet/containerize/
---
## Prerequisites
@ -130,5 +131,3 @@ Related information:
In the next section, you'll learn how you can develop your application using
Docker containers.
{{< button text="Develop your application" url="develop.md" >}}

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, .net, local, development
description: Learn how to deploy your application
aliases:
- /language/dotnet/deploy/
- /language/dotnet/deploy/
- /guides/language/dotnet/deploy/
---
## Prerequisites
@ -52,7 +53,12 @@ spec:
initContainers:
- name: wait-for-db
image: busybox:1.28
command: ['sh', '-c', 'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;']
command:
[
"sh",
"-c",
'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;',
]
containers:
- image: DOCKER_USERNAME/REPO_NAME
name: server
@ -138,14 +144,14 @@ status:
In this Kubernetes YAML file, there are four objects, separated by the `---`. In addition to a Service and Deployment for the database, the other two objects are:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
.NET application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
.NET application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -212,6 +218,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,7 +5,8 @@ weight: 20
keywords: .net, development
description: Learn how to develop your .NET application locally using containers.
aliases:
- /language/dotnet/develop/
- /language/dotnet/develop/
- /guides/language/dotnet/develop/
---
## Prerequisites
@ -15,9 +16,10 @@ Complete [Containerize a .NET application](containerize.md).
## Overview
In this section, you'll learn how to set up a development environment for your containerized application. This includes:
- Adding a local database and persisting data
- Configuring Compose to automatically update your running Compose services as you edit and save your code
- Creating a development container that contains the .NET Core SDK tools and dependencies
- Adding a local database and persisting data
- Configuring Compose to automatically update your running Compose services as you edit and save your code
- Creating a development container that contains the .NET Core SDK tools and dependencies
## Update the application
@ -69,7 +71,6 @@ You should now have the following in your `docker-dotnet-sample` directory.
│ └── README.md
```
## Add a local database and persist data
You can use containers to set up local services, like a database. In this section, you'll update the `compose.yaml` file to define a database service and a volume to persist data.
@ -109,7 +110,7 @@ services:
expose:
- 5432
healthcheck:
test: [ "CMD", "pg_isready" ]
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -233,7 +234,7 @@ Use Compose Watch to automatically update your running Compose services as you e
Open your `compose.yaml` file in an IDE or text editor and then add the Compose Watch instructions. The following is the updated `compose.yaml` file.
```yaml {hl_lines="11-14"}
```yaml {hl_lines="11-14"}
services:
server:
build:
@ -262,7 +263,7 @@ services:
expose:
- 5432
healthcheck:
test: [ "CMD", "pg_isready" ]
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -272,6 +273,7 @@ secrets:
db-password:
file: db/password.txt
```
Run the following command to run your application with Compose Watch.
```console
@ -335,7 +337,7 @@ ENTRYPOINT ["dotnet", "myWebApp.dll"]
The following is the updated `compose.yaml` file.
```yaml {hl_lines="5"}
```yaml {hl_lines="5"}
services:
server:
build:
@ -351,8 +353,8 @@ services:
- action: rebuild
path: .
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
db:
image: postgres
restart: always
@ -367,7 +369,7 @@ services:
expose:
- 5432
healthcheck:
test: [ "CMD", "pg_isready" ]
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -386,12 +388,11 @@ In this section, you took a look at setting up your Compose file to add a local
database and persist data. You also learned how to use Compose Watch to automatically rebuild and run your container when you update your code. And finally, you learned how to create a development container that contains the SDK tools and dependencies needed for development.
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
- [Compose file reference](/reference/compose-file/)
- [Compose file watch](/manuals/compose/how-tos/file-watch.md)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps
In the next section, you'll learn how to run unit tests using Docker.
{{< button text="Run your tests" url="run-tests.md" >}}

View File

@ -5,7 +5,8 @@ weight: 30
keywords: .NET, test
description: Learn how to run your .NET tests in a container.
aliases:
- /language/dotnet/run-tests/
- /language/dotnet/run-tests/
- /guides/language/dotnet/run-tests/
---
## Prerequisites
@ -46,7 +47,7 @@ To run your tests when building, you need to update your Dockerfile. You can cre
The following is the updated Dockerfile.
```dockerfile {hl_lines="9"}
```dockerfile {hl_lines="9"}
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
@ -109,10 +110,9 @@ You should see output containing the following.
In this section, you learned how to run tests when developing locally using Compose and how to run tests when building your image.
Related information:
- [docker compose run](/reference/cli/docker/compose/run/)
- [docker compose run](/reference/cli/docker/compose/run/)
## Next steps
Next, youll learn how to set up a CI/CD pipeline using GitHub Actions.
{{< button text="Configure CI/CD" url="configure-ci-cd.md" >}}

View File

@ -0,0 +1,24 @@
---
title: PDF analysis and chat
description: Containerize generative AI (GenAI) apps using Docker
keywords: python, generative ai, genai, llm, neo4j, ollama, langchain
summary: |
This guide explains how to build a PDF bot using Docker and generative AI,
focusing on setting up a containerized environment for parsing PDF documents
and generating intelligent responses based on the content.
levels: [beginner]
subjects: [ai]
aliases:
- /guides/use-case/genai-pdf-bot/
params:
time: 20 minutes
---
The generative AI (GenAI) guide teaches you how to containerize an existing GenAI application using Docker. In this guide, youll learn how to:
- Containerize and run a Python-based GenAI application
- Set up a local environment to run the complete GenAI stack locally for development
Start by containerizing an existing GenAI application.
{{< button text="Containerize a GenAI app" url="containerize.md" >}}

View File

@ -4,6 +4,8 @@ linkTitle: Containerize your app
weight: 10
keywords: python, generative ai, genai, llm, neo4j, ollama, containerize, initialize, langchain, openai
description: Learn how to containerize a generative AI (GenAI) application.
aliases:
- /guides/use-case/genai-pdf-bot/containerize/
---
## Prerequisites
@ -12,8 +14,8 @@ description: Learn how to containerize a generative AI (GenAI) application.
>
> GenAI applications can often benefit from GPU acceleration. Currently Docker Desktop supports GPU acceleration only on [Windows with the WSL2 backend](/manuals/desktop/gpu.md#using-nvidia-gpus-with-wsl2). Linux users can also access GPU acceleration using a native installation of the [Docker Engine](/manuals/engine/install/_index.md).
* You have installed the latest version of [Docker Desktop](/get-started/get-docker.md) or, if you are a Linux user and are planning to use GPU acceleration, [Docker Engine](/manuals/engine/install/_index.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
* You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md) or, if you are a Linux user and are planning to use GPU acceleration, [Docker Engine](/manuals/engine/install/_index.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
## Overview
@ -91,10 +93,10 @@ directory.
```
To learn more about the files that `docker init` added, see the following:
- [Dockerfile](../../../reference/dockerfile.md)
- [.dockerignore](../../../reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
- [Dockerfile](../../../reference/dockerfile.md)
- [.dockerignore](../../../reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -130,10 +132,9 @@ In this section, you learned how you can containerize and run your GenAI
application using Docker.
Related information:
- [docker init CLI reference](../../../reference/cli/docker/init.md)
- [docker init CLI reference](../../../reference/cli/docker/init.md)
## Next steps
In the next section, you'll learn how you can run your application, database, and LLM service all locally using Docker.
{{< button text="Develop your application" url="develop.md" >}}

View File

@ -4,6 +4,8 @@ linkTitle: Develop your app
weight: 20
keywords: python, local, development, generative ai, genai, llm, neo4j, ollama, langchain, openai
description: Learn how to develop your generative AI (GenAI) application locally.
aliases:
- /guides/use-case/genai-pdf-bot/develop/
---
## Prerequisites
@ -31,6 +33,7 @@ To run the database service:
This file contains the environment variables that the containers will use.
2. In the cloned repository's directory, open the `compose.yaml` file in an IDE or text editor.
3. In the `compose.yaml` file, add the following:
- Add instructions to run a Neo4j database
- Specify the environment file under the server service in order to pass in the environment variables for the connection
@ -67,7 +70,7 @@ To run the database service:
> To learn more about Neo4j, see the [Neo4j Official Docker Image](https://hub.docker.com/_/neo4j).
4. Run the application. Inside the `docker-genai-sample` directory,
run the following command in a terminal.
run the following command in a terminal.
```console
$ docker compose up --build
@ -80,12 +83,14 @@ run the following command in a terminal.
## Add a local or remote LLM service
The sample application supports both [Ollama](https://ollama.ai/) and [OpenAI](https://openai.com/). This guide provides instructions for the following scenarios:
- Run Ollama in a container
- Run Ollama outside of a container
- Use OpenAI
While all platforms can use any of the previous scenarios, the performance and
GPU support may vary. You can use the following guidelines to help you choose the appropriate option:
- Run Ollama in a container if you're on Linux with a native installation of Docker Engine, or on Windows 10/11 with Docker Desktop, you
  have a CUDA-supported GPU, and your system has at least 8 GB of RAM.
- Run Ollama outside of a container if you're on an Apple silicon Mac.
@ -99,6 +104,7 @@ Choose one of the following options for your LLM service.
When running Ollama in a container, you should have a CUDA-supported GPU. While you can run Ollama in a container without a supported GPU, the performance may not be acceptable. Only Linux and Windows 11 support GPU access to containers.
To run Ollama in a container and provide GPU access:
1. Install the prerequisites.
- For Docker Engine on Linux, install the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-container-toolkit).
- For Docker Desktop on Windows 10/11, install the latest [NVIDIA driver](https://www.nvidia.com/Download/index.aspx) and make sure you are using the [WSL2 backend](/manuals/desktop/wsl/_index.md#turn-on-docker-desktop-wsl-2)
@ -125,7 +131,11 @@ To run Ollama in a container and provide GPU access:
environment:
- NEO4J_AUTH=${NEO4J_USERNAME}/${NEO4J_PASSWORD}
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
test:
[
"CMD-SHELL",
"wget --no-verbose --tries=1 --spider localhost:7474 || exit 1",
]
interval: 5s
timeout: 3s
retries: 5
@ -181,6 +191,7 @@ To run Ollama in a container and provide GPU access:
{{< tab name="Run Ollama outside of a container" >}}
To run Ollama outside of a container:
1. [Install](https://github.com/jmorganca/ollama) and run Ollama on your host
machine.
2. Update the `OLLAMA_BASE_URL` value in your `.env` file to
@ -208,6 +219,7 @@ To run Ollama outside of a container:
## Run your GenAI application
At this point, you have the following services in your Compose file:
- Server service for your main GenAI application
- Database service to store vectors in a Neo4j database
- (optional) Ollama service to run the LLM
@ -237,11 +249,12 @@ In this section, you learned how to set up a development environment to provide
access all the services that your GenAI application needs.
Related information:
- [Dockerfile reference](../../../reference/dockerfile.md)
- [Compose file reference](/reference/compose-file/_index.md)
- [Ollama Docker image](https://hub.docker.com/r/ollama/ollama)
- [Neo4j Official Docker Image](https://hub.docker.com/_/neo4j)
- [GenAI Stack demo applications](https://github.com/docker/genai-stack)
- [Dockerfile reference](../../../reference/dockerfile.md)
- [Compose file reference](/reference/compose-file/_index.md)
- [Ollama Docker image](https://hub.docker.com/r/ollama/ollama)
- [Neo4j Official Docker Image](https://hub.docker.com/_/neo4j)
- [GenAI Stack demo applications](https://github.com/docker/genai-stack)
## Next steps

View File


View File

@ -2,7 +2,17 @@
title: GenAI video transcription and chat
linkTitle: Video transcription and chat
description: Explore a generative AI video analysis app that uses Docker, OpenAI, and Pinecone.
keywords: python, generative ai, genai, llm, whisper, pinecone, openai
summary: |
Learn how to build and deploy a generative AI video bot using Docker, with
step-by-step instructions for setup, integration, and optimization to enhance
your AI development projects.
subjects: [ai]
levels: [beginner]
aliases:
- /guides/use-case/genai-video-bot/
params:
time: 20 minutes
---
## Overview
@ -12,6 +22,7 @@ technologies related to the
[GenAI Stack](https://www.docker.com/blog/introducing-a-new-genai-stack/).
The project showcases the following technologies:
- [Docker and Docker Compose](#docker-and-docker-compose)
- [OpenAI](#openai-api)
- [Whisper](#whisper)
@ -34,7 +45,6 @@ The project showcases the following technologies:
>
> OpenAI is a third-party hosted service and [charges](https://openai.com/pricing) may apply.
- You have a [Pinecone API Key](https://app.pinecone.io/).
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
@ -48,10 +58,13 @@ addition, it provides timestamps from the video that can help you find the sourc
1. Clone the sample application's repository. In a terminal, run the following
command.
```console
$ git clone https://github.com/Davidnet/docker-genai.git
```
The project contains the following directories and files:
```text
├── docker-genai/
│ ├── docker-bot/
@@ -80,9 +93,11 @@ addition, it provides timestamps from the video that can help you find the sourc
3. Build and run the application. In a terminal, change directory to your
`docker-genai` directory and run the following command.
```console
$ docker compose up --build
```
Docker Compose builds and runs the application based on the services defined
in the `docker-compose.yaml` file. When the application is running, you'll
see the logs of 2 services in the terminal.
@@ -142,9 +157,9 @@ how to use the service.
The answer to that question exists in the video processed in the previous
example,
[https://www.youtube.com/watch?v=yaQZFhrW0fU](https://www.youtube.com/watch?v=yaQZFhrW0fU).
![Asking a question to the Dockerbot](images/bot.webp)
In this example, the Dockerbot answers the question and
provides links to the video with timestamps, which may contain more
information about the answer.
@@ -164,6 +179,7 @@ how to use the service.
## Explore the application architecture
The following image shows the application's high-level service architecture, which includes:
- yt-whisper: A local service, run by Docker Compose, that interacts with the
remote OpenAI and Pinecone services.
- dockerbot: A local service, run by Docker Compose, that interacts with the
@@ -245,6 +261,6 @@ OpenAI's cookbook for
## Next steps
Explore how to [create a PDF bot application](../genai-pdf-bot/_index.md) using
Explore how to [create a PDF bot application](/guides/genai-pdf-bot/_index.md) using
generative AI, or view more GenAI samples in the
[GenAI Stack](https://github.com/docker/genai-stack) repository.

View File

@@ -3,10 +3,20 @@ title: Go language-specific guide
linkTitle: Go
description: Containerize Go apps using Docker
keywords: docker, getting started, go, golang, language, dockerfile
summary: |
This guide teaches you how to containerize Go applications using Docker,
covering image building, dependency management, multi-stage builds for
smaller images, and best practices for deploying Go apps efficiently in
containers.
toc_min: 1
toc_max: 2
aliases:
- /language/golang/
- /language/golang/
- /guides/language/golang/
languages: [go]
levels: [beginner]
params:
time: 30 minutes
---
This guide will show you how to create, test, and deploy containerized Go applications using Docker.
@@ -19,24 +29,24 @@ This guide will show you how to create, test, and deploy containerized Go applic
In this guide, you'll learn how to:
* Create a `Dockerfile` which contains the instructions for building a container image for a program written in Go.
* Run the image as a container in your local Docker instance and manage the container's lifecycle.
* Use multi-stage builds for building small images efficiently while keeping your Dockerfiles easy to read and maintain.
* Use Docker Compose to orchestrate running of multiple related containers together in a development environment.
* Configure a CI/CD pipeline for your application using [GitHub Actions](https://docs.github.com/en/actions)
* Deploy your containerized Go application.
- Create a `Dockerfile` which contains the instructions for building a container image for a program written in Go.
- Run the image as a container in your local Docker instance and manage the container's lifecycle.
- Use multi-stage builds for building small images efficiently while keeping your Dockerfiles easy to read and maintain.
- Use Docker Compose to orchestrate running of multiple related containers together in a development environment.
- Configure a CI/CD pipeline for your application using [GitHub Actions](https://docs.github.com/en/actions)
- Deploy your containerized Go application.
## Prerequisites
Some basic understanding of Go and its toolchain is assumed. This isn't a Go tutorial. If you are new to the language,
the [Go website](https://golang.org/) is a great place to explore,
so *go* (pun intended) check it out!
Some basic understanding of Go and its toolchain is assumed. This isn't a Go tutorial. If you are new to the language,
the [Go website](https://golang.org/) is a great place to explore,
so _go_ (pun intended) check it out!
You also must know some basic [Docker concepts](/get-started/docker-concepts/the-basics/what-is-a-container.md) as well as to
You also must know some basic [Docker concepts](/get-started/docker-concepts/the-basics/what-is-a-container.md) as well as to
be at least vaguely familiar with the [Dockerfile format](/manuals/build/concepts/dockerfile.md).
Your Docker set-up must have BuildKit enabled. BuildKit is enabled by default for all users on [Docker Desktop](/manuals/desktop/_index.md).
If you have installed Docker Desktop, you don't have to manually enable BuildKit. If you are running Docker on Linux,
Your Docker set-up must have BuildKit enabled. BuildKit is enabled by default for all users on [Docker Desktop](/manuals/desktop/_index.md).
If you have installed Docker Desktop, you don't have to manually enable BuildKit. If you are running Docker on Linux,
please check out BuildKit [getting started](/manuals/build/buildkit/_index.md#getting-started) page.
Some familiarity with the command line is also expected.
@@ -46,5 +56,3 @@ Some familiarity with the command line is also expected.
The aim of this guide is to provide enough examples and instructions for you to containerize your own Go application and deploy it into the Cloud.
Start by building your first Go image.
{{< button text="Build your Go image" url="build-images.md" >}}

View File

@@ -5,8 +5,9 @@ weight: 5
keywords: containers, images, go, golang, dockerfiles, coding, build, push, run
description: Learn how to build your first Docker image by writing a Dockerfile
aliases:
- /get-started/golang/build-images/
- /language/golang/build-images/
- /get-started/golang/build-images/
- /language/golang/build-images/
- /guides/language/golang/build-images/
---
## Overview
@@ -31,8 +32,8 @@ The example application is a caricature of a microservice. It is purposefully tr
The application offers two HTTP endpoints:
* It responds with a string containing a heart symbol (`<3`) to requests to `/`.
* It responds with `{"Status" : "OK"}` JSON to a request to `/health`.
- It responds with a string containing a heart symbol (`<3`) to requests to `/`.
- It responds with `{"Status" : "OK"}` JSON to a request to `/health`.
It responds with HTTP error 404 to any other request.
@@ -50,7 +51,6 @@ $ git clone https://github.com/docker/docker-gs-ping
The application's `main.go` file is straightforward, if you are familiar with Go:
```go
package main
@@ -99,7 +99,7 @@ func IntMin(a, b int) int {
To build a container image with Docker, a `Dockerfile` with build instructions is required.
Begin your `Dockerfile` with the (optional) parser directive line that instructs BuildKit to
Begin your `Dockerfile` with the (optional) parser directive line that instructs BuildKit to
interpret your file according to the grammar rules for the specified version of the syntax.
You then tell Docker what base image you would like to use for your application:
@@ -183,7 +183,7 @@ COPY *.go ./
This `COPY` command uses a wildcard to copy all files with `.go` extension
located in the current directory on the host (the directory where the `Dockerfile`
is located) into the current directory inside the image.
is located) into the current directory inside the image.
Now, to compile your application, use the familiar `RUN` command:
@@ -274,7 +274,7 @@ Build your first Docker image.
$ docker build --tag docker-gs-ping .
```
The build process will print some diagnostic messages as it goes through the build steps.
The build process will print some diagnostic messages as it goes through the build steps.
The following is just an example of what these messages may look like.
```console
@@ -406,7 +406,7 @@ gigabyte, which is a lot for a tiny compiled Go application. You may also be
wondering what happened to the full suite of Go tools, including the compiler,
after you had built your image.
The answer is that the full toolchain is still there, in the container image.
The answer is that the full toolchain is still there, in the container image.
Not only is this inconvenient because of the large file size, but it may also
present a security risk when the container is deployed.
@@ -423,7 +423,6 @@ other optional components.
The `Dockerfile.multistage` in the sample application's repository has the
following content:
```dockerfile
# syntax=docker/dockerfile:1
@@ -457,7 +456,6 @@ USER nonroot:nonroot
ENTRYPOINT ["/docker-gs-ping"]
```
Since you have two Dockerfiles now, you have to tell Docker what Dockerfile
you'd like to use to build the image. Tag the new image with `multistage`. This
tag (like any other, apart from `latest`) has no special meaning for Docker,
@@ -477,10 +475,10 @@ docker-gs-ping multistage e3fdde09f172 About a minute ago 28.1MB
docker-gs-ping latest 336a3f164d0f About an hour ago 1.11GB
```
This is so because the ["distroless"](https://github.com/GoogleContainerTools/distroless)
This is so because the ["distroless"](https://github.com/GoogleContainerTools/distroless)
base image that you have used in the second stage of the build is very barebones and is designed for lean deployments of static binaries.
There's much more to multi-stage builds, including the possibility of multi-architecture builds,
There's much more to multi-stage builds, including the possibility of multi-architecture builds,
so feel free to check out [multi-stage builds](/manuals/build/building/multi-stage.md). This is, however, not essential for your progress here.
## Next steps
@@ -489,5 +487,3 @@ In this module, you met your example application and built and container image
for it.
In the next module, you'll take a look at how to run your image as a container.
{{< button text="Run your image as a container" url="run-containers.md" >}}

View File

@@ -5,7 +5,8 @@ weight: 40
keywords: go, CI/CD, local, development
description: Learn how to Configure CI/CD for your Go application
aliases:
- /language/golang/configure-ci-cd/
- /language/golang/configure-ci-cd/
- /guides/language/golang/configure-ci-cd/
---
## Prerequisites
@@ -69,27 +70,24 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@@ -123,11 +121,10 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.
{{< button text="Test your deployment" url="./deploy.md" >}}

View File

@@ -5,7 +5,8 @@ weight: 50
keywords: deploy, go, local, development
description: Learn how to deploy your Go application
aliases:
- /language/golang/deploy/
- /language/golang/deploy/
- /guides/language/golang/deploy/
---
## Prerequisites
@@ -52,7 +53,12 @@ spec:
initContainers:
- name: wait-for-db
image: busybox:1.28
command: ['sh', '-c', 'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;']
command:
[
"sh",
"-c",
'until nc -zv db 5432; do echo "waiting for db"; sleep 2; done;',
]
containers:
- env:
- name: PGDATABASE
@@ -151,14 +157,14 @@ status:
In this Kubernetes YAML file, there are four objects, separated by the `---`. In addition to a Service and Deployment for the database, the other two objects are:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Go application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Go application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 8080 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@@ -223,7 +229,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
You should get the following message back.
```json
{"value":"Hello, Oliver!"}
{ "value": "Hello, Oliver!" }
```
4. Run the following command to tear down your application.
@@ -237,6 +243,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@@ -5,8 +5,9 @@ weight: 20
keywords: get started, go, golang, local, development
description: Learn how to develop your application locally.
aliases:
- /get-started/golang/develop/
- /language/golang/develop/
- /get-started/golang/develop/
- /language/golang/develop/
- /guides/language/golang/develop/
---
## Prerequisites
@@ -94,7 +95,7 @@ $ docker run -d \
# ... output omitted ...
```
Notice a clever use of the tag `latest-v20.1` to make sure that you're pulling the latest patch version of 20.1. The diversity of available tags depend on the image maintainer. Here, your intent was to have the latest patched version of CockroachDB while not straying too far away from the known working version as the time goes by. To see the tags available for the CockroachDB image, you can go to the [CockroachDB page on Docker Hub](https://hub.docker.com/r/cockroachdb/cockroach/tags).
Notice a clever use of the tag `latest-v20.1` to make sure that you're pulling the latest patch version of 20.1. The diversity of available tags depends on the image maintainer. Here, your intent was to have the latest patched version of CockroachDB while not straying too far away from the known working version as time goes by. To see the tags available for the CockroachDB image, you can go to the [CockroachDB page on Docker Hub](https://hub.docker.com/r/cockroachdb/cockroach/tags).
### Configure the database engine
@@ -123,7 +124,7 @@ $ docker exec -it roach ./cockroach sql --insecure
```
3. Give the new user the necessary permissions:
```sql
GRANT ALL ON DATABASE mydb TO totoro;
```
@@ -132,7 +133,6 @@ $ docker exec -it roach ./cockroach sql --insecure
The following is an example of interaction with the SQL shell.
```console
$ sudo docker exec -it roach ./cockroach sql --insecure
#
@@ -164,15 +164,14 @@ root@:26257/defaultdb> quit
oliver@hki:~$
```
### Meet the example application
Now that you have started and configured the database engine, you can switch your attention to the application.
The example application for this module is an extended version of the `docker-gs-ping` application you've used in the previous modules. You have two options:
* You can update your local copy of `docker-gs-ping` to match the new extended version presented in this chapter; or
* You can clone the [docker/docker-gs-ping-dev](https://github.com/docker/docker-gs-ping-dev) repository. This latter approach is recommended.
- You can update your local copy of `docker-gs-ping` to match the new extended version presented in this chapter; or
- You can clone the [docker/docker-gs-ping-dev](https://github.com/docker/docker-gs-ping-dev) repository. This latter approach is recommended.
To check out the example application, run:
@@ -183,17 +182,17 @@ $ git clone https://github.com/docker/docker-gs-ping-dev.git
The application's `main.go` now includes database initialization code, as well as the code to implement a new business requirement:
* An HTTP `POST` request to `/send` containing a `{ "value" : string }` JSON must save the value to the database.
- An HTTP `POST` request to `/send` containing a `{ "value" : string }` JSON must save the value to the database.
You also have an update for another business requirement. The requirement was:
* The application responds with a text message containing a heart symbol ("`<3`") on requests to `/`.
- The application responds with a text message containing a heart symbol ("`<3`") on requests to `/`.
And now it's going to be:
* The application responds with the string containing the count of messages stored in the database, enclosed in the parentheses.
- The application responds with a string containing the count of messages stored in the database, enclosed in parentheses.
Example output: `Hello, Docker! (7)`
Example output: `Hello, Docker! (7)`
The full source code listing of `main.go` follows.
@@ -375,7 +374,7 @@ $ docker run -it --rm -d \
There are a few points to note about this command.
* You map container port `8080` to host port `80` this time. Thus, for `GET` requests you can get away with literally `curl localhost`:
- You map container port `8080` to host port `80` this time. Thus, for `GET` requests you can get away with literally `curl localhost`:
```console
$ curl localhost
@@ -389,11 +388,11 @@ There are a few points to note about this command.
Hello, Docker! (0)
```
* The total number of stored messages is `0` for now. This is fine, because you haven't posted anything to your application yet.
* You refer to the database container by its hostname, which is `db`. This is why you had `--hostname db` when you started the database container.
- The total number of stored messages is `0` for now. This is fine, because you haven't posted anything to your application yet.
- You refer to the database container by its hostname, which is `db`. This is why you had `--hostname db` when you started the database container.
* The actual password doesn't matter, but it must be set to something to avoid confusing the example application.
* The container you've just run is named `rest-server`. These names are useful for managing the container lifecycle:
- The actual password doesn't matter, but it must be set to something to avoid confusing the example application.
- The container you've just run is named `rest-server`. These names are useful for managing the container lifecycle:
```console
# Don't do this just yet, it's only an example:
@@ -414,7 +413,7 @@ $ curl --request POST \
The application responds with the contents of the message, which means it has been saved in the database:
```json
{"value":"Hello, Docker!"}
{ "value": "Hello, Docker!" }
```
Send another message:
@@ -429,7 +428,7 @@ And again, you get the value of the message back:
And again, you get the value of the message back:
```json
{"value":"Hello, Oliver!"}
{ "value": "Hello, Oliver!" }
```
Run curl and see what the message counter says:
@@ -524,9 +523,8 @@ In this section, you'll create a Docker Compose file to start your `docker-gs-pi
In your application's directory, create a new text file named `docker-compose.yml` with the following content.
```yaml
version: '3.8'
version: "3.8"
services:
docker-gs-ping-roach:
@@ -570,7 +568,6 @@ networks:
driver: bridge
```
This Docker Compose configuration is super convenient as you don't have to type all the parameters to pass to the `docker run` command. You can declaratively do that in the Docker Compose file. The [Docker Compose documentation pages](/manuals/compose/_index.md) are quite extensive and include a full reference for the Docker Compose file format.
### The `.env` file
@@ -587,10 +584,10 @@ The exact value doesn't really matter for this example, because you run Cockroac
The file name `docker-compose.yml` is the default file name that the `docker compose` command recognizes if no `-f` flag is provided. This means you can have multiple Docker Compose files if your environment has such requirements. Furthermore, Docker Compose files are... composable (pun intended), so multiple files can be specified on the command line to merge parts of the configuration together. The following list is just a few examples of scenarios where such a feature would be very useful:
* Using a bind mount for the source code for local development but not when running the CI tests;
* Switching between using a pre-built image for the frontend for some API application vs creating a bind mount for source code;
* Adding additional services for integration testing;
* And many more...
- Using a bind mount for the source code for local development but not when running the CI tests;
- Switching between using a pre-built image for the frontend for some API application vs creating a bind mount for source code;
- Adding additional services for integration testing;
- And many more...
You aren't going to cover any of these advanced use cases here.
@@ -598,8 +595,8 @@ You aren't going to cover any of these advanced use cases here.
One of the really cool features of Docker Compose is [variable substitution](/reference/compose-file/interpolation.md). You can see some examples in the Compose file, `environment` section. By means of an example:
* `PGUSER=${PGUSER:-totoro}` means that inside the container, the environment variable `PGUSER` shall be set to the same value as it has on the host machine where Docker Compose is run. If there is no environment variable with this name on the host machine, the variable inside the container gets the default value of `totoro`.
* `PGPASSWORD=${PGPASSWORD:?database password not set}` means that if the environment variable `PGPASSWORD` isn't set on the host, Docker Compose will display an error. This is OK, because you don't want to hard-code default values for the password. You set the password value in the `.env` file, which is local to your machine. It is always a good idea to add `.env` to `.gitignore` to prevent the secrets being checked into the version control.
- `PGUSER=${PGUSER:-totoro}` means that inside the container, the environment variable `PGUSER` shall be set to the same value as it has on the host machine where Docker Compose is run. If there is no environment variable with this name on the host machine, the variable inside the container gets the default value of `totoro`.
- `PGPASSWORD=${PGPASSWORD:?database password not set}` means that if the environment variable `PGPASSWORD` isn't set on the host, Docker Compose will display an error. This is OK, because you don't want to hard-code default values for the password. You set the password value in the `.env` file, which is local to your machine. It is always a good idea to add `.env` to `.gitignore` to prevent secrets from being checked into version control.
Other ways of dealing with undefined or empty values exist, as documented in the [variable substitution](/reference/compose-file/interpolation.md) section of the Docker documentation.
@@ -724,8 +721,8 @@ Such distributed set-up offers interesting possibilities, such as applying Chaos
If you are interested in experimenting with CockroachDB clusters, check out:
* [Start a CockroachDB Cluster in Docker](https://www.cockroachlabs.com/docs/v20.2/start-a-local-cluster-in-docker-mac.html) article; and
* Documentation for Docker Compose keywords [`deploy`](/reference/compose-file/legacy-versions.md) and [`replicas`](/reference/compose-file/legacy-versions.md).
- [Start a CockroachDB Cluster in Docker](https://www.cockroachlabs.com/docs/v20.2/start-a-local-cluster-in-docker-mac.html) article; and
- Documentation for Docker Compose keywords [`deploy`](/reference/compose-file/legacy-versions.md) and [`replicas`](/reference/compose-file/legacy-versions.md).
### Other databases
@@ -736,5 +733,3 @@ Since you didn't run a cluster of CockroachDB instances, you might be wondering
In this module, you set up a containerized development environment with your application and the database engine running in different containers. You also wrote a Docker Compose file which links the two containers together and provides for easy starting up and tearing down of the development environment.
In the next module, you'll take a look at one possible approach to running functional tests in Docker.
{{< button text="Run your tests" url="run-tests.md" >}}

View File

@@ -5,8 +5,9 @@ weight: 10
keywords: get started, go, golang, run, container
description: Learn how to run the image as a container.
aliases:
- /get-started/golang/run-containers/
- /language/golang/run-containers/
- /get-started/golang/run-containers/
- /language/golang/run-containers/
- /guides/language/golang/run-containers/
---
## Prerequisites
@@ -207,5 +208,3 @@ Now, you can easily identify your container based on the name.
## Next steps
In this module, you learned how to run containers and publish ports. You also learned to manage the lifecycle of containers. You then learned the importance of naming your containers so that they're more easily identifiable. In the next module, you'll learn how to run a database in a container and connect it to your application.
{{< button text="How to develop your application" url="develop.md" >}}

View File

@@ -5,8 +5,9 @@ weight: 30
keywords: build, go, golang, test
description: How to build and run your Go tests in a container
aliases:
- /get-started/golang/run-tests/
- /language/golang/run-tests/
- /get-started/golang/run-tests/
- /language/golang/run-tests/
- /guides/language/golang/run-tests/
---
## Prerequisites
@@ -93,5 +94,3 @@ You should see output containing the following.
In this section, you learned how to run tests when building your image. Next,
you'll learn how to set up a CI/CD pipeline using GitHub Actions.
{{< button text="Configure CI/CD" url="configure-ci-cd.md" >}}

View File

@@ -3,22 +3,30 @@ title: Java language-specific guide
linkTitle: Java
keywords: java, getting started
description: Containerize Java apps using Docker
summary: |
This guide demonstrates how to containerize Java applications using Docker,
covering image building, dependency management, optimizing image size with
multi-stage builds, and best practices for deploying Java apps efficiently in
containers.
toc_min: 1
toc_max: 2
aliases:
- /language/java/
- /language/java/
- /guides/language/java/
languages: [java]
levels: [beginner]
params:
time: 20 minutes
---
The Java getting started guide teaches you how to create a containerized Spring Boot application using Docker. In this module, you'll learn how to:
* Containerize and run a Spring Boot application with Maven
* Set up a local development environment to connect a database to the container, configure a debugger, and use Compose Watch for live reload
* Run your unit tests inside a container
* Configure a CI/CD pipeline for your application using GitHub Actions
* Deploy your containerized application locally to Kubernetes to test and debug your deployment
- Containerize and run a Spring Boot application with Maven
- Set up a local development environment to connect a database to the container, configure a debugger, and use Compose Watch for live reload
- Run your unit tests inside a container
- Configure a CI/CD pipeline for your application using GitHub Actions
- Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the Java getting started modules, you should be able to containerize your own Java application based on the examples and instructions provided in this guide.
Get started containerizing your first Java app.
{{< button text="Containerize your first Java app" url="containerize.md" >}}

View File

@@ -5,7 +5,8 @@ weight: 40
keywords: java, CI/CD, local, development
description: Learn how to Configure CI/CD for your Java application
aliases:
- /language/java/configure-ci-cd/
- /language/java/configure-ci-cd/
- /guides/language/java/configure-ci-cd/
---
## Prerequisites
@@ -72,33 +73,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Set up Docker Buildx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build and test
- name: Build and test
uses: docker/build-push-action@v6
with:
target: test
load: true
-
name: Build and push
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@@ -133,11 +130,10 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.
{{< button text="Test your deployment" url="./deploy.md" >}}


@ -5,9 +5,10 @@ weight: 10
keywords: java, containerize, initialize, maven, build
description: Learn how to containerize a Java application.
aliases:
- /language/java/build-images/
- /language/java/run-containers/
- /language/java/containerize/
- /guides/language/java/containerize/
---
## Prerequisites
@ -15,6 +16,7 @@ aliases:
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md).
Docker adds new features regularly and some parts of this guide may
work only with the latest version of Docker Desktop.
* You have a [Git client](https://git-scm.com/downloads). The examples in this
section use a command-line based Git client, but you can use any client.
@ -76,12 +78,11 @@ exists, so `docker init` overwrites that file rather than creating a new
directory. Both names are supported, but Compose prefers the canonical
`compose.yaml`.
{{< /tab >}}
{{< tab name="Manually create assets" >}}
If you don't have Docker Desktop installed or prefer creating the assets
manually, you can create the following files in your project directory.
Create a file named `Dockerfile` with the following contents.
@ -198,7 +199,6 @@ services:
context: .
ports:
- 8080:8080
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
@ -232,7 +232,6 @@ services:
# db-password:
# file: db/password.txt
```
Create a file named `.dockerignore` with the following contents.
@ -326,11 +325,10 @@ In this section, you learned how you can containerize and run a Java
application using Docker.
Related information:
- [docker init reference](/reference/cli/docker/init/)
- [docker init reference](/reference/cli/docker/init/)
## Next steps
In the next section, you'll learn how you can develop your application using
Docker containers.
{{< button text="Develop your application" url="develop.md" >}}


@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, java
description: Learn how to develop locally using Kubernetes
aliases:
- /language/java/deploy/
- /guides/language/java/deploy/
---
## Prerequisites
@ -45,9 +46,9 @@ spec:
service: server
spec:
containers:
- name: server-service
  image: DOCKER_USERNAME/REPO_NAME
  imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -59,21 +60,21 @@ spec:
selector:
service: server
ports:
- port: 8080
  targetPort: 8080
  nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
  you'll get just one replica, or copy of your pod. That pod, which is
  described under `template`, has just one container in it. The
  container is created from the image built by GitHub Actions in [Configure CI/CD for
  your Java application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
  port 8080 inside the pods it routes to, allowing you to reach your app
  from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -132,6 +133,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
```
You should get output like the following.
```console
{"status":"UP","groups":["liveness","readiness"]}
```
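Since the health endpoint returns plain JSON, you can also verify it programmatically. A minimal Python sketch, parsing the example response shown above (in practice you would fetch it over HTTP first):

```python
import json

# Example body returned by the Spring Boot actuator health endpoint.
raw = '{"status":"UP","groups":["liveness","readiness"]}'

health = json.loads(raw)
if health["status"] == "UP":
    # prints: service healthy; groups: ['liveness', 'readiness']
    print("service healthy; groups:", health["groups"])
```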
@ -147,6 +149,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)


@ -5,7 +5,8 @@ weight: 20
keywords: Java, local, development, run,
description: Learn how to develop your application locally.
aliases:
- /language/java/develop/
- /guides/language/java/develop/
---
## Prerequisites
@ -16,11 +17,11 @@ Work through the steps to containerize your application in [Containerize your ap
In this section, youll walk through setting up a local development environment
for the application you containerized in the previous section. This includes:
- Adding a local database and persisting data
- Creating a development container to connect a debugger
- Configuring Compose to automatically update your running Compose services as
  you edit and save your code
## Add a local database and persist data
@ -29,6 +30,7 @@ You can use containers to set up local services, like a database. In this sectio
In the cloned repository's directory, open the `docker-compose.yaml` file in an IDE or text editor. Your Compose file has an example database service, but it'll require a few changes for your unique app.
In the `docker-compose.yaml` file, you need to do the following:
- Uncomment all of the database instructions. You'll now use a database service
instead of local storage for the data.
- Remove the top-level `secrets` element as well as the element inside the `db`
@ -71,7 +73,7 @@ services:
ports:
- 5432:5432
healthcheck:
test: [ "CMD", "pg_isready", "-U", "petclinic" ]
test: ["CMD", "pg_isready", "-U", "petclinic"]
interval: 10s
timeout: 5s
retries: 5
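The `healthcheck` options map to a simple retry loop: run the test command every `interval`, and give up after `retries` consecutive failures. A rough Python sketch of that logic, purely illustrative and not how Docker implements it:

```python
import time

def wait_healthy(check, interval=10.0, retries=5):
    """Run `check` up to `retries` times, sleeping `interval` seconds
    between attempts, mirroring Compose's healthcheck semantics."""
    for _ in range(retries):
        if check():  # stands in for running `pg_isready -U petclinic`
            return True
        time.sleep(interval)
    return False
```

Here `check` stands in for executing the `test` command inside the container; Compose marks the service healthy as soon as one attempt succeeds.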
@ -84,7 +86,6 @@ update the instruction to pass in the system property as specified in the
`spring-petclinic/src/resources/db/postgres/petclinic_db_setup_postgres.txt`
file.
```diff
- ENTRYPOINT [ "java", "org.springframework.boot.loader.launch.JarLauncher" ]
+ ENTRYPOINT [ "java", "-Dspring.profiles.active=postgres", "org.springframework.boot.loader.launch.JarLauncher" ]
@ -203,7 +204,7 @@ services:
ports:
- 5432:5432
healthcheck:
test: [ "CMD", "pg_isready", "-U", "petclinic" ]
test: ["CMD", "pg_isready", "-U", "petclinic"]
interval: 10s
timeout: 5s
retries: 5
@ -228,7 +229,61 @@ $ curl --request GET \
You should receive the following response:
```json
{"vetList":[{"id":1,"firstName":"James","lastName":"Carter","specialties":[],"nrOfSpecialties":0,"new":false},{"id":2,"firstName":"Helen","lastName":"Leary","specialties":[{"id":1,"name":"radiology","new":false}],"nrOfSpecialties":1,"new":false},{"id":3,"firstName":"Linda","lastName":"Douglas","specialties":[{"id":3,"name":"dentistry","new":false},{"id":2,"name":"surgery","new":false}],"nrOfSpecialties":2,"new":false},{"id":4,"firstName":"Rafael","lastName":"Ortega","specialties":[{"id":2,"name":"surgery","new":false}],"nrOfSpecialties":1,"new":false},{"id":5,"firstName":"Henry","lastName":"Stevens","specialties":[{"id":1,"name":"radiology","new":false}],"nrOfSpecialties":1,"new":false},{"id":6,"firstName":"Sharon","lastName":"Jenkins","specialties":[],"nrOfSpecialties":0,"new":false}]}
{
"vetList": [
{
"id": 1,
"firstName": "James",
"lastName": "Carter",
"specialties": [],
"nrOfSpecialties": 0,
"new": false
},
{
"id": 2,
"firstName": "Helen",
"lastName": "Leary",
"specialties": [{ "id": 1, "name": "radiology", "new": false }],
"nrOfSpecialties": 1,
"new": false
},
{
"id": 3,
"firstName": "Linda",
"lastName": "Douglas",
"specialties": [
{ "id": 3, "name": "dentistry", "new": false },
{ "id": 2, "name": "surgery", "new": false }
],
"nrOfSpecialties": 2,
"new": false
},
{
"id": 4,
"firstName": "Rafael",
"lastName": "Ortega",
"specialties": [{ "id": 2, "name": "surgery", "new": false }],
"nrOfSpecialties": 1,
"new": false
},
{
"id": 5,
"firstName": "Henry",
"lastName": "Stevens",
"specialties": [{ "id": 1, "name": "radiology", "new": false }],
"nrOfSpecialties": 1,
"new": false
},
{
"id": 6,
"firstName": "Sharon",
"lastName": "Jenkins",
"specialties": [],
"nrOfSpecialties": 0,
"new": false
}
]
}
```
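A quick way to sanity-check that response in Python, assuming the JSON body is captured in a string (here abbreviated to two vets) or fetched with `curl`:

```python
import json

# `raw` stands in for the body returned by the /vets endpoint (abbreviated).
raw = '''{"vetList": [
  {"id": 1, "firstName": "James", "lastName": "Carter",
   "specialties": [], "nrOfSpecialties": 0, "new": false},
  {"id": 2, "firstName": "Helen", "lastName": "Leary",
   "specialties": [{"id": 1, "name": "radiology", "new": false}],
   "nrOfSpecialties": 1, "new": false}
]}'''

vets = json.loads(raw)["vetList"]
for vet in vets:
    names = [s["name"] for s in vet["specialties"]]
    print(vet["firstName"], vet["lastName"], names)
```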
## Connect a Debugger
@ -301,7 +356,7 @@ services:
ports:
- 5432:5432
healthcheck:
test: [ "CMD", "pg_isready", "-U", "petclinic" ]
test: ["CMD", "pg_isready", "-U", "petclinic"]
interval: 10s
timeout: 5s
retries: 5
@ -339,12 +394,10 @@ In this section, you took a look at running a database locally and persisting th
Related information:
- [Compose file reference](/reference/compose-file/)
- [Compose Watch](/manuals/compose/how-tos/file-watch.md)
- [Dockerfile reference](/reference/dockerfile/)
## Next steps
In the next section, youll take a look at how to run unit tests in Docker.
{{< button text="Run your tests" url="run-tests.md" >}}


@ -5,7 +5,8 @@ weight: 30
keywords: Java, build, test
description: How to build and run your Java tests
aliases:
- /language/java/run-tests/
- /guides/language/java/run-tests/
---
## Prerequisites
@ -103,6 +104,7 @@ $ docker build -t java-docker-image-test --progress=plain --no-cache --target=te
```
You should see output containing the following:
```console
...
@ -121,5 +123,3 @@ You should see output containing the following
In the next section, youll take a look at how to set up a CI/CD pipeline using
GitHub Actions.
{{< button text="Configure CI/CD" url="configure-ci-cd.md" >}}


@ -3,6 +3,18 @@ description: Run, develop, and share data science projects using JupyterLab and
keywords: getting started, jupyter, notebook, python, jupyterlab, data science
title: Data science with JupyterLab
toc_max: 2
summary: |
This guide explains how to use Docker to run Jupyter notebooks, covering
image setup, container management, and best practices for creating
reproducible and isolated development environments for data science and
machine learning tasks.
languages: [python]
levels: [beginner]
subjects: [data-science]
aliases:
- /guides/use-case/jupyter/
params:
time: 20 minutes
---
Docker and JupyterLab are two powerful tools that can enhance your data science
@ -43,6 +55,7 @@ In a terminal, run the following command to run your JupyterLab container.
```console
$ docker run --rm -p 8889:8888 quay.io/jupyter/base-notebook start-notebook.py --NotebookApp.token='my-token'
```
The following are the notable parts of the command:
- `-p 8889:8888`: Maps port 8889 from the host to port 8888 on the container.
@ -148,6 +161,7 @@ For this example, you'll use the [Iris Dataset](https://scikit-learn.org/stable/
4. Select the play button to run the code.
5. In the notebook, specify the following code.
```python
from sklearn import datasets
@ -161,6 +175,7 @@ For this example, you'll use the [Iris Dataset](https://scikit-learn.org/stable/
scatter.legend_elements()[0], iris.target_names, loc="lower right", title="Classes"
)
```
6. Select the play button to run the code. You should see a scatter plot of the
Iris dataset.
@ -232,7 +247,7 @@ located, and then run the following command.
$ docker build -t my-jupyter-image .
```
The command builds a Docker image from your `Dockerfile` and a context. The
`-t` option specifies the name and tag of the image, in this case
`my-jupyter-image`. The `.` indicates that the current directory is the context,
which means that the files in that directory can be used in the image creation
@ -374,6 +389,7 @@ $ docker run --rm -p 8889:8888 YOUR-USER-NAME/my-jupyter-image start-notebook.py
This example uses the Docker Desktop [Volumes Backup & Share](https://hub.docker.com/extensions/docker/volumes-backup-extension) extension. Alternatively, in the CLI you can [back up the volume](/engine/storage/volumes/#back-up-a-volume) and then [push it using the ORAS CLI](/manuals/docker-hub/oci-artifacts.md#push-a-volume).
1. Install the Volumes Backup & Share extension.
1. Open the Docker Dashboard and select **Extensions**.
2. Search for `Volumes Backup & Share`.
3. In the search results select **Install** for the extension.


@ -3,12 +3,22 @@ description: Developing event-driven applications with Kafka and Docker
keywords: kafka, container-supported development
title: Developing event-driven applications with Kafka and Docker
linktitle: Event-driven apps with Kafka
toc_max: 2
summary: |
This guide explains how to run Apache Kafka in Docker containers, covering
setup, configuring Kafka clusters, managing services, and optimizing
deployment for real-time data streaming in a containerized environment.
subjects: [distributed-systems]
languages: [js]
levels: [intermediate]
aliases:
- /guides/use-case/kafka/
params:
time: 20 minutes
---
With the rise of microservices, event-driven architectures have become increasingly popular.
[Apache Kafka](https://kafka.apache.org/), a distributed event streaming platform, is often at the
heart of these architectures. Unfortunately, setting up and deploying your own Kafka instance for development
is often tricky. Fortunately, Docker and containers make this much easier.
In this guide, you will learn how to:
@ -26,7 +36,6 @@ The following prerequisites are required to follow along with this how-to guide:
- [Node.js](https://nodejs.org/en/download/package-manager) and [yarn](https://yarnpkg.com/)
- Basic knowledge of Kafka and Docker
## Launching Kafka
Beginning with [Kafka 3.3](https://www.confluent.io/blog/apache-kafka-3-3-0-new-features-and-updates/), the deployment of Kafka was greatly simplified by no longer requiring Zookeeper thanks to KRaft (Kafka Raft). With KRaft, setting up a Kafka instance for local development is much easier. Starting with the launch of [Kafka 3.8](https://www.confluent.io/blog/introducing-apache-kafka-3-8/), a new [kafka-native](https://hub.docker.com/r/apache/kafka-native) Docker image is now available, providing a significantly faster startup and lower memory footprint.
@ -41,60 +50,60 @@ Start a basic Kafka cluster by doing the following steps. This example will laun
1. Start a Kafka container by running the following command:
   ```console
   $ docker run -d --name=kafka -p 9092:9092 apache/kafka
   ```
2. Once the image pulls, youll have a Kafka instance up and running within a second or two.
3. The apache/kafka image ships with several helpful scripts in the `/opt/kafka/bin` directory. Run the following command to verify the cluster is up and running and get its cluster ID:
   ```console
   $ docker exec -ti kafka /opt/kafka/bin/kafka-cluster.sh cluster-id --bootstrap-server :9092
   ```

   Doing so will produce output similar to the following:

   ```plaintext
   Cluster ID: 5L6g3nShT-eMCtK--X86sw
   ```
4. Create a sample topic and produce (or publish) a few messages by running the following command:
   ```console
   $ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
   ```

   After running, you can enter a message per line. For example, enter a few messages, one per line. A few examples might be:

   ```plaintext
   First message
   ```

   And

   ```plaintext
   Second message
   ```

   Press `enter` to send the last message and then press ctrl+c when you’re done. The messages will be published to Kafka.
5. Confirm the messages were published into the cluster by consuming the messages:
   ```console
   $ docker exec -ti kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server :9092 --topic demo --from-beginning
   ```

   You should then see your messages in the output:

   ```plaintext
   First message
   Second message
   ```

   If you want, you can open another terminal and publish more messages and see them appear in the consumer.

   When you’re done, hit ctrl+c to stop consuming messages.
You have a locally running Kafka cluster and have validated you can connect to it.
@ -106,47 +115,47 @@ Since the cluster is running locally and is exposed at port 9092, the app can co
1. If you dont have the Kafka cluster running from the previous step, run the following command to start a Kafka instance:
   ```console
   $ docker run -d --name=kafka -p 9092:9092 apache/kafka
   ```
2. Clone the [GitHub repository](https://github.com/dockersamples/kafka-development-node) locally.
   ```console
   $ git clone https://github.com/dockersamples/kafka-development-node.git
   ```
3. Navigate into the project.
   ```console
   cd kafka-development-node/app
   ```
4. Install the dependencies using yarn.
   ```console
   $ yarn install
   ```
5. Start the application using `yarn dev`. This will set the `NODE_ENV` environment variable to `development` and use `nodemon` to watch for file changes.
   ```console
   $ yarn dev
   ```
6. With the application now running, it will log received messages to the console. In a new terminal, publish a few messages using the following command:
   ```console
   $ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
   ```

   And then send a message to the cluster:

   ```plaintext
   Test message
   ```

   Remember to press `ctrl+c` when you’re done to stop producing messages.
## Connecting to Kafka from both containers and native apps
@ -171,7 +180,7 @@ Since there are two different methods clients need to connect, two different lis
![Diagram showing the DOCKER and HOST listeners and how they are exposed to the host and Docker networks](./images/kafka-1.webp)
In order to set this up, the `compose.yaml` for Kafka needs some additional configuration. Once you start overriding some of the defaults, you also need to specify a few other options in order for KRaft mode to work.
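The two listeners solve an addressing problem: the broker must advertise an address each kind of client can actually resolve. A toy Python sketch of that lookup, using illustrative values from this guide:

```python
# Advertised listener per client location (illustrative values).
advertised = {
    "docker-network": "kafka:9093",  # other containers resolve the service name
    "host": "localhost:9092",        # native apps use the published port
}

def broker_address(client_location: str) -> str:
    """Return the bootstrap address a client at this location should use."""
    return advertised[client_location]

print(broker_address("host"))            # localhost:9092
print(broker_address("docker-network"))  # kafka:9093
```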
```yaml
services:
@ -204,21 +213,21 @@ Give it a try using the steps below.
2. If you have the Kafka cluster running from the previous section, go ahead and stop that container using the following command:
   ```console
   $ docker rm -f kafka
   ```
3. Start the Compose stack by running the following command at the root of the cloned project directory:
   ```console
   $ docker compose up
   ```

   After a moment, the application will be up and running.
4. The stack also includes another service you can use to publish messages. Open it by going to [http://localhost:3000](http://localhost:3000). As you type in a message and submit the form, you should see a log message showing that the app received it.
This helps demonstrate how a containerized approach makes it easy to add additional services to help test and troubleshoot your application.
## Adding cluster visualization
@ -233,7 +242,7 @@ services:
ports:
- 8080:8080
environment:
DYNAMIC_CONFIG_ENABLED: "true"
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9093
depends_on:
@ -250,4 +259,4 @@ If youre interested in learning how you can integrate Kafka easily into your
By using Docker, you can simplify the process of developing and testing event-driven applications with Kafka. Containers simplify the process of setting up and deploying the various services you need to develop. And once theyre defined in Compose, everyone on the team can benefit from the ease of use.
In case you missed it earlier, all of the sample app code can be found at dockersamples/kafka-development-node.


@ -3,7 +3,16 @@ title: Deploy to Kubernetes
keywords: kubernetes, pods, deployments, kubernetes services
description: Learn how to describe and deploy a simple application on Kubernetes.
aliases:
- /get-started/kube-deploy/
- /guides/deployment-orchestration/kube-deploy/
summary: |
Learn how to deploy and orchestrate Docker containers using Kubernetes, with
step-by-step guidance on setup, configuration, and best practices to enhance
your application's scalability and reliability.
subjects: [deploy]
levels: [beginner]
params:
time: 10 minutes
---
## Prerequisites
@ -11,7 +20,7 @@ aliases:
- Download and install Docker Desktop as described in [Get Docker](/get-started/get-docker.md).
- Work through containerizing an application in [Part 2](02_our_app.md).
- Make sure that Kubernetes is turned on in Docker Desktop:
If Kubernetes isn't running, follow the instructions in [Orchestration](orchestration.md) to finish setting it up.
## Introduction
@ -29,43 +38,45 @@ You already wrote a basic Kubernetes YAML file in the Orchestration overview par
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
        - name: bb-site
          image: getting-started
          imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: bb-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A `Deployment`, describing a scalable group of identical pods. In this case, you'll get just one `replica`, or copy of your pod, and that pod (which is described under the `template:` key) has just one container in it, based off of your `getting-started` image from the previous step in this tutorial.
- A `NodePort` service, which will route traffic from port 30001 on your host to port 3000 inside the pods it routes to, allowing you to reach your Todo app from the network.
Also, notice that while Kubernetes YAML can appear long and complicated at first, it almost always follows the same pattern:
- The `apiVersion`, which indicates the Kubernetes API that parses this object
- The `kind` indicating what sort of object this is
- Some `metadata` applying things like names to your objects
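That shared shape is easy to see if you split a manifest on the `---` separator and pull out each object's `kind`. A small Python sketch, using no Kubernetes libraries (the manifest is abbreviated from the example above):

```python
manifest = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-demo
---
apiVersion: v1
kind: Service
metadata:
  name: bb-entrypoint
"""

# Each document between `---` separators is one Kubernetes object.
docs = [d for d in manifest.split("---") if d.strip()]
kinds = []
for doc in docs:
    for line in doc.splitlines():
        if line.startswith("kind:"):
            kinds.append(line.split(":", 1)[1].strip())

print(kinds)  # ['Deployment', 'Service']
```

Real tooling would use a YAML parser rather than string splitting, but the structure it recovers is the same `apiVersion`/`kind`/`metadata`/spec pattern described above.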
@ -75,49 +86,49 @@ In this Kubernetes YAML file, there are two objects, separated by the `---`:
1. In a terminal, navigate to where you created `bb.yaml` and deploy your application to Kubernetes:
   ```console
   $ kubectl apply -f bb.yaml
   ```

   You should see output that looks like the following, indicating your Kubernetes objects were created successfully:

   ```shell
   deployment.apps/bb-demo created
   service/bb-entrypoint created
   ```
2. Make sure everything worked by listing your deployments:
   ```console
   $ kubectl get deployments
   ```

   If all is well, your deployment should be listed as follows:

   ```shell
   NAME      READY   UP-TO-DATE   AVAILABLE   AGE
   bb-demo   1/1     1            1           40s
   ```

   This indicates that the one pod you asked for in your YAML is up and running. Do the same check for your services:

   ```console
   $ kubectl get services

   NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
   bb-entrypoint   NodePort    10.106.145.116   <none>        3000:30001/TCP   53s
   kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP          138d
   ```

   In addition to the default `kubernetes` service, we see our `bb-entrypoint` service, accepting traffic on port 30001/TCP.
3. Open a browser and visit your Todo app at `localhost:30001`. You should see your Todo application, the same as when you ran it as a stand-alone container in [Part 2](02_our_app.md) of the tutorial.
4. Once satisfied, tear down your application:
   ```console
   $ kubectl delete -f bb.yaml
   ```
## Conclusion
@ -129,6 +140,6 @@ In addition to deploying to Kubernetes, you have also described your application
Further documentation for all new Kubernetes objects used in this article are available here:
- [Kubernetes Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [Kubernetes Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
- [Kubernetes Services](https://kubernetes.io/docs/concepts/services-networking/service/)


@ -3,6 +3,17 @@ title: Build a language translation app
linkTitle: Language translation
keywords: nlp, natural language processing, text summarization, python, language translation, googletrans
description: Learn how to build and run a language translation application using Python, Googletrans, and Docker.
summary: |
This guide demonstrates how to use Docker to deploy language translation
models for NLP tasks, covering setup, container management, and running
translation services efficiently in a containerized environment.
levels: [beginner]
subjects: [ai]
languages: [python]
aliases:
- /guides/use-case/nlp/language-translation/
params:
time: 20 minutes
---
## Overview
@ -19,8 +30,8 @@ methods as detect and translate.
## Prerequisites
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Get the sample application
@ -34,6 +45,7 @@ methods as detect and translate.
2. Verify that you cloned the repository.
You should see the following files in your `Docker-NLP` directory.
```text
01_sentiment_analysis.py
02_name_entity_recognition.py
@ -57,15 +69,17 @@ in a text or code editor to explore its contents in the following steps.
```python
from googletrans import Translator
```
This line imports the `Translator` class from `googletrans`.
Googletrans is a Python library that provides an interface to Google
Translate's AJAX API.
2. Specify the main execution block.
```python
if __name__ == "__main__":
```
This Python idiom ensures that the following code block runs only if this
script is the main program. It provides flexibility, allowing the script to
function both as a standalone program and as an imported module.
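The idiom relies on the module-level `__name__` variable, which Python sets to `"__main__"` only when the file is executed directly. A tiny standalone illustration (the function name is hypothetical, not from the sample app):

```python
def greet():
    return "Hello from the translator script"

if __name__ == "__main__":
    # Runs only when this file is executed directly,
    # not when it's imported as a module.
    print(greet())
```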
@ -103,7 +117,7 @@ in a text or code editor to explore its contents in the following steps.
Here, the `translator.translate` method is called with the user input. The
`dest='fr'` argument specifies that the destination language for translation
is French. The `.text` attribute gets the translated string. For more details
about the available language codes, see the
[Googletrans docs](https://py-googletrans.readthedocs.io/en/latest/).
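The call pattern looks like the following sketch. The `FakeTranslator` class is an illustrative stub, not part of googletrans; it only mirrors the shape of the API (a `translate` method returning an object with a `.text` attribute) so the pattern can be shown without a network call:

```python
from types import SimpleNamespace

class FakeTranslator:
    """Illustrative stub mirroring googletrans.Translator's interface."""
    def translate(self, text, dest="en"):
        # A real Translator calls the Google Translate API here.
        canned = {("Hello", "fr"): "Bonjour"}
        return SimpleNamespace(text=canned.get((text, dest), text), dest=dest)

translator = FakeTranslator()
result = translator.translate("Hello", dest="fr")
print(result.text)  # the .text attribute holds the translated string
```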
6. Print the original and translated text.
@ -228,10 +242,10 @@ The following steps explain each part of the `Dockerfile`. For more details, see
ENTRYPOINT ["/app/entrypoint.sh"]
```
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
You can explore the `entrypoint.sh` script by opening it in a code or text
editor. As the sample contains several applications, the script lets you
specify which application to run when the container starts.
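The dispatch the shell script performs can be sketched in Python. This is only an illustration of the idea (pick a script from the first argument, reject unknown names); `select_script` and the hard-coded script set are assumptions, not code from the repository:

```python
import sys

KNOWN_SCRIPTS = {
    "01_sentiment_analysis.py",
    "02_name_entity_recognition.py",
}

def select_script(argv):
    """Pick the application to run from the first CLI argument."""
    if len(argv) < 2 or argv[1] not in KNOWN_SCRIPTS:
        raise SystemExit(f"usage: entrypoint <script>, one of {sorted(KNOWN_SCRIPTS)}")
    return argv[1]

if __name__ == "__main__":
    print(f"Starting {select_script(sys.argv)}")
```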
@ -284,12 +298,12 @@ To run the application using Docker:
- `docker run`: This is the primary command used to run a new container from
a Docker image.
- `-it`: This is a combination of two options:
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `basic-nlp`: This specifies the name of the Docker image to use for
creating the container. In this case, it's the image named `basic-nlp` that
you created with the `docker build` command.
@ -330,10 +344,10 @@ Docker.
Related information:
- [Docker CLI reference](/reference/cli/docker/)
- [Dockerfile reference](/reference/dockerfile/)
- [Googletrans](https://github.com/ssut/py-googletrans)
- [Python documentation](https://docs.python.org/3/)
## Next steps

View File

@ -1,60 +0,0 @@
---
description: Language-specific guides overview
linkTitle: Language-specific guides
weight: 10
keywords: guides, docker, language, node, java, python, R, go, golang, .net, c++
title: Language-specific guides overview
toc_min: 1
toc_max: 2
aliases:
- /guides/walkthroughs/containerize-your-app/
- /language/
---
The language-specific guides walk you through the process of:
* Containerizing language-specific applications
* Setting up a development environment
* Configuring a CI/CD pipeline
* Deploying an application locally using Kubernetes
In addition to the language-specific modules, Docker documentation also provides guidelines to build images and efficiently manage your development environment. For more information, refer to the following topics:
* [Building best practices](/manuals/build/building/best-practices.md)
* [Build images with BuildKit](/manuals/build/buildkit/_index.md#getting-started)
## Language-specific guides
Learn how to containerize your applications and start developing using Docker. Choose one of the following languages to get started.
<div class="grid grid-cols-2 md:grid-cols-3 h-auto gap-4">
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/nodejs/"><img class="m-auto rounded" src="/guides/language/images/nodejs.webp" alt="Develop with Node"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/python/"><img class="m-auto rounded" src="/guides/language/images/python.webp" alt="Develop with Python"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/r/"><img class="m-auto rounded" src="/guides/language/images/r.webp" alt="Develop with R"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/java/"><img class="m-auto rounded" src="/guides/language/images/java.webp" alt="Develop with Java"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/golang/"><img class="m-auto rounded" src="/guides/language/images/golang.webp" alt="Develop with Go"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/dotnet/"><img class="m-auto rounded" src="/guides/language/images/c-sharp.webp" alt="Develop with C#"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/cpp/"><img class="m-auto rounded" src="/guides/language/images/cpp.webp" alt="Develop with C++"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/rust/"><img class="m-auto rounded" src="/guides/language/images/rust-logo.webp" alt="Develop with Rust"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/php/"><img class="m-auto rounded" src="/guides/language/images/php-logo.webp" alt="Develop with PHP"></a>
</div>
<div class="flex items-center flex-1 shadow p-4">
<a href="/guides/language/ruby/"><img class="m-auto rounded" src="/guides/language/images/ruby-on-rails.webp" alt="Develop with Ruby"></a>
</div>
</div>

View File

@ -1,24 +0,0 @@
---
title: .NET language-specific guide
linkTitle: C# (.NET)
description: Containerize and develop .NET apps using Docker
keywords: getting started, .net
toc_min: 1
toc_max: 2
aliases:
- /language/dotnet/
---
The .NET getting started guide teaches you how to create a containerized .NET application using Docker. In this guide, you'll learn how to:
* Containerize and run a .NET application
* Set up a local environment to develop a .NET application using containers
* Run tests for a .NET application using containers
* Configure a CI/CD pipeline for a containerized .NET application using GitHub Actions
* Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the .NET getting started modules, you should be able to containerize your own .NET application based on the examples and instructions provided in this guide.
Start by containerizing an existing .NET application.
{{< button text="Containerize a .NET app" url="containerize.md" >}}

View File

@ -1,22 +0,0 @@
---
title: Node.js language-specific guide
linkTitle: Node.js
description: Containerize and develop Node.js apps using Docker
keywords: getting started, node, node.js
toc_min: 1
toc_max: 2
aliases:
- /language/nodejs/
---
The Node.js language-specific guide teaches you how to containerize a Node.js application using Docker. In this guide, you'll learn how to:
* Containerize and run a Node.js application
* Set up a local environment to develop a Node.js application using containers
* Run tests for a Node.js application using containers
* Configure a CI/CD pipeline for a containerized Node.js application using GitHub Actions
* Deploy your containerized Node.js application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing Node.js application.
{{< button text="Containerize a Node.js app" url="containerize.md" >}}

View File

@ -1,21 +0,0 @@
---
title: Python language-specific guide
linkTitle: Python
description: Containerize Python apps using Docker
keywords: Docker, getting started, Python, language
toc_min: 1
toc_max: 2
aliases:
- /language/python/
---
The Python language-specific guide teaches you how to containerize a Python application using Docker. In this guide, you'll learn how to:
* Containerize and run a Python application
* Set up a local environment to develop a Python application using containers
* Configure a CI/CD pipeline for a containerized Python application using GitHub Actions
* Deploy your containerized Python application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing Python application.
{{< button text="Containerize a Python app" url="containerize.md" >}}

View File

@ -1,20 +0,0 @@
---
title: R language-specific guide
linkTitle: R
description: Containerize R apps using Docker
keywords: Docker, getting started, R, language
toc_min: 1
toc_max: 2
aliases:
- /language/r/
---
The R language-specific guide teaches you how to containerize an R application using Docker. In this guide, you'll learn how to:
* Containerize and run an R application
* Set up a local environment to develop an R application using containers
* Configure a CI/CD pipeline for a containerized R application using GitHub Actions
* Deploy your containerized R application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing R application.
{{< button text="Containerize an R app" url="containerize.md" >}}

View File

@ -1,21 +0,0 @@
---
title: Ruby on Rails language-specific guide
linkTitle: Ruby
description: Containerize Ruby on Rails apps using Docker
keywords: Docker, getting started, ruby, language
toc_min: 1
toc_max: 2
aliases:
- /language/ruby/
---
The Ruby language-specific guide teaches you how to containerize a Ruby on Rails application using Docker. In this guide, you'll learn how to:
* Containerize and run a Ruby on Rails application
* Set up a local environment to develop a Ruby on Rails application using containers
* Configure a CI/CD pipeline for a containerized Ruby on Rails application using GitHub Actions
* Deploy your containerized Ruby on Rails application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing Ruby on Rails application.
{{< button text="Containerize a Ruby on Rails app" url="containerize.md" >}}

View File

@ -1,26 +0,0 @@
---
title: Rust language-specific guide
linkTitle: Rust
description: Containerize Rust apps using Docker
keywords: Docker, getting started, Rust, language
toc_min: 1
toc_max: 2
aliases:
- /language/rust/
---
The Rust language-specific guide teaches you how to create a containerized Rust application using Docker. In this guide, you'll learn how to:
* Containerize a Rust application
* Build an image and run the newly built image as a container
* Set up volumes and networking
* Orchestrate containers using Compose
* Use containers for development
* Configure a CI/CD pipeline for your application using GitHub Actions
* Deploy your containerized Rust application locally to Kubernetes to test and debug your deployment
After completing the Rust modules, you should be able to containerize your own Rust application based on the examples and instructions provided in this guide.
Start with building your first Rust image.
{{< button text="Build your first Rust image" url="build-images.md" >}}

View File

@ -3,6 +3,15 @@ description: How to develop and test AWS Cloud applications using LocalStack and
keywords: LocalStack, container-supported development
title: Develop and test AWS Cloud applications using LocalStack and Docker
linktitle: AWS development with LocalStack
summary: |
This guide explains how to use Docker to run LocalStack, a local AWS cloud
stack emulator, covering setup, managing cloud service emulation, and testing
cloud-based applications locally in a containerized environment.
subjects: [cloud-services]
languages: [js]
levels: [intermediate]
params:
time: 20 minutes
---
In modern application development, testing cloud applications locally before deploying them to a live environment helps you ship faster and with more confidence. This approach involves simulating services locally, identifying and fixing issues early, and iterating quickly without incurring costs or facing the complexities of a full cloud environment. Tools like [LocalStack](https://www.localstack.cloud/) have become invaluable in this process, enabling you to emulate AWS services and containerize applications for consistent, isolated testing environments.

View File

@ -3,6 +3,18 @@ title: Build a named entity recognition app
linkTitle: Named entity recognition
keywords: nlp, natural language processing, named entity recognition, python, spacy, ner
description: Learn how to build and run a named entity recognition application using Python, spaCy, and Docker.
summary: |
This guide explains how to containerize named entity recognition (NER) models
using Docker, detailing environment setup for large-scale data processing,
optimizing NER model deployment, and managing high-performance containerized
NLP workflows.
subjects: [ai]
languages: [python]
levels: [beginner]
aliases:
- /guides/use-case/nlp/named-entity-recognition/
params:
time: 20 minutes
---
## Overview
@ -15,8 +27,8 @@ The application processes input text to identify and print named entities, like
## Prerequisites
- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [Git client](https://git-scm.com/downloads). The examples in this section use a command-line based Git client, but you can use any client.
## Get the sample application
@ -52,7 +64,7 @@ The source code for the named entity recognition application is in the `Docker-NLP/02_na
```python
import spacy
```
This line imports the `spaCy` library. `spaCy` is a popular library in Python
used for natural language processing (NLP).
@ -61,7 +73,7 @@ The source code for the named entity recognition application is in the `Docker-NLP/02_na
```python
nlp = spacy.load("en_core_web_sm")
```
Here, the `spacy.load` function loads a language model. The `en_core_web_sm`
model is a small English language model. You can use this model for various
NLP tasks, including tokenization, part-of-speech tagging, and named entity
@ -120,7 +132,6 @@ The source code for the name recognition application is in the `Docker-NLP/02_na
- `for ent in doc.ents:`: This loop iterates over the entities found in the text.
- `print(f"Entity: {ent.text}, Type: {ent.label_}")`: For each entity, it prints the entity text and its type (like PERSON, ORG, or GPE).
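The loop only relies on each entity exposing `.text` and `.label_`, so the output format can be sketched with stub objects, without downloading the spaCy model. The `Ent` and `Doc` namedtuples below are minimal stand-ins for spaCy's `Span` and `Doc` types, not part of spaCy itself:

```python
from collections import namedtuple

# Minimal stand-ins for spaCy's Doc and Span objects.
Ent = namedtuple("Ent", ["text", "label_"])
Doc = namedtuple("Doc", ["ents"])

doc = Doc(ents=[Ent("Apple Inc.", "ORG"), Ent("Tim Cook", "PERSON")])

lines = [f"Entity: {ent.text}, Type: {ent.label_}" for ent in doc.ents]
print("\n".join(lines))
```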
8. Create `requirements.txt`.
The sample application already contains the `requirements.txt` file to specify the necessary packages that the application imports. Open `requirements.txt` in a code or text editor to explore its contents.
@ -235,10 +246,10 @@ The following steps explain each part of the `Dockerfile`. For more details, see
ENTRYPOINT ["/app/entrypoint.sh"]
```
The `ENTRYPOINT` instruction configures the container to run `entrypoint.sh`
as its default executable. This means that when the container starts, it
automatically executes the script.
You can explore the `entrypoint.sh` script by opening it in a code or text
editor. As the sample contains several applications, the script lets you
specify which application to run when the container starts.
@ -291,21 +302,20 @@ To run the application using Docker:
- `docker run`: This is the primary command used to run a new container from
a Docker image.
- `-it`: This is a combination of two options:
- `-i` or `--interactive`: This keeps the standard input (STDIN) open even
if not attached. It lets the container remain running in the
foreground and be interactive.
- `-t` or `--tty`: This allocates a pseudo-TTY, essentially simulating a
terminal, like a command prompt or a shell. It's what lets you
interact with the application inside the container.
- `basic-nlp`: This specifies the name of the Docker image to use for
creating the container. In this case, it's the image named `basic-nlp` that
you created with the `docker build` command.
- `02_name_entity_recognition.py`: This is the script you want to run inside
the Docker container. It gets passed to the `entrypoint.sh` script, which
runs it when the container starts.
For more details, see the [docker run CLI reference](/reference/cli/docker/container/run/).
> [!NOTE]
>
@ -325,7 +335,7 @@ To run the application using Docker:
```console
Enter the text for entity recognition (type 'exit' to end): Apple Inc. is planning to open a new store in San Francisco. Tim Cook is the CEO of Apple.
Entity: Apple Inc., Type: ORG
Entity: San Francisco, Type: GPE
Entity: Tim Cook, Type: PERSON
@ -340,10 +350,10 @@ and then set up the environment and run the application using Docker.
Related information:
- [Docker CLI reference](/reference/cli/docker/)
- [Dockerfile reference](/reference/dockerfile/)
- [spaCy](https://spacy.io/)
- [Python documentation](https://docs.python.org/3/)
## Next steps

View File

@ -0,0 +1,30 @@
---
title: Node.js language-specific guide
linkTitle: Node.js
description: Containerize and develop Node.js apps using Docker
keywords: getting started, node, node.js
summary: |
This guide explains how to containerize Node.js applications using Docker,
covering image building, dependency management, optimizing image size with
multi-stage builds, and best practices for deploying Node.js apps efficiently
in containers.
toc_min: 1
toc_max: 2
aliases:
- /language/nodejs/
- /guides/language/nodejs/
languages: [js]
levels: [beginner]
params:
time: 20 minutes
---
The Node.js language-specific guide teaches you how to containerize a Node.js application using Docker. In this guide, you'll learn how to:
- Containerize and run a Node.js application
- Set up a local environment to develop a Node.js application using containers
- Run tests for a Node.js application using containers
- Configure a CI/CD pipeline for a containerized Node.js application using GitHub Actions
- Deploy your containerized Node.js application locally to Kubernetes to test and debug your deployment
Start by containerizing an existing Node.js application.

View File

@ -5,7 +5,8 @@ weight: 40
keywords: ci/cd, github actions, node.js, node
description: Learn how to configure CI/CD using GitHub Actions for your Node.js application.
aliases:
- /language/nodejs/configure-ci-cd/
- /guides/language/nodejs/configure-ci-cd/
---
## Prerequisites
@ -69,33 +70,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and test
uses: docker/build-push-action@v6
with:
target: test
load: true
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64/v8
@ -103,7 +100,7 @@ to Docker Hub.
target: prod
tags: ${{ vars.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
```
For more information about the YAML syntax for `docker/build-push-action`,
refer to the [GitHub Action README](https://github.com/docker/build-push-action/blob/master/README.md).
@ -130,11 +127,10 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your Node.js application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.
{{< button text="Test your deployment" url="./deploy.md" >}}

View File

@ -9,13 +9,14 @@ aliases:
- /language/nodejs/build-images/
- /language/nodejs/run-containers/
- /language/nodejs/containerize/
- /guides/language/nodejs/containerize/
---
## Prerequisites
- You have installed the latest version of [Docker
Desktop](/get-started/get-docker.md).
- You have a [git client](https://git-scm.com/downloads). The examples in this
section use a command-line based git client, but you can use any client.
## Overview
@ -135,7 +136,6 @@ services:
NODE_ENV: production
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
@ -212,7 +212,6 @@ README.md
{{< /tab >}}
{{< /tabs >}}
You should now have at least the following contents in your
`docker-nodejs-sample` directory.
@ -230,10 +229,10 @@ You should now have at least the following contents in your
```
To learn more about the files, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -277,13 +276,12 @@ In this section, you learned how you can containerize and run your Node.js
application using Docker.
Related information:
- [Dockerfile reference](/reference/dockerfile.md)
- [.dockerignore file reference](/reference/dockerfile.md#dockerignore-file)
- [Docker Compose overview](/manuals/compose/_index.md)
## Next steps
In the next section, you'll learn how you can develop your application using
containers.
{{< button text="Develop your application" url="develop.md" >}}

View File

@ -5,7 +5,8 @@ weight: 50
keywords: deploy, kubernetes, node, node.js
description: Learn how to deploy locally to test and debug your Kubernetes deployment
aliases:
- /language/nodejs/deploy/
- /guides/language/nodejs/deploy/
---
## Prerequisites
@ -45,9 +46,9 @@ spec:
todo: web
spec:
containers:
- name: todo-site
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -59,21 +60,21 @@ spec:
selector:
todo: web
ports:
- port: 3000
targetPort: 3000
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Node.js application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3000 inside the pods it routes to, allowing you to reach your app
from the network.
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
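Because the manifest holds two objects separated by `---`, a quick sanity check can confirm the structure before applying it. The sketch below is a simplified stdlib-only check (matching `kind:` lines directly rather than using a YAML parser), shown here with a trimmed-down manifest:

```python
manifest = """\
apiVersion: apps/v1
kind: Deployment
---
apiVersion: v1
kind: Service
"""

# Split on the document separator and read the `kind:` of each object.
kinds = []
for document in manifest.split("---"):
    for line in document.splitlines():
        if line.startswith("kind:"):
            kinds.append(line.split(":", 1)[1].strip())

print(kinds)  # expect one Deployment and one Service
```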
@ -136,6 +137,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)

View File

@ -5,8 +5,9 @@ weight: 20
keywords: node, node.js, development
description: Learn how to develop your Node.js application locally using containers.
aliases:
- /get-started/nodejs/develop/
- /language/nodejs/develop/
- /guides/language/nodejs/develop/
---
## Prerequisites
@ -16,9 +17,10 @@ Complete [Containerize a Node.js application](containerize.md).
## Overview
In this section, you'll learn how to set up a development environment for your containerized application. This includes:
- Adding a local database and persisting data
- Configuring your container to run a development environment
- Debugging your containerized application
## Add a local database and persist data
@ -50,14 +52,14 @@ You can use containers to set up local services, like a database. In this sectio
NODE_ENV: production
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker-compose up`.
depends_on:
db:
condition: service_healthy
@ -75,7 +77,7 @@ You can use containers to set up local services, like a database. In this sectio
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -91,11 +93,10 @@ You can use containers to set up local services, like a database. In this sectio
> To learn more about the instructions in the Compose file, see [Compose file
> reference](/reference/compose-file/).
3. Open `src/persistence/postgres.js` in an IDE or text editor. You'll notice
that this application uses a Postgres database and requires some environment
variables in order to connect to the database. The `compose.yaml` file doesn't
have these variables defined yet.
4. Add the environment variables that specify the database configuration. The
following is the updated `compose.yaml` file.
@ -121,14 +122,14 @@ have these variables defined yet.
POSTGRES_DB: example
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker-compose up`.
depends_on:
db:
condition: service_healthy
@ -146,7 +147,7 @@ have these variables defined yet.
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -181,14 +182,14 @@ have these variables defined yet.
POSTGRES_DB: example
ports:
- 3000:3000
# The commented out section below is an example of how to define a PostgreSQL
# database that your application can use. `depends_on` tells Docker Compose to
# start the database before your application. The `db-data` volume persists the
# database data between container restarts. The `db-password` secret is used
# to set the database password. You must create `db/password.txt` and add
# a password of your choosing to it before running `docker-compose up`.
depends_on:
db:
condition: service_healthy
@ -208,7 +209,7 @@ have these variables defined yet.
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
@ -222,7 +223,7 @@ have these variables defined yet.
6. In the `docker-nodejs-sample` directory, create a directory named `db`.
7. In the `db` directory, create a file named `password.txt`. This file will
contain your database password.
You should now have at least the following contents in your
`docker-nodejs-sample` directory.
@ -376,7 +377,7 @@ services:
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
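The `retries` setting amounts to a simple retry loop around the `test` command. The sketch below mirrors that logic in Python under a simplifying assumption: `pg_isready` is modeled by any zero-argument callable returning `True` on success, and the `interval`/`timeout` waits are omitted to keep the sketch runnable:

```python
def wait_for_healthy(check, retries=5):
    """Return True once `check()` succeeds, giving up after `retries` attempts.

    Compose additionally sleeps `interval` between attempts and bounds each
    attempt by `timeout`; both are left out of this sketch.
    """
    for _attempt in range(retries):
        if check():
            return True
    return False

# A flaky check that succeeds on its third call, standing in for pg_isready.
calls = iter([False, False, True])
print(wait_for_healthy(lambda: next(calls)))  # → True
```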
@ -420,12 +421,11 @@ database and persist data. You also learned how to create a multi-stage
Dockerfile and set up a bind mount for development.
Related information:
- [Volumes top-level element](/reference/compose-file/volumes/)
- [Services top-level element](/reference/compose-file/services/)
- [Multi-stage builds](/manuals/build/building/multi-stage.md)
## Next steps
In the next section, you'll learn how to run unit tests using Docker.
{{< button text="Run your tests" url="run-tests.md" >}}
@ -5,7 +5,8 @@ weight: 30
keywords: node.js, node, test
description: Learn how to run your Node.js tests in a container.
aliases:
- /language/nodejs/run-tests/
- /guides/language/nodejs/run-tests/
---
## Prerequisites
@ -165,10 +166,9 @@ You should see output containing the following.
In this section, you learned how to run tests when developing locally using Compose and how to run tests when building your image.
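For reference, the Compose pattern for running tests in a one-off container looks like the following transcript — the service name (`server`) and npm script are assumptions based on this guide's sample app:

```console
$ docker compose run --build --rm server npm run test
```

`--build` rebuilds the image first, and `--rm` removes the one-off container when the tests finish.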
Related information:
- [docker compose run](/reference/cli/docker/compose/run/)
## Next steps
Next, you'll learn how to set up a CI/CD pipeline using GitHub Actions.
{{< button text="Configure CI/CD" url="configure-ci-cd.md" >}}
@ -3,7 +3,16 @@ title: Deployment and orchestration
keywords: orchestration, deploy, kubernetes, swarm
description: Get oriented on some basics of Docker and install Docker Desktop.
aliases:
- /get-started/orchestration/
- /guides/deployment-orchestration/orchestration/
summary: |
Explore the essentials of container orchestration with Docker, including key
concepts, tools like Kubernetes and Docker Swarm, and practical guides to
efficiently deploy and manage your applications.
subjects: [deploy]
levels: [beginner]
params:
time: 10 minutes
---
Containerization provides an opportunity to move and scale applications to
@ -37,7 +46,7 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
2. Select the checkbox labeled **Enable Kubernetes**, and select **Apply & Restart**. Docker Desktop automatically sets up Kubernetes for you. You'll know that Kubernetes has been successfully enabled when you see a green light beside 'Kubernetes _running_' in **Settings**.
3. To confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
```yaml
apiVersion: v1
@ -46,20 +55,20 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
name: demo
spec:
containers:
- name: testpod
image: alpine:latest
command: ["ping", "8.8.8.8"]
```
This describes a pod with a single container, isolating a simple ping to 8.8.8.8.
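The same manifest can also carry resource limits so the demo pod can't consume unbounded CPU or memory. The values below are illustrative assumptions, not part of the original sample:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: testpod
      image: alpine:latest
      command: ["ping", "8.8.8.8"]
      resources:
        limits:
          # assumed caps for the demo; tune these for real workloads
          cpu: 100m
          memory: 64Mi
```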
4. In a terminal, navigate to where you created `pod.yaml` and create your pod:
```console
$ kubectl apply -f pod.yaml
```
5. Check that your pod is up and running:
```console
$ kubectl get pods
@ -72,7 +81,7 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
demo 1/1 Running 0 4s
```
6. Check that you get the logs you'd expect for a ping process:
```console
$ kubectl logs demo
@ -88,7 +97,7 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
...
```
7. Finally, tear down your test pod:
```console
$ kubectl delete -f pod.yaml
@ -105,60 +114,60 @@ Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup a
3. To confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: demo
spec:
containers:
- name: testpod
image: alpine:latest
command: ["ping", "8.8.8.8"]
```
This describes a pod with a single container, isolating a simple ping to 8.8.8.8.
4. In PowerShell, navigate to where you created `pod.yaml` and create your pod:
```console
$ kubectl apply -f pod.yaml
```
5. Check that your pod is up and running:
```console
$ kubectl get pods
```
You should see something like:
```shell
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 0 4s
```
6. Check that you get the logs you'd expect for a ping process:
```console
$ kubectl logs demo
```
You should see the output of a healthy ping process:
```shell
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=37 time=21.393 ms
64 bytes from 8.8.8.8: seq=1 ttl=37 time=15.320 ms
64 bytes from 8.8.8.8: seq=2 ttl=37 time=11.111 ms
...
```
7. Finally, tear down your test pod:
```console
$ kubectl delete -f pod.yaml
```
{{< /tab >}}
{{< /tabs >}}
@ -174,62 +183,62 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
1. Open a terminal, and initialize Docker Swarm mode:
```console
$ docker swarm init
```
If all goes well, you should see a message similar to the following:
```shell
Swarm initialized: current node (tjjggogqpnpj2phbfbz8jd5oq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3e0hh0jd5t4yjg209f4g5qpowbsczfahv2dea9a1ay2l8787cf-2h4ly330d0j917ocvzw30j5x9 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:
```console
$ docker service create --name demo alpine:latest ping 8.8.8.8
```
3. Check that your service created one running container:
```console
$ docker service ps demo
```
You should see something like:
```shell
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
463j2s3y4b5o demo.1 alpine:latest docker-desktop Running Running 8 seconds ago
```
4. Check that you get the logs you'd expect for a ping process:
```console
$ docker service logs demo
```
You should see the output of a healthy ping process:
```shell
demo.1.463j2s3y4b5o@docker-desktop | PING 8.8.8.8 (8.8.8.8): 56 data bytes
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=0 ttl=37 time=13.005 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=1 ttl=37 time=13.847 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=2 ttl=37 time=41.296 ms
...
```
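Before tearing the service down, you can also experiment with scaling it; `docker service scale` adjusts the number of replicas. A sketch — the replica count here is just an example:

```console
$ docker service scale demo=3
$ docker service ps demo
```

Swarm starts additional copies of the container to match the desired count, and `docker service ps` lists one task per replica.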
5. Finally, tear down your test service:
```console
$ docker service rm demo
```
{{< /tab >}}
{{< tab name="Windows" >}}
@ -238,62 +247,62 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
1. Open a PowerShell, and initialize Docker Swarm mode:
```console
$ docker swarm init
```
If all goes well, you should see a message similar to the following:
```shell
Swarm initialized: current node (tjjggogqpnpj2phbfbz8jd5oq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3e0hh0jd5t4yjg209f4g5qpowbsczfahv2dea9a1ay2l8787cf-2h4ly330d0j917ocvzw30j5x9 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:
```console
$ docker service create --name demo alpine:latest ping 8.8.8.8
```
3. Check that your service created one running container:
```console
$ docker service ps demo
```
You should see something like:
```shell
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
463j2s3y4b5o demo.1 alpine:latest docker-desktop Running Running 8 seconds ago
```
4. Check that you get the logs you'd expect for a ping process:
```console
$ docker service logs demo
```
You should see the output of a healthy ping process:
```shell
demo.1.463j2s3y4b5o@docker-desktop | PING 8.8.8.8 (8.8.8.8): 56 data bytes
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=0 ttl=37 time=13.005 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=1 ttl=37 time=13.847 ms
demo.1.463j2s3y4b5o@docker-desktop | 64 bytes from 8.8.8.8: seq=2 ttl=37 time=41.296 ms
...
```
5. Finally, tear down your test service:
```console
$ docker service rm demo
```
{{< /tab >}}
{{< /tabs >}}
@ -3,22 +3,29 @@ title: PHP language-specific guide
linkTitle: PHP
description: Containerize and develop PHP apps using Docker
keywords: getting started, php, composer
summary: |
This guide explains how to containerize PHP applications using Docker,
covering image building, dependency management, optimizing image size, and
best practices for deploying PHP apps efficiently in containers.
toc_min: 1
toc_max: 2
aliases:
- /language/php/
- /guides/language/php/
languages: [php]
levels: [beginner]
params:
time: 20 minutes
---
The PHP language-specific guide teaches you how to create a containerized PHP application using Docker. In this guide, you'll learn how to:
- Containerize and run a PHP application
- Set up a local environment to develop a PHP application using containers
- Run tests for a PHP application within containers
- Configure a CI/CD pipeline for a containerized PHP application using GitHub Actions
- Deploy your containerized application locally to Kubernetes to test and debug your deployment
After completing the PHP language-specific guide, you should be able to containerize your own PHP application based on the examples and instructions provided in this guide.
Start by containerizing an existing PHP application.
{{< button text="Containerize a PHP app" url="containerize.md" >}}
@ -5,7 +5,8 @@ weight: 40
keywords: php, CI/CD
description: Learn how to configure CI/CD for your PHP application
aliases:
- /language/php/configure-ci-cd/
- /guides/language/php/configure-ci-cd/
---
## Prerequisites
@ -77,33 +78,29 @@ to Docker Hub.
```yaml
name: ci
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and test
uses: docker/build-push-action@v6
with:
target: test
load: true
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
@ -138,11 +135,10 @@ Save the workflow file and run the job.
In this section, you learned how to set up a GitHub Actions workflow for your application.
Related information:
- [Introduction to GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.
{{< button text="Test your deployment" url="./deploy.md" >}}
@ -5,14 +5,15 @@ weight: 10
keywords: php, containerize, initialize, apache, composer
description: Learn how to containerize a PHP application.
aliases:
- /language/php/containerize/
- /guides/language/php/containerize/
---
## Prerequisites
- You have installed the latest version of [Docker
Desktop](/get-started/get-docker.md).
- You have a [git client](https://git-scm.com/downloads). The examples in this
section use a command-line based git client, but you can use any client.
## Overview
@ -80,9 +81,10 @@ directory.
```
To learn more about the files that `docker init` added, see the following:
- [Dockerfile](/reference/dockerfile.md)
- [.dockerignore](/reference/dockerfile.md#dockerignore-file)
- [compose.yaml](/reference/compose-file/_index.md)
## Run the application
@ -124,11 +126,10 @@ In this section, you learned how you can containerize and run a simple PHP
application using Docker.
Related information:
- [docker init reference](/reference/cli/docker/init.md)
## Next steps
In the next section, you'll learn how you can develop your application using
Docker containers.
{{< button text="Develop your application" url="develop.md" >}}
@ -5,7 +5,8 @@ weight: 50
keywords: deploy, php, local, development
description: Learn how to deploy your application
aliases:
- /language/php/deploy/
- /guides/language/php/deploy/
---
## Prerequisites
@ -47,9 +48,9 @@ spec:
hello-php: web
spec:
containers:
- name: hello-site
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
@ -61,21 +62,21 @@ spec:
selector:
hello-php: web
ports:
- port: 80
targetPort: 80
nodePort: 30001
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
PHP application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 80 inside the pods it routes to, allowing you to reach your app
from the network.
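The Deployment could also declare a `replicas` count explicitly, in which case the NodePort service load-balances across all matching pods. A hypothetical fragment, showing only the changed field:

```yaml
spec:
  # hypothetical: run two identical pods behind the same NodePort service
  replicas: 2
```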
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
@ -139,6 +140,7 @@ To learn more about Kubernetes objects, see the [Kubernetes documentation](https
In this section, you learned how to use Docker Desktop to deploy your application to a fully-featured Kubernetes environment on your development machine.
Related information:
- [Kubernetes documentation](https://kubernetes.io/docs/home/)
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/kubernetes.md)
- [Swarm mode overview](/manuals/engine/swarm/_index.md)