Add Python get started docs (#12184)

* Add Python get started docs

Signed-off-by: Usha Mandya <usha.mandya@docker.com>

* Address Anca's review comments

Signed-off-by: Usha Mandya <usha.mandya@docker.com>

* Update pip to pip3

Signed-off-by: Usha Mandya <usha.mandya@docker.com>

* Update pip freeze to pip3 freeze

Signed-off-by: Usha Mandya <usha.mandya@docker.com>
This commit is contained in:
Usha Mandya 2021-01-28 17:46:36 +00:00 committed by GitHub
parent e3a0c7932e
commit c23d1daf63
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
8 changed files with 961 additions and 18 deletions


@@ -79,6 +79,8 @@ guides:
path: /language/python/develop/
- title: "Configure CI/CD"
path: /language/python/configure-ci-cd/
- title: "Deploy your app"
path: /language/python/deploy/
- sectiontitle: Java
section:
- title: "Overview"


@@ -4,8 +4,238 @@ keywords: python, build, images, dockerfile
description: Learn how to build your first Docker image by writing a Dockerfile
---
{% include_relative nav.html selected="1" %}
## Prerequisites
Work through the orientation and setup in Get started [Part 1](/get-started/) to understand Docker concepts.
## Overview
Now that we have a good overview of containers and the Docker platform, let's take a look at building our first image. An image includes everything needed to run an application: the code or binary, runtime, dependencies, and any other file system objects required.
To complete this tutorial, you need the following:
- Python version 3.8 or later. [Download Python](https://www.python.org/downloads/){: target="_blank" rel="noopener" class="_"}
- Docker running locally. Follow the instructions to [download and install Docker](../../desktop/index.md)
- An IDE or a text editor to edit files. We recommend using [Visual Studio Code](https://code.visualstudio.com/Download){: target="_blank" rel="noopener" class="_"}.
## Sample application
Let's create a simple Python application using the Flask framework that we'll use as our example. Create a directory on your local machine named `python-docker` and follow the steps below to create a simple web server.
```shell
$ cd /path/to/python-docker
$ pip3 install Flask
$ pip3 freeze > requirements.txt
$ touch app.py
```
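As a rough sketch of what `pip3 freeze` does here (the package names and version numbers below are illustrative, not necessarily what you'll see): it emits one pinned `package==version` line per installed package, and redirecting that output is what populates `requirements.txt`.

```python
# pip3 freeze emits one pinned "package==version" line per installed
# package; requirements.txt is simply that output saved to a file.
# The versions below are illustrative.
freeze_output = "click==7.1.2\nFlask==1.1.2\nJinja2==2.11.2\n"

pins = dict(line.split("==") for line in freeze_output.splitlines())
print(pins["Flask"])  # 1.1.2
```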
Now, let's add some code to handle simple web requests. Open this working directory in your favorite IDE and enter the following code into the `app.py` file.
```python
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello, Docker!'
```
## Test the application
Let's start our application and make sure it's running properly. Open your terminal and navigate to the working directory you created.
```shell
$ python3 -m flask run
```
To test that the application is working properly, open a new browser and navigate to `http://localhost:5000`.
Switch back to the terminal where our server is running, and you should see the following requests in the server logs. The date and timestamp will be different on your machine.
```shell
127.0.0.1 - - [22/Sep/2020 11:07:41] "GET / HTTP/1.1" 200 -
```
## Create a Dockerfile for Python
Now that our application is running properly, let's take a look at creating a Dockerfile.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. When we tell Docker to build our image by executing the `docker build` command, Docker reads these instructions, executes them consecutively, and creates a Docker image as a result.
Let's walk through creating a Dockerfile for our application. In the root of your working directory, create a file named `Dockerfile` and open this file in your text editor.
> **Note**
>
> The name of the Dockerfile is not important, but the default filename for many commands is simply `Dockerfile`. Therefore, we'll use that as our filename throughout this series.
The first thing we need to do is to add a line in our Dockerfile that tells Docker what base image we would like to use for our application.
```dockerfile
FROM python:3.8-slim-buster
```
Docker images can be inherited from other images. Therefore, instead of creating our own base image, we'll use the official Python image that already has all the tools and packages that we need to run a Python application.
> **Note**
>
> To learn more about creating your own base images, see [Creating base images](https://docs.docker.com/develop/develop-images/baseimages/).
To make things easier when running the rest of our commands, let's create a working directory. This instructs Docker to use this path as the default location for all subsequent commands. By doing this, we do not have to type out full file paths but can use relative paths based on the working directory.
```dockerfile
WORKDIR /app
```
Usually, the very first thing you do once you've downloaded a project written in Python is to install `pip` packages. This ensures that your application has all its dependencies installed.
Before we can run `pip3 install`, we need to get our `requirements.txt` file into our image. We'll use the `COPY` command to do this. The `COPY` command takes two parameters. The first parameter tells Docker what file(s) you would like to copy into the image. The second parameter tells Docker where you want that file(s) to be copied to. We'll copy the `requirements.txt` file into our working directory `/app`.
```dockerfile
COPY requirements.txt requirements.txt
```
Once we have our `requirements.txt` file inside the image, we can use the `RUN` command to execute the command `pip3 install`. This works exactly the same as if we were running `pip3 install` locally on our machine, but this time the modules are installed into the image.
```dockerfile
RUN pip3 install -r requirements.txt
```
At this point, we have an image that is based on Python version 3.8 and we have installed our dependencies. The next step is to add our source code into the image. We'll use the `COPY` command just like we did with our `requirements.txt` file above.
```dockerfile
COPY . .
```
This `COPY` command takes all the files located in the current directory and copies them into the image. Now, all we have to do is to tell Docker what command we want to run when our image is executed inside a container. We do this using the `CMD` command.
```dockerfile
CMD [ "python3", "app.py" ]
```
Here's the complete Dockerfile.
```dockerfile
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "app.py" ]
```
### Directory structure
Just to recap, we created a directory on our local machine called `python-docker` and created a simple Python application using the Flask framework. We also used the `requirements.txt` file to gather our requirements, and created a Dockerfile containing the commands to build an image. The Python application directory structure would now look like:
```shell
python-docker
|____ app.py
|____ requirements.txt
|____ Dockerfile
```
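Although this guide doesn't create one, a `.dockerignore` file in the same directory is a common companion: it keeps files you don't want (caches, version-control data) out of the build context. A minimal, hypothetical example:

```
__pycache__/
*.pyc
.git
```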
## Build an image
Now that we've created our Dockerfile, let's build our image. To do this, we use the `docker build` command. The `docker build` command builds Docker images from a Dockerfile and a “context”. A build's context is the set of files located in the specified PATH or URL. The Docker build process can access any of the files located in this context.
The build command optionally takes a `--tag` flag. The tag is used to set the name of the image and an optional tag in the format `name:tag`. We'll leave off the optional `tag` for now to help simplify things. If you do not pass a tag, Docker uses “latest” as its default tag. You can see this in the last line of the build output.
Let's build our first Docker image.
```shell
$ docker build --tag python-docker .
[+] Building 2.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile
=> => transferring dockerfile: 203B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> [internal] load metadata for docker.io/library/python:3.8-slim-buster
=> [1/5] FROM docker.io/library/python:3.8-slim-buster
=> [internal] load build context
=> => transferring context: 953B
=> CACHED [2/5] WORKDIR /app
=> [3/5] COPY requirements.txt requirements.txt
=> [4/5] RUN pip3 install -r requirements.txt
=> [5/5] COPY . .
=> exporting to image
=> => exporting layers
=> => writing image sha256:8cae92a8fbd6d091ce687b71b31252056944b09760438905b726625831564c4c
=> => naming to docker.io/library/python-docker
```
## View local images
To see a list of images we have on our local machine, we have two options. One is to use the CLI and the other is to use [Docker Desktop](../../desktop/dashboard.md#explore-your-images). As we are currently working in the terminal, let's take a look at listing images using the CLI.
To list images, simply run the `docker images` command.
```shell
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
python-docker latest 8cae92a8fbd6 3 minutes ago 123MB
python 3.8-slim-buster be5d294735c6 9 days ago 113MB
```
You should see at least two images listed: one for the base image `python:3.8-slim-buster`, and the other for the image we just built, `python-docker:latest`.
## Tag images
As mentioned earlier, an image name is made up of slash-separated name components. Name components may contain lowercase letters, digits and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.
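As a rough illustration of these rules, here is a simplified sketch, not the full reference grammar (it ignores registry hosts and the `:tag` suffix):

```python
import re

# One slash-separated name component, per the rules above: lowercase
# alphanumerics joined by a period, one or two underscores, or one or
# more dashes; a separator may not start or end a component.
COMPONENT = re.compile(r"^[a-z0-9]+(?:(?:[.]|_{1,2}|-+)[a-z0-9]+)*$")

def is_valid_name(name: str) -> bool:
    """Check every slash-separated name component."""
    return all(COMPONENT.match(part) for part in name.split("/"))

print(is_valid_name("python-docker"))         # True
print(is_valid_name("my_org/python-docker"))  # True
print(is_valid_name("Python-Docker"))         # False: uppercase
print(is_valid_name("-bad/name"))             # False: leading separator
```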
An image is made up of a manifest and a list of layers. Do not worry too much about manifests and layers at this point other than a “tag” points to a combination of these artifacts. You can have multiple tags for an image. Let's create a second tag for the image we built and take a look at its layers.
To create a new tag for the image we've built above, run the following command.
```shell
$ docker tag python-docker:latest python-docker:v1.0.0
```
The `docker tag` command creates a new tag for an image. It does not create a new image. The tag points to the same image and is just another way to reference the image.
Now, run the `docker images` command to see a list of our local images.
```shell
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
python-docker latest 8cae92a8fbd6 4 minutes ago 123MB
python-docker v1.0.0 8cae92a8fbd6 4 minutes ago 123MB
python 3.8-slim-buster be5d294735c6 9 days ago 113MB
```
You can see that we have two images that start with `python-docker`. We know they are the same image because if you take a look at the `IMAGE ID` column, you can see that the values are the same for the two images.
Let's remove the tag that we just created. To do this, we'll use the `rmi` command. The `rmi` command stands for remove image.
```shell
$ docker rmi python-docker:v1.0.0
Untagged: python-docker:v1.0.0
```
Note that the response from Docker tells us that the image has not been removed but only “untagged”. You can check this by running the `docker images` command.
```shell
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
python-docker latest 8cae92a8fbd6 6 minutes ago 123MB
python 3.8-slim-buster be5d294735c6 9 days ago 113MB
```
Our image that was tagged with `:v1.0.0` has been removed, but we still have the `python-docker:latest` tag available on our machine.
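The untag-versus-remove behavior can be sketched as a toy model: tag names are just pointers to an image ID, and deleting a tag leaves the image itself in place (IDs and names below are from the example above).

```python
# Toy model of the relationship described above: tags are names that
# point at an image ID; the image itself is stored once.
images = {"8cae92a8fbd6": "<image layers>"}
tags = {
    "python-docker:latest": "8cae92a8fbd6",
    "python-docker:v1.0.0": "8cae92a8fbd6",
}

# docker rmi python-docker:v1.0.0 -> "Untagged": only the name goes away.
del tags["python-docker:v1.0.0"]

print("8cae92a8fbd6" in images)        # True: the image is still there
print("python-docker:latest" in tags)  # True: the other tag still works
```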
## Next steps
In this module, we took a look at setting up our example Python application that we will use for the rest of the tutorial. We also created a Dockerfile that we used to build our Docker image. Then, we took a look at tagging our images and removing images. In the next module, we'll take a look at how to:
[Run your image as a container](run-containers.md){: .button .outline-btn}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs](https://github.com/docker/docker.github.io/issues/new?title=[Python%20docs%20feedback]){:target="_blank" rel="noopener" class="_"} GitHub repository. Alternatively, [create a PR](https://github.com/docker/docker.github.io/pulls){:target="_blank" rel="noopener" class="_"} to suggest updates.
<br />


@@ -4,8 +4,239 @@ keywords: python, CI/CD, local, development
description: Learn how to Configure CI/CD for your application
---
{% include_relative nav.html selected="4" %}
This page guides you through the process of setting up a GitHub Actions CI/CD pipeline with Docker containers. Before setting up a new pipeline, we recommend that you take a look at [Ben's blog](https://www.docker.com/blog/best-practices-for-using-docker-hub-for-ci-cd/){:target="_blank" rel="noopener" class="_"} on CI/CD best practices.
This guide contains instructions on how to:
1. Use a sample Docker project as an example to configure GitHub Actions
2. Set up the GitHub Actions workflow
3. Optimize your workflow to reduce the number of pulls and the total build time, and finally,
4. Push only specific versions to Docker Hub.
## Set up a Docker project
Let's get started. This guide uses a simple Docker project as an example. The [SimpleWhaleDemo](https://github.com/usha-mandya/SimpleWhaleDemo){:target="_blank" rel="noopener" class="_"} repository contains an Nginx Alpine image. You can either clone this repository, or use your own Docker project.
![SimpleWhaleDemo](../../ci-cd/images/simplewhaledemo.png){:width="500px"}
Before we start, ensure you can access [Docker Hub](https://hub.docker.com/) from any workflows you create. To do this:
1. Add your Docker ID as a secret to GitHub. Navigate to your GitHub repository and click **Settings** > **Secrets** > **New secret**.
2. Create a new secret with the name `DOCKER_HUB_USERNAME` and your Docker ID as value.
3. Create a new Personal Access Token (PAT). To create a new token, go to [Docker Hub Settings](https://hub.docker.com/settings/security) and then click **New Access Token**.
4. Let's call this token **simplewhaleci**.
![New access token](../../ci-cd/images/github-access-token.png){:width="500px"}
5. Now, add this Personal Access Token (PAT) as a second secret into the GitHub secrets UI with the name `DOCKER_HUB_ACCESS_TOKEN`.
![GitHub Secrets](../../ci-cd/images/github-secrets.png){:width="500px"}
## Set up the GitHub Actions workflow
In the previous section, we created a PAT and added it to GitHub to ensure we can access Docker Hub from any workflow. Now, let's set up our GitHub Actions workflow to build and store our images in Hub. We can achieve this by creating two Docker actions:
1. The first action enables us to log in to Docker Hub using the secrets we stored in the GitHub Repository.
2. The second one is the build and push action.
In this example, let us set the push flag to `true` as we also want to push. We'll then add a tag so the image is always pushed as the latest version. Lastly, we'll echo the image digest to see what was pushed.
To set up the workflow:
1. Go to your repository in GitHub and then click **Actions** > **New workflow**.
2. Click **set up a workflow yourself** and add the following content:
First, we will name this workflow:
```yaml
name: CI to Docker Hub
```
Then, we will choose when we run this workflow. In our example, we are going to do it for every push against the main branch of our project:
```yaml
on:
push:
branches: [ main ]
```
Now, we need to specify what we actually want to happen within our workflow (which jobs). We are going to add a build job and specify that it runs on the latest Ubuntu instance available:
```yaml
jobs:
build:
runs-on: ubuntu-latest
```
Now, we can add the steps required. The first one checks out our repository under `$GITHUB_WORKSPACE`, so our workflow can access it. The second uses our PAT and username to log in to Docker Hub. The third sets up the builder; the build action uses BuildKit under the hood through a simple Buildx action, which we will also set up.
{% raw %}
```yaml
steps:
- name: Check Out Repo
uses: actions/checkout@v2
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
context: ./
file: ./Dockerfile
push: true
tags: ushamandya/simplewhale:latest
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}
```
{% endraw %}
Now, let the workflow run for the first time and then tweak the Dockerfile to make sure the CI is running and pushing the new image changes:
![CI to Docker Hub](../../ci-cd/images/ci-to-hub.png){:width="500px"}
## Optimizing the workflow
Next, let's look at how we can optimize the GitHub Actions workflow through build cache. This has two main advantages:
1. Build cache reduces the build time as it will not have to re-download all of the images, and
2. It also reduces the number of pulls we complete against Docker Hub. To take advantage of this, we need to use the GitHub Actions cache.
Let's set up a builder with a build cache. First, we need to set up a cache for the builder. In this example, we add the path and keys to store the cache under, using the GitHub Actions cache.
{% raw %}
```yaml
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
```
{% endraw %}
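The `key`/`restore-keys` pair above can be pictured with a toy model (the real matching happens server-side in the GitHub Actions cache service): the exact key is tried first, and failing that, any stored cache whose key starts with a restore-key prefix is reused.

```python
# Toy model of key / restore-keys: try the exact key first, then fall
# back to any cache whose key starts with a restore-key prefix.
caches = {"Linux-buildx-abc123": "<cached layers>"}  # from an earlier run

key = "Linux-buildx-def456"    # ${ runner.os }-buildx-${ github.sha }
restore_key = "Linux-buildx-"  # ${ runner.os }-buildx-

hit = caches.get(key) or next(
    (v for k, v in caches.items() if k.startswith(restore_key)), None)
print(hit)  # restored via the prefix, despite a new commit sha
```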
And lastly, after adding the builder and build cache snippets to the top of the Actions file, we need to add some extra attributes to the build and push step. This involves:
1. Setting up the builder to use the output of the buildx step, and then
2. Using the cache we set up earlier for it to store to and to retrieve from.
{% raw %}
```yaml
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
context: ./
file: ./Dockerfile
builder: ${{ steps.buildx.outputs.name }}
push: true
tags: ushamandya/simplewhale:latest
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}
```
{% endraw %}
Now, run the workflow again and verify that it uses the build cache.
## Push tagged versions to Docker Hub
Earlier, we learnt how to set up a GitHub Actions workflow for a Docker project and how to optimize the workflow by setting up a builder with build cache. Let's now look at how we can improve it further. We can do this by having tagged versions behave differently from all commits to master. This means only specific versions are pushed, instead of every commit updating the latest version on Docker Hub.
With this approach, your commits go to a local registry for use in nightly tests, so you can always test what is latest while reserving your tagged versions for release to Docker Hub.
This involves two steps:
1. Modifying the GitHub workflow to only push commits with specific tags to Docker Hub
2. Setting up a GitHub Actions file to store the latest commit as an image in the GitHub registry
First, let us modify our existing GitHub workflow to only push to Hub if there's a particular tag. For example:
{% raw %}
```yaml
on:
push:
tags:
- "v*.*.*"
```
{% endraw %}
This ensures that the main CI only triggers when we tag our commits in the form `vn.n.n`. Let's test this. For example, run the following command:
```bash
git tag -a v1.0.2
git push origin v1.0.2
```
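To see which tag names the `v*.*.*` glob above selects, here is a quick sketch; `fnmatch` globbing is close to, though not identical to, GitHub's filter syntax, so treat it as an approximation.

```python
import fnmatch

# Which of these Git tags would the "v*.*.*" filter select?
tags = ["v1.0.2", "v2.0.0", "v1.0", "latest", "release-1"]
matching = [t for t in tags if fnmatch.fnmatch(t, "v*.*.*")]
print(matching)  # ['v1.0.2', 'v2.0.0']
```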
Now, go to GitHub and check your Actions:
![Push tagged version](../../ci-cd/images/push-tagged-version.png){:width="500px"}
Now, let's set up a second GitHub Actions file to store our latest commit as an image in the GitHub registry. You may want to do this to:
1. Run your nightly tests or recurring tests, or
2. Share work-in-progress images with colleagues.
Let's clone our previous GitHub Actions file and add back our previous logic for all pushes. This means we have two workflow files: our previous one, and the new one we will now work on.
Next, change your Docker Hub login to a GitHub container registry login:
{% raw %}
```yaml
if: github.event_name != 'pull_request'
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GHCR_TOKEN }}
```
{% endraw %}
Remember to change how the image is tagged. The following example keeps `latest` as the only tag. However, you can add any logic to this if you prefer:
{% raw %}
```yaml
tags: ghcr.io/${{ github.repository_owner }}/simplewhale:latest
```
{% endraw %}
![Update tagged images](../../ci-cd/images/ghcr-logic.png){:width="500px"}
Now, we will have two different flows: one for our changes to master, and one for our pull requests. Next, we need to modify what we had before to ensure we are pushing our PRs to the GitHub registry rather than to Docker Hub.
## Next steps
In this module, you have learnt how to add a GitHub Actions workflow to an existing Docker project, optimize your workflow to improve build times and reduce the number of pulls, and finally, push only specific versions to Docker Hub. You can also set up nightly tests against the latest tag, test each PR, or do something more elegant with the tags you are using and make use of the Git tag as the image tag.
You can also consider deploying your application to the cloud. For detailed instructions, see:
[Deploy your application to the cloud](deploy.md){: .button .outline-btn}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs](https://github.com/docker/docker.github.io/issues/new?title=[Python%20docs%20feedback]){:target="_blank" rel="noopener" class="_"} GitHub repository. Alternatively, [create a PR](https://github.com/docker/docker.github.io/pulls){:target="_blank" rel="noopener" class="_"} to suggest updates.
<br />

language/python/deploy.md Normal file

@@ -0,0 +1,29 @@
---
title: "Deploy your app to the cloud"
keywords: deploy, ACI, ECS, Python, local, development
description: Learn how to deploy your application to the cloud.
---
{% include_relative nav.html selected="5" %}
Now that we have configured a CI/CD pipeline, let's look at how we can deploy the application to the cloud. Docker supports deploying containers to Azure ACI and AWS ECS.
## Docker and ACI
The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure allowing developers to quickly run applications using the Docker CLI or VS Code extension, to switch seamlessly from local development to cloud deployment.
For detailed instructions, see [Deploying Docker containers on Azure](/cloud/aci-integration/).
## Docker and ECS
The Docker ECS Integration enables developers to use native Docker commands in Docker Compose CLI to run applications in Amazon EC2 Container Service (ECS) when building cloud-native applications.
The integration between Docker and Amazon ECS allows developers to use the Docker Compose CLI to set up an AWS context in one Docker command, letting you switch from a local context to a cloud context and run applications quickly and easily. This simplifies multi-container application development on Amazon ECS using Compose files.
For detailed instructions, see [Deploying Docker containers on ECS](/cloud/ecs-integration.md).
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs](https://github.com/docker/docker.github.io/issues/new?title=[Python%20docs%20feedback]){:target="_blank" rel="noopener" class="_"} GitHub repository. Alternatively, [create a PR](https://github.com/docker/docker.github.io/pulls){:target="_blank" rel="noopener" class="_"} to suggest updates.
<br />


@@ -4,8 +4,255 @@ keywords: python, local, development, run,
description: Learn how to develop your application locally.
---
{% include_relative nav.html selected="3" %}
<br />
## Prerequisites
Work through the steps to build an image and run it as a containerized application in [Run your image as a container](run-containers.md).
## Introduction
In this module, we'll walk through setting up a local development environment for the application we built in the previous modules. We'll use Docker to build our images and Docker Compose to make everything a whole lot easier.
## Run a database in a container
First, we'll take a look at running a database in a container and how we use volumes and networking to persist our data and allow our application to talk with the database. Then we'll pull everything together into a Compose file, which allows us to set up and run a local development environment with one command. Finally, we'll take a look at connecting a debugger to our application running inside a container.
Instead of downloading MySQL, installing, configuring, and then running the MySQL database as a service, we can use the Docker Official Image for MySQL and run it in a container.
Before we run MySQL in a container, we'll create a couple of volumes that Docker can manage to store our persistent data and configuration. Let's use the managed volumes feature that Docker provides instead of using bind mounts. You can read all about [Using volumes](../../storage/volumes.md) in our documentation.
Let's create our volumes now. We'll create one for the data and one for the configuration of MySQL.
```shell
$ docker volume create mysql
$ docker volume create mysql_config
```
Now we'll create a network that our application and database will use to talk to each other. The network is called a user-defined bridge network and gives us a nice DNS lookup service which we can use when creating our connection string.
```shell
$ docker network create mysqlnet
```
Now we can run MySQL in a container and attach to the volumes and network we created above. Docker pulls the image from Hub and runs it for you locally.
```shell
$ docker run -it --rm -d -v mysql:/var/lib/mysql \
-v mysql_config:/etc/mysql -p 3306:3306 \
--network mysqlnet \
--name mysqldb \
-e MYSQL_ALLOW_EMPTY_PASSWORD=true \
mysql
```
Now, let's make sure that our MySQL database is running and that we can connect to it. Connect to the running MySQL database inside the container using the following command:
```shell
$ docker run -it --network mysqlnet --rm mysql mysql -hmysqldb
Enter password: ********
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.23 MySQL Community Server - GPL
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
```
### Connect the application to the database
In the above command, we used the same MySQL image to connect to the database, but this time we passed the `mysql` command to the container with the `-h` flag specifying the name of our MySQL container. Press CTRL-D to exit the MySQL interactive terminal.
Next, we'll update the sample application we created in the [Build images](build-images.md#sample-application) module. To see the directory structure of the Python app, see [Python application directory structure](build-images.md#directory-structure).
Okay, now that we have a running MySQL, let's update `app.py` to use MySQL as a datastore. Let's also add some routes to our server: one for fetching records and one for initializing the database.
```python
import mysql.connector
import json
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello, Docker!'


@app.route('/widgets')
def get_widgets():
    mydb = mysql.connector.connect(
        host="mysqldb",
        user="root",
        password="p@ssw0rd1",
        database="inventory"
    )
    cursor = mydb.cursor()

    cursor.execute("SELECT * FROM widgets")

    row_headers = [x[0] for x in cursor.description]  # this will extract row headers
    results = cursor.fetchall()
    json_data = []
    for result in results:
        json_data.append(dict(zip(row_headers, result)))

    cursor.close()

    return json.dumps(json_data)


@app.route('/db')
def db_init():
    mydb = mysql.connector.connect(
        host="mysqldb",
        user="root",
        password="p@ssw0rd1"
    )
    cursor = mydb.cursor()

    cursor.execute("DROP DATABASE IF EXISTS inventory")
    cursor.execute("CREATE DATABASE inventory")
    cursor.close()

    mydb = mysql.connector.connect(
        host="mysqldb",
        user="root",
        password="p@ssw0rd1",
        database="inventory"
    )
    cursor = mydb.cursor()

    cursor.execute("DROP TABLE IF EXISTS widgets")
    cursor.execute(
        "CREATE TABLE widgets (name VARCHAR(255), description VARCHAR(255))")
    cursor.close()

    return 'init database'


if __name__ == "__main__":
    app.run(host='0.0.0.0')
```
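The `dict(zip(...))` idiom in the `/widgets` route can be seen in isolation, with hypothetical row data standing in for the cursor results:

```python
import json

# Sketch of how the /widgets route converts cursor rows to JSON:
# cursor.description yields column metadata; cursor.fetchall() yields tuples.
row_headers = ["name", "description"]              # from cursor.description
results = [("widget01", "this is a test widget")]  # from cursor.fetchall()

json_data = [dict(zip(row_headers, row)) for row in results]
print(json.dumps(json_data))
# [{"name": "widget01", "description": "this is a test widget"}]
```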
We've added the MySQL module and updated the code to connect to the database server, create a database, and create a table. We also added a couple of routes: one to fetch widgets and one to initialize the database. We now need to rebuild our image so it contains our changes.
First, let's add the `mysql-connector-python` module to our application using pip.
```shell
$ pip3 install mysql-connector-python
$ pip3 freeze > requirements.txt
```
Now we can build our image.
```shell
$ docker build --tag python-docker .
```
Now, let's add the container to the database network and then run our container. This allows us to access the database by its container name.
```shell
$ docker run \
-it --rm -d \
--network mysqlnet \
--name rest-server \
-p 5000:5000 \
python-docker
```
Let's test that our application is connected to the database and is able to add a record.
```shell
$ curl http://localhost:5000/db
$ curl --request POST \
--url http://localhost:5000/widgets \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data 'name=widget01' \
--data 'description=this is a test widget'
```
You should receive the following JSON back from our service.
```shell
[{"name": "widget01", "description": "this is a test widget"}]
```
## Use Compose to develop locally
In this section, we'll create a Compose file to start our `python-docker` application and the MySQL database using a single command. We'll also set up the Compose file to start the `python-docker` application in debug mode so that we can connect a debugger to the running process.
Open the `python-docker` code in your IDE or a text editor and create a new file named `docker-compose.dev.yml`. Copy and paste the following commands into the file.
```yaml
version: '3.8'
services:
web:
build:
context: .
ports:
- 5000:5000
volumes:
- ./:/app
mysqldb:
image: mysql
ports:
- 3306:3306
environment:
- MYSQL_ROOT_PASSWORD=p@ssw0rd1
volumes:
- mysql:/var/lib/mysql
- mysql_config:/etc/mysql
volumes:
mysql:
mysql_config:
```
This Compose file is super convenient as we do not have to type all the parameters to pass to the `docker run` command. We can declaratively do that using a Compose file.
We expose port 5000 so that we can reach the dev web server inside the container. We also map our local source code into the running container to make changes in our text editor and have those changes picked up in the container.
Another really cool feature of using a Compose file is that we have service resolution set up to use the service names. Therefore, we are now able to use “mysqldb” in our connection string. The reason we use “mysqldb” is because that is what we've named our MySQL service as in the Compose file.
Now, to start our application and to confirm that it is running properly, run the following command:
```shell
$ docker-compose -f docker-compose.dev.yml up --build
```
We pass the `--build` flag so Docker compiles our image and then starts the containers.
Now let's test our API endpoint. Run the following `curl` command:
```shell
$ curl --request GET --url http://localhost:5000/widgets
```
You should receive the following response:
```shell
[]
```
This is because our database is empty.
## Next steps
In this module, we took a look at using Compose to run our application and database together with a single command. We also set up the Compose file to map our source code into the running container so that changes are picked up without rebuilding the image.
In the next module, we'll take a look at how to set up a CI/CD pipeline using GitHub Actions. See:
[Configure CI/CD](configure-ci-cd.md){: .button .outline-btn}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs ](https://github.com/docker/docker.github.io/issues/new?title=[Python%20docs%20feedback]){:target="_blank" rel="noopener" class="_"} GitHub repository. Alternatively, [create a PR](https://github.com/docker/docker.github.io/pulls){:target="_blank" rel="noopener" class="_"} to suggest updates.
The Python getting started guide teaches you how to create a containerized Python application using Docker. In this guide, you'll learn how to:
* Create a sample Python application
* Create a new Dockerfile which contains instructions required to build a Python image
* Build an image and run the newly built image as a container
* Set up volumes and networking
* Orchestrate containers using Compose
* Use containers for development
* Configure a CI/CD pipeline for your application using GitHub Actions
* Deploy your application to the cloud
After completing the Python getting started modules, you should be able to containerize your own Python application based on the examples and instructions provided in this guide.
Let's get started!
[Build your first Python image](build-images.md){: .button .outline-btn}
<br />

language/python/nav.html
<ul class="pagination">
<li {% if include.selected=="1"%}class="active"{% endif %}><a href="/language/python/build-images/">Build images</a></li>
<li {% if include.selected=="2"%}class="active"{% endif %}><a href="/language/python/run-containers/">Run your image as a container</a></li>
<li {% if include.selected=="3"%}class="active"{% endif %}><a href="/language/python/develop/">Use containers for development</a></li>
<li {% if include.selected=="4"%}class="active"{% endif %}><a href="/language/python/configure-ci-cd/">Configure CI/CD</a></li>
<li {% if include.selected=="5"%}class="active"{% endif %}><a href="/language/python/deploy/">Deploy your app</a></li>
</ul>

keywords: Python, run, image, container,
description: Learn how to run the image as a container.
---
{% include_relative nav.html selected="2" %}
<br />
## Prerequisites
Work through the steps to build a Python image in [Build your Python image](build-images.md).
## Overview
In the previous module, we created our sample application, and then we created a Dockerfile that we used to produce an image. We created our image using the `docker build` command. Now that we have an image, we can run that image and see if our application is running correctly.
A container is a normal operating system process except that this process is isolated in that it has its own file system, its own networking, and its own isolated process tree separate from the host.
To run an image inside of a container, we use the `docker run` command. The `docker run` command requires one parameter, which is the name of the image. Let's start our image and make sure it is running correctly. Run the following command in your terminal.
```shell
$ docker run python-docker
```
After running this command, you'll notice that you were not returned to the command prompt. This is because our application is a REST server and runs in a loop waiting for incoming requests without returning control back to the OS until we stop the container.
Let's make a `POST` request to the server using the `curl` command.
```shell
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{
"msg": "testing"
}'
curl: (7) Failed to connect to localhost port 8000: Connection refused
```
As you can see, our `curl` command failed because the connection to our server was refused. This means we were not able to connect to localhost on port 8000. This is expected because our container runs in isolation, which includes networking. Let's stop the container and restart it with port 8000 published on our local network.
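You can verify the same thing programmatically. Here is a small standard-library sketch (generic, not part of the sample application) that checks whether anything on the host is listening on a given port:

```python
# Generic helper, not part of the sample app: returns True if a TCP
# connection to host:port succeeds within the timeout.
import socket

def port_open(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# While the container's port is unpublished, this should report False.
print(port_open("localhost", 8000))
```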
To stop the container, press ctrl-c. This will return you to the terminal prompt.
To publish a port for our container, we'll use the `--publish` flag (`-p` for short) on the `docker run` command. The format of the `--publish` flag is `[host port]:[container port]`. So, if we wanted to expose port 8000 inside the container to port 3000 outside the container, we would pass `3000:8000` to the `--publish` flag.
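As a quick illustration of that format (Docker does this parsing for you; this helper is purely explanatory):

```python
# Purely illustrative: splits a 'HOST:CONTAINER' --publish mapping
# into a pair of integer ports.
def parse_publish(mapping):
    host_port, container_port = mapping.split(":")
    return int(host_port), int(container_port)

print(parse_publish("3000:8000"))  # (3000, 8000): host 3000 -> container 8000
```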
Start the container and publish port 8000 inside the container to port 8000 on the host:
```shell
$ docker run --publish 8000:8000 python-docker
```
Now, let's rerun the `curl` command from above:
```shell
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{
"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}
```
Success! We were able to connect to the application running inside of our container on port 8000. Switch back to the terminal where your container is running and you should see the POST request logged to the console.
```shell
2020-09-01T17:36:09:8770 INFO: POST /test
```
Press ctrl-c to stop the container.
## Run in detached mode
This is great so far, but our sample application is a web server, and we don't have to be connected to the container. Docker can run your container in detached mode, that is, in the background. To do this, we can use the `--detach` flag (or `-d` for short). Docker starts your container the same as before but this time will “detach” from the container and return you to the terminal prompt.
```shell
$ docker run -d -p 8000:8000 python-docker
ce02b3179f0f10085db9edfccd731101868f58631bdf918ca490ff6fd223a93b
```
Docker started our container in the background and printed the Container ID on the terminal.
Again, let's make sure that our container is running properly. Run the same `curl` command from above.
```shell
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{
"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}
```
## List containers
Since we ran our container in the background, how do we know if our container is running, or what other containers are running on our machine? Just like on Linux, where we run the `ps` command to see a list of processes, we can run the `docker ps` command, which displays a list of containers running on our machine.
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f python-docker "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 0.0.0.0:8000->8000/tcp wonderful_kalam
```
The `docker ps` command provides a bunch of information about our running containers. We can see the container ID, the image running inside the container, the command that was used to start the container, when it was created, the status, the exposed ports, and the name of the container.
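To see how those columns line up, here is a small sketch that splits the sample row above into its fields (the row is copied from the example output; real output on your machine will differ):

```python
# Splits one row of the sample `docker ps` output on runs of two or
# more spaces; the row itself is copied from the example output above.
import re

row = ('ce02b3179f0f   python-docker   "docker-entrypoint.s…"   '
       '6 minutes ago   Up 6 minutes   0.0.0.0:8000->8000/tcp   wonderful_kalam')
container_id, image, command, created, status, ports, name = re.split(r"\s{2,}", row)
print(name)  # wonderful_kalam
```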
You are probably wondering where the name of our container is coming from. Since we didn't provide a name for the container when we started it, Docker generated a random name. We'll fix this in a minute, but first we need to stop the container. To stop the container, run the `docker stop` command, which does just that: it stops the container. You need to pass the name of the container, or you can use the container ID.
```shell
$ docker stop wonderful_kalam
wonderful_kalam
```
Now, rerun the `docker ps` command to see a list of running containers.
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
## Stop, start, and name containers
You can start, stop, and restart Docker containers. When we stop a container, it is not removed, but its status is changed to stopped and the process inside the container is stopped. When we ran the `docker ps` command earlier, the default output only showed running containers. When we pass the `--all` flag (or `-a` for short), we see all containers on our machine, irrespective of whether they are started or stopped.
```shell
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f python-docker "docker-entrypoint.s…" 16 minutes ago Exited (0) 5 minutes ago wonderful_kalam
ec45285c456d python-docker "docker-entrypoint.s…" 28 minutes ago Exited (0) 20 minutes ago agitated_moser
fb7a41809e5d python-docker "docker-entrypoint.s…" 37 minutes ago Exited (0) 36 minutes ago goofy_khayyam
```
You should now see several containers listed. These are containers that we started and stopped but have not been removed.
Let's restart the container that we just stopped. Locate the name of the container we just stopped and replace the name in the restart command below with the name of the container on your system.
```shell
$ docker restart wonderful_kalam
```
Now list all the containers again using the `docker ps` command.
```shell
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f python-docker "docker-entrypoint.s…" 19 minutes ago Up 8 seconds 0.0.0.0:8000->8000/tcp wonderful_kalam
ec45285c456d python-docker "docker-entrypoint.s…" 31 minutes ago Exited (0) 23 minutes ago agitated_moser
fb7a41809e5d python-docker "docker-entrypoint.s…" 40 minutes ago Exited (0) 39 minutes ago goofy_khayyam
```
Notice that the container we just restarted has been started in detached mode and has port 8000 published. Also, observe that the status of the container is “Up X seconds”. When you restart a container, it starts with the same flags or commands that it was originally started with.
Now, let's stop and remove all of our containers and take a look at fixing the random naming issue. Stop the container we just started. Find the name of your running container and replace the name in the command below with the name of the container on your system.
```shell
$ docker stop wonderful_kalam
wonderful_kalam
```
Now that all of our containers are stopped, let's remove them. When you remove a container, it is no longer running, nor is it in the stopped status; both the process inside the container and the metadata for the container have been removed.
To remove a container, simply run the `docker rm` command, passing the container name. You can pass multiple container names in a single command. Again, replace the container names in the following command with the container names from your system.
```shell
$ docker rm wonderful_kalam agitated_moser goofy_khayyam
wonderful_kalam
agitated_moser
goofy_khayyam
```
Run the `docker ps --all` command again to see that all containers are removed.
Now, let's address the random naming issue. Standard practice is to name your containers, for the simple reason that it makes it easier to identify what is running in the container and which application or service it is associated with.
To name a container, we just need to pass the `--name` flag to the `docker run` command.
```shell
$ docker run -d -p 8000:8000 --name rest-server python-docker
1aa5d46418a68705c81782a58456a4ccdb56a309cb5e6bd399478d01eaa5cdda
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1aa5d46418a6 python-docker "docker-entrypoint.s…" 3 seconds ago Up 3 seconds 0.0.0.0:8000->8000/tcp rest-server
```
That's better! We can now easily identify our container based on the name.
## Next steps
In this module, we took a look at running containers, publishing ports, and running containers in detached mode. We also took a look at managing containers by starting, stopping, and restarting them, and at naming containers so they are more easily identifiable. In the next module, we'll learn how to run a database in a container and connect it to our application. See:
[How to develop your application](develop.md){: .button .outline-btn}
## Feedback
Help us improve this topic by providing your feedback. Let us know what you think by creating an issue in the [Docker Docs ](https://github.com/docker/docker.github.io/issues/new?title=[Python%20docs%20feedback]){:target="_blank" rel="noopener" class="_"} GitHub repository. Alternatively, [create a PR](https://github.com/docker/docker.github.io/pulls){:target="_blank" rel="noopener" class="_"} to suggest updates.
<br />