engine: restructure multi-service container page

Signed-off-by: David Karlsson <david.karlsson@docker.com>
This commit is contained in:
David Karlsson 2023-05-11 16:54:23 +02:00
parent 7079dc4572
commit a2d22f16e6
1 changed files with 85 additions and 79 deletions


@@ -10,8 +10,8 @@ title: Run multiple services in a container
---
A container's main running process is the `ENTRYPOINT` and/or `CMD` at the
end of the `Dockerfile`. It's best practice to separate areas of concern by
using one service per container. That service may fork into multiple
processes (for example, Apache web server starts multiple worker processes).
It's ok to have multiple processes, but to get the most benefit out of Docker,
avoid one container being responsible for multiple aspects of your overall
@@ -28,91 +28,97 @@ container exits. Handling such processes this way is superior to using a
full-fledged init process such as `sysvinit`, `upstart`, or `systemd` to handle
process lifecycle within your container.

If you need to run more than one service within a container, you can achieve
this in a few different ways.

## Use a wrapper script

Put all of your commands in a wrapper script, complete with testing and
debugging information. Run the wrapper script as your `CMD`. This is a very
naive example. First, the wrapper script:

```bash
#!/bin/bash

# Start the first process
./my_first_process &

# Start the second process
./my_second_process &

# Wait for any process to exit
wait -n

# Exit with status of process that exited first
exit $?
```

Next, the Dockerfile:

```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:latest
COPY my_first_process my_first_process
COPY my_second_process my_second_process
COPY my_wrapper_script.sh my_wrapper_script.sh
CMD ./my_wrapper_script.sh
```
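The `wait -n` behavior that this wrapper relies on can be tried locally with stand-in processes (the `sleep` commands below are hypothetical substitutes for `my_first_process` and `my_second_process`, not part of the page's example):

```bash
#!/bin/bash

# Stand-ins for the two services, with different lifetimes
sleep 0.1 &    # exits quickly, like a service that finishes or crashes
sleep 10 &     # keeps running

# wait -n (bash 4.3+) returns as soon as any background job exits,
# and its status is that job's exit status
wait -n
echo "first exit status: $?"

# Stop the longer-lived stand-in so the script itself can exit
kill %2
```

Because the wrapper exits as soon as either process does, the container exits with it; pairing this with a restart policy such as `docker run --restart=always` lets Docker bring the whole container back up.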
## Use Bash job controls

If you have one main process that needs to start first and stay running but
you temporarily need to run some other processes (perhaps to interact with
the main process) then you can use bash's job control to facilitate that.
First, the wrapper script:

```bash
#!/bin/bash

# turn on bash's job control
set -m

# Start the primary process and put it in the background
./my_main_process &

# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns

# now we bring the primary process back into the foreground
# and leave it there
fg %1
```

```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:latest
COPY my_main_process my_main_process
COPY my_helper_process my_helper_process
COPY my_wrapper_script.sh my_wrapper_script.sh
CMD ./my_wrapper_script.sh
```
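The same job-control mechanics can be tried outside a container with stand-in commands (`sleep` as a hypothetical main process, `echo` as the helper; neither is part of the page's example):

```bash
#!/bin/bash

# turn on bash's job control
set -m

# stand-in for the main process, started in the background
sleep 0.2 &

# stand-in for the helper process; it runs while the main process is backgrounded
echo "helper ran"

# bring the main process back to the foreground; the script blocks until it
# exits, and the script's exit status then follows the main process
fg %1
```

Note that `fg` echoes the job's command line to stdout before waiting on it, which is harmless in a container log.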
## Use a process manager

Use a process manager like `supervisord`. This is a moderately heavyweight
approach that requires you to package `supervisord` and its configuration in
your image (or base your image on one that includes `supervisord`), along with
the different applications it manages. Then you start `supervisord`, which
manages your processes for you. The following example Dockerfile uses this
approach, and assumes that pre-written `supervisord.conf`, `my_first_process`,
and `my_second_process` files all exist in the same directory as the
Dockerfile.
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
```
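For reference, a minimal sketch of what the assumed `supervisord.conf` might contain; the program names and paths here are hypothetical and must match what your image actually contains. `nodaemon=true` keeps `supervisord` in the foreground so the container keeps running:

```ini
[supervisord]
; run in the foreground so the container's main process stays alive
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:my_first_process]
command=/my_first_process
autorestart=true

[program:my_second_process]
command=/my_second_process
autorestart=true
```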